| column | dtype |
|---|---|
| url | stringlengths 62–66 |
| repository_url | stringclasses 1 value |
| labels_url | stringlengths 76–80 |
| comments_url | stringlengths 71–75 |
| events_url | stringlengths 69–73 |
| html_url | stringlengths 50–56 |
| id | int64 377M–2.15B |
| node_id | stringlengths 18–32 |
| number | int64 1–29.2k |
| title | stringlengths 1–487 |
| user | dict |
| labels | list |
| state | stringclasses 2 values |
| locked | bool 2 classes |
| assignee | dict |
| assignees | list |
| comments | list |
| created_at | int64 1.54k–1.71k |
| updated_at | int64 1.54k–1.71k |
| closed_at | int64 1.54k–1.71k ⌀ |
| author_association | stringclasses 4 values |
| active_lock_reason | stringclasses 2 values |
| body | stringlengths 0–234k ⌀ |
| reactions | dict |
| timeline_url | stringlengths 71–75 |
| state_reason | stringclasses 3 values |
| draft | bool 2 classes |
| pull_request | dict |
https://api.github.com/repos/huggingface/transformers/issues/21487
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21487/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21487/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21487/events
|
https://github.com/huggingface/transformers/pull/21487
| 1,573,922,802
|
PR_kwDOCUB6oc5JaRLJ
| 21,487
|
[`Doc`] Fix int8 docs
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Since the `0.37.0` release of `bitsandbytes`, all GPU architectures should support int8 matrix multiplication. This PR clarifies this in the documentation.
cc @sgugger
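For reference, int8 loading then works on any GPU like this (a minimal sketch, assuming `bitsandbytes>=0.37.0` and `accelerate` are installed; the checkpoint is just an example):
```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7",  # any causal LM checkpoint works
    device_map="auto",       # let accelerate place the weights
    load_in_8bit=True,       # quantize linear layers to int8 via bitsandbytes
)
```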
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21487/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21487",
"html_url": "https://github.com/huggingface/transformers/pull/21487",
"diff_url": "https://github.com/huggingface/transformers/pull/21487.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21487.patch",
"merged_at": 1675778967000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21486
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21486/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21486/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21486/events
|
https://github.com/huggingface/transformers/pull/21486
| 1,573,891,655
|
PR_kwDOCUB6oc5JaKlq
| 21,486
|
[Tests] Improve flax test_attention_outputs
|
{
"login": "Shubhamai",
"id": 51819922,
"node_id": "MDQ6VXNlcjUxODE5OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/51819922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhamai",
"html_url": "https://github.com/Shubhamai",
"followers_url": "https://api.github.com/users/Shubhamai/followers",
"following_url": "https://api.github.com/users/Shubhamai/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhamai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhamai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhamai/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhamai/orgs",
"repos_url": "https://api.github.com/users/Shubhamai/repos",
"events_url": "https://api.github.com/users/Shubhamai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhamai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
A copy of https://github.com/huggingface/transformers/pull/20701 by [NielsRogge](https://github.com/NielsRogge), making the corresponding changes in Flax. These changes are also necessary for passing the tests in the [Flax ConvNext implementation PR](https://github.com/huggingface/transformers/pull/21485).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21486/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21486",
"html_url": "https://github.com/huggingface/transformers/pull/21486",
"diff_url": "https://github.com/huggingface/transformers/pull/21486.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21486.patch",
"merged_at": 1676046710000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21485
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21485/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21485/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21485/events
|
https://github.com/huggingface/transformers/pull/21485
| 1,573,890,709
|
PR_kwDOCUB6oc5JaKag
| 21,485
|
Convnext flax
|
{
"login": "Shubhamai",
"id": 51819922,
"node_id": "MDQ6VXNlcjUxODE5OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/51819922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhamai",
"html_url": "https://github.com/Shubhamai",
"followers_url": "https://api.github.com/users/Shubhamai/followers",
"following_url": "https://api.github.com/users/Shubhamai/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhamai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhamai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhamai/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhamai/orgs",
"repos_url": "https://api.github.com/users/Shubhamai/repos",
"events_url": "https://api.github.com/users/Shubhamai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhamai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21485). All of your documentation changes will be reflected on that endpoint.",
"@sanchit-gandhi this PR is also ready for your review in case it was missed. And again, thank you so much for taking the time to review the PR.",
"@sanchit-gandhi Reminder incase my previous message got missed! Also the https://github.com/huggingface/transformers/pull/21472 ( previous reviews implemented ) and https://github.com/huggingface/transformers/pull/21867 PR are awaiting review. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,686
| 1,686
|
CONTRIBUTOR
| null |
# Flax Implementation of `facebook/convnext-tiny-224`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Flax: @sanchit-gandhi
## TODO
Last updated: 10 Feb, 2023
- [x] Fix the failing tests in the `ci/circleci: tests_flax` checks.
- [ ] Upload the [Shubhamai/convnext-tiny-224](https://huggingface.co/Shubhamai/convnext-tiny-224) flax weights to [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224).
- [x] Depends on the merge of the [_Improve flax test_attention_outputs_ PR](https://github.com/huggingface/transformers/pull/21486) to pass (or, technically, skip) the `test_attention_outputs` test.
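Expected usage once merged would look roughly like this (an illustrative sketch; `FlaxConvNextForImageClassification` is the class name proposed by this PR and is not in any release, and `from_pt=True` converts the PyTorch weights until Flax weights are uploaded):
```
import jax.numpy as jnp
import requests
from PIL import Image
from transformers import AutoImageProcessor, FlaxConvNextForImageClassification  # class name assumed from this PR

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = FlaxConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224", from_pt=True)

inputs = processor(images=image, return_tensors="np")
logits = model(**inputs).logits
print(model.config.id2label[int(jnp.argmax(logits, axis=-1)[0])])
```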
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21485/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21485",
"html_url": "https://github.com/huggingface/transformers/pull/21485",
"diff_url": "https://github.com/huggingface/transformers/pull/21485.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21485.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21484
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21484/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21484/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21484/events
|
https://github.com/huggingface/transformers/issues/21484
| 1,573,538,960
|
I_kwDOCUB6oc5dykyQ
| 21,484
|
Add/fix documentation around VideoMAEForPretraining's `bool_masked_pos` argument
|
{
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for raising this issue!\r\n\r\nVideoMAE indeed uses the same mask ratio (number of masked patches) per video to make batching possible. See [this class](https://github.com/MCG-NJU/VideoMAE/blob/main/masking_generator.py) which the authors use to generate boolean masks.\r\n\r\nIn the tests, I just define the same mask for all examples in the batch, but in practice one would use different masks (but with the same mask ratio) in a batch.",
"@NielsRogge can you add the docstring for the param, please? :) I think you'd be best person to write it.\r\n\r\nAs for fixing the example...maybe we write a quick function and include it in the snippet.",
"Thanks for raising @nateraw ! We should definitely update the docstring and example snippet.\r\n\r\nI'm not sure `bool_mask_pos` should be generated in the image processor. The image processor takes the images and makes them ready to be passed into the model, however it doesn't handle other transformations which might be part of the training procedure e.g. augmentation. For similar tasks, where input tokens are randomly masked, this in handled in e.g. `DataCollatorForLanguageModeling`, rather than the tokenizer. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Reopening as it's not resolved yet"
] | 1,675
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
### System Info
N/A
### Who can help?
@NielsRogge @amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The `VideoMAEForPreTraining` class is missing a docstring for an important argument to the `forward` function, `bool_masked_pos`. You can see this in the docs for the `main` branch [here](https://huggingface.co/docs/transformers/main/en/model_doc/videomae#transformers.VideoMAEForPreTraining.forward): it's listed in the function signature but not among the documented arguments.
There is an [example code snippet](https://huggingface.co/docs/transformers/main/en/model_doc/videomae#transformers.VideoMAEForPreTraining.forward.example) in the docs that creates this input tensor, but I'm not sure it's correct for video inputs with `batch_size > 1` (at least not if applied carelessly). When I computed it example by example, I was getting errors, which is what led me to this issue.
I dug a little deeper and noticed that the test suite for VideoMAE has [different logic](https://github.com/huggingface/transformers/blob/5b49376202863d3798d2ff8a8ba61590542a1141/tests/models/videomae/test_modeling_videomae.py#L145-L149) for creating `bool_masked_pos`. When I updated my training code to use that logic, my problems went away. I assume this is related to the note in the tests that mentions each video needing the same number of masked patches.
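That logic boils down to giving every video its own random mask while keeping the number of masked patches fixed. A rough sketch (the helper name is mine, not a transformers API):
```
import torch

def random_bool_masked_pos(batch_size: int, seq_length: int, mask_ratio: float = 0.9) -> torch.Tensor:
    # Every video gets its own random mask, but all masks hide the SAME
    # number of patches, so the batch can be stacked without shape errors.
    num_masked = int(mask_ratio * seq_length)
    noise = torch.rand(batch_size, seq_length)  # one random permutation per example
    ids_shuffle = noise.argsort(dim=1)
    bool_masked_pos = torch.zeros(batch_size, seq_length, dtype=torch.bool)
    bool_masked_pos.scatter_(1, ids_shuffle[:, :num_masked], True)
    return bool_masked_pos
```
For a VideoMAE base checkpoint, `seq_length` is `(num_frames // tubelet_size) * (image_size // patch_size) ** 2`, e.g. `(16 // 2) * (224 // 16) ** 2 = 1568`.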
### Expected behavior
To resolve this issue, I think we should:
1. Add a docstring for `bool_masked_pos` that explains to users what it is for.
2. Update the example snippet to use the correct logic from the test suite.
Even better, I guess, would be to handle this for users in the `ImageProcessor`, but I'll leave that for a separate issue.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21484/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21483
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21483/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21483/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21483/events
|
https://github.com/huggingface/transformers/pull/21483
| 1,573,488,369
|
PR_kwDOCUB6oc5JY1cN
| 21,483
|
[tokenizer] sanitize saved config
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
This PR fixes the tokenizer's `save_pretrained` to remove the `name_or_path` entry from `tokenizer_config.json`, because:
1. It usually contains the local path that was used to save the model, which is not only invalid once the model is published on the Hub, but could also reveal personal information.
2. It is not used anywhere, since one needs to know `name_or_path` before this file can be loaded at all.
It also adjusts one tokenizer test so that it no longer checks for the `name_or_path` entry.
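The idea, as a standalone sketch (the function name here is illustrative, not the actual diff):
```
import json

def sanitize_tokenizer_config(config: dict) -> dict:
    # Copy so the live tokenizer object keeps its own attributes.
    config = dict(config)
    # The local path is meaningless on the Hub and may leak user info.
    config.pop("name_or_path", None)
    return config

cleaned = sanitize_tokenizer_config({"name_or_path": "/home/me/ckpt", "model_max_length": 512})
print(json.dumps(cleaned))  # {"model_max_length": 512}
```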
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21483/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21483",
"html_url": "https://github.com/huggingface/transformers/pull/21483",
"diff_url": "https://github.com/huggingface/transformers/pull/21483.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21483.patch",
"merged_at": 1675795906000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21482
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21482/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21482/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21482/events
|
https://github.com/huggingface/transformers/issues/21482
| 1,573,418,145
|
I_kwDOCUB6oc5dyHSh
| 21,482
|
Running Trainer.train() with deepspeed throws OSError: handle is closed error when saving checkpoint
|
{
"login": "benproton",
"id": 35586465,
"node_id": "MDQ6VXNlcjM1NTg2NDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/35586465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benproton",
"html_url": "https://github.com/benproton",
"followers_url": "https://api.github.com/users/benproton/followers",
"following_url": "https://api.github.com/users/benproton/following{/other_user}",
"gists_url": "https://api.github.com/users/benproton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benproton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benproton/subscriptions",
"organizations_url": "https://api.github.com/users/benproton/orgs",
"repos_url": "https://api.github.com/users/benproton/repos",
"events_url": "https://api.github.com/users/benproton/events{/privacy}",
"received_events_url": "https://api.github.com/users/benproton/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"thank you for the detailed report, @benproton \r\n\r\nAs you may have derived from the traceback this has nothing to do with deepspeed\r\n\r\nYou have an issue inside `wandb`, which is a 3rd party package, you can either remove it:\r\n\r\n```\r\npip uninstall wandb\r\n```\r\n\r\nor a better long term solution - in your command line add `--report_to none` which will disable wandb (or any other reporting package you happened to have installed in your environment)\r\n\r\nPlease try again and let me know if it fixes the problem.",
"Hey! Thanks so much for the quick reply.\r\n\r\nHmm, still exits at the point of the checkpoint, just not with the error I mentioned:\r\n\r\n```\r\n{'loss': 6.8437, 'learning_rate': 5e-05, 'epoch': 0.01} \r\n 0%|▎ | 500/224238 [51:15<381:53:08, 6.14s/it][INFO|trainer.py:2753] 2023-02-06 16:37:12,461 >> Saving model checkpoint to bennyD/checkpoint-500\r\n[INFO|configuration_utils.py:453] 2023-02-06 16:37:12,462 >> Configuration saved in bennyD/checkpoint-500/config.json\r\n[INFO|configuration_utils.py:359] 2023-02-06 16:37:12,464 >> Configuration saved in bennyD/checkpoint-500/generation_config.json\r\n[INFO|modeling_utils.py:1720] 2023-02-06 16:37:12,809 >> Model weights saved in bennyD/checkpoint-500/pytorch_model.bin\r\n[2023-02-06 16:37:18,583] [INFO] [engine.py:3500:save_16bit_model] Saving model weights to bennyD/checkpoint-500/pytorch_model.bin\r\n[2023-02-06 16:37:18,583] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/pytorch_model.bin...\r\n[2023-02-06 16:37:31,072] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/pytorch_model.bin.\r\n[2023-02-06 16:37:31,187] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step500 is begin to save!\r\n/home/horza/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1365: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/home/horza/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1365: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-02-06 16:37:31,225] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-02-06 16:37:31,225] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-02-06 16:37:31,841] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-02-06 16:37:31,843] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-02-06 16:37:38,871] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 442827\r\n[2023-02-06 16:37:38,875] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 442828\r\n[2023-02-06 16:37:45,767] [ERROR] [launch.py:324:sigkill_handler] ['/usr/bin/python3', '-u', 'examples/pytorch/translation/run-text-gen.py', '--local_rank=1', '--deepspeed', 'tests/deepspeed/ds_config_zero3.json', '--model_name_or_path', 'EleutherAI/gpt-neo-1.3B', '--output_dir=bennyD', '--evaluation_strategy', 'epoch', '--num_train_epochs', '3', '--dataset_name', 'wikitext', '--dataset_config', 'wikitext-2-raw-v1', '--report_to', 'none'] exits with return code = -9\r\n```\r\n\r\nThis is with the following command: `deepspeed examples/pytorch/translation/run-text-gen.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path EleutherAI/gpt-neo-1.3B --output_dir=bennyD --evaluation_strategy epoch --num_train_epochs 3 --dataset_name wikitext --dataset_config \"wikitext-2-raw-v1\" --report_to none`",
"I don't see any traceback there. \r\n\r\nThis often happens when you run out of cpu memory.\r\n\r\nAs it happens during saving the checkpoint, does the problem go away if you set ` \"stage3_gather_16bit_weights_on_model_save\": true` to `false`?",
"Dude! That worked, thanks so much, would never have got that. Logs:\r\n\r\n 0%|▎ | 500/224238 [53:12<396:24:51, 6.38s/it][WARNING|trainer.py:2707] 2023-02-06 18:39:45,438 >> deepspeed.save_16bit_model didn't save the model, since stage3_gather_16bit_weights_on_model_save=false. Saving the full checkpoint instead, use zero_to_fp32.py to recover weights\r\n[INFO|trainer.py:2753] 2023-02-06 18:39:45,439 >> Saving model checkpoint to bennyD/checkpoint-500\r\n[INFO|configuration_utils.py:453] 2023-02-06 18:39:45,440 >> Configuration saved in bennyD/checkpoint-500/config.json\r\n[INFO|configuration_utils.py:359] 2023-02-06 18:39:45,442 >> Configuration saved in bennyD/checkpoint-500/generation_config.json\r\n[INFO|modeling_utils.py:1720] 2023-02-06 18:39:45,795 >> Model weights saved in bennyD/checkpoint-500/pytorch_model.bin\r\n[2023-02-06 18:39:45,825] [INFO] [engine.py:3491:save_16bit_model] Did not save the model bennyD/checkpoint-500/pytorch_model.bin because `stage3_gather_16bit_weights_on_model_save` is False\r\n[WARNING|trainer.py:2707] 2023-02-06 18:39:45,825 >> deepspeed.save_16bit_model didn't save the model, since stage3_gather_16bit_weights_on_model_save=false. Saving the full checkpoint instead, use zero_to_fp32.py to recover weights\r\n[2023-02-06 18:39:45,865] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step500 is begin to save!\r\n/home/horza/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1365: UserWarning: Positional args are being deprecated, use kwargs instead. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n/home/horza/.local/lib/python3.10/site-packages/torch/nn/modules/module.py:1365: UserWarning: Positional args are being deprecated, use kwargs instead. 
Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.\r\n warnings.warn(\r\n[2023-02-06 18:39:45,873] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-02-06 18:39:45,873] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-02-06 18:39:46,413] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-02-06 18:39:46,414] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-02-06 18:40:37,554] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-02-06 18:40:37,560] [INFO] [engine.py:3397:_save_zero_checkpoint] zero checkpoint saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-02-06 18:40:37,615] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step500 is ready now!\r\n[2023-02-06 18:40:37,656] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step500 is begin to save!\r\n[2023-02-06 18:40:37,679] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt\r\n[2023-02-06 18:40:37,679] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt...\r\n[2023-02-06 18:40:38,307] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_model_states.pt.\r\n[2023-02-06 18:40:38,310] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt...\r\n[2023-02-06 18:41:19,334] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt.\r\n[2023-02-06 18:41:19,341] [INFO] [engine.py:3397:_save_zero_checkpoint] zero checkpoint saved bennyD/checkpoint-500/global_step500/zero_pp_rank_0_mp_rank_00_optim_states.pt\r\n[2023-02-06 18:41:19,443] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step500 is ready now!\r\n 0%|▎ | 512/224238\r\n\r\nSo what does that do and what is the impact of setting it to false? Thanks again\r\n",
"Excellent. It's because it tries to gather the model on cpu and you don't have enough cpu memory to do that. But you don't need to gather the model on cpu.\r\n\r\nYou can read here about the cost of using `stage3_gather_16bit_weights_on_model_save` and more importantly what you need to know if you're not using it. \r\nhttps://huggingface.co/docs/transformers/main/main_classes/deepspeed#getting-the-model-weights-out\r\nIn particular please make sure to read all the way through to and including `Offline FP32 Weights Recovery` - which you will have to use when you finished training.\r\n\r\nYou may close the Issue if you're satisfied, @benproton \r\n\r\nIf you run into new issues please always open a new Issue. Thank you.",
"Ok thanks. Is that because I'm offloading to cpu? If I choose not to do that, will that prevent the issue?",
"indeed. the offloading takes a lot of space on cpu.",
"Last question then I'll close. Can we therefore assume that the reason I was able to run https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py with checkpoints successfully - without any errors - is because that script isn't as intensive on the cpu? Thanks",
"It's hard to tell, as they are different programs. It's possible that with one program itself you were using more memory than the other \r\n\r\nit's very easy to tell though, just add `--skip_memory_metrics 0`, run a few steps and it'll print you full stats on memory usage - so you can compare the 2 programs. do not use this in production since it adds an overhead.\r\n\r\nIn general if you were able to start training you should be able to continue training w/o cpu memory oom events. This is one exception where due to `zero.Init` when the model gets inited it loads the model directly onto the gpu, so your CPU memory can be actually quite small (smaller than gpu) and it'll still work. However if a user chooses to save the full model they have to consolidate it first on cpu and that's where there might not be enough of memory. That setting is set to `True` by default to make it easy for users to start right of the box. As they learn the ropes they will then discover more efficient ways of doing things.\r\n\r\nAlso unrelated to your questions: If you have plenty of free gpu memory you may want to consider turning offloading off for one or both config entries and even switch to zero stage 2. Each of these will use more gpu memory but will make your training faster. Measure the different options and see which one gives you the fastest training. Again all the stats are printed at the end of each training.",
"That's all incredibly helpful, thanks so much. I think the main culprit was wandb, disabling that stopped the errors. I just tried turning off cpu offloading altogether and training is now running much faster as you anticipated and the checkpoint saving is still working. I have a good amount of GPU memory across 2 x GPUs (48GB total) and I've been attempting to run larger models across multiple GPUs as the previous code I was using was hindered by relying on the capabilities of a single GPU, so from what I've learned from the docs, zero stage 3 for sure seems the way to go for this, correct? Goal was to prove I can achieve this before investing in more GPUs so mission accomplished! Again thanks so much for all of your help.",
"You're welcome, @benproton. I'm glad your goal has been reached without spending additional $$.\r\n\r\nAnd zero stage 2 is even faster than stage 3 if you have enough gpu memory to not need to shard model weights.\r\n\r\nAlso enabling `--gradient_checkpointing 1` will use less gpu memory at the cost of 20% slowdown, but which would enable a larger batchsize or a switch to stage 2, so the overall training will be faster.\r\n\r\nSpend some time experimenting with different knobs and you should be able to get an even faster training.",
"Typically the optimal approach would be along these steps:\r\n\r\n1. enable `--gradient_checkpointing 1` if oom then\r\n2. try zero stage 2 first - if oom then\r\n3. switch to zero 3 - if oom then\r\n4. enable `offload_param` to `cpu` - if oom then\r\n5. enable `offload_optimizer` to `cpu` - if oom\r\n6. repeat all of the above with bs=1 (if it wasn't 1 already) and if possible shorter seq-len - if using `generate` use smaller beam search, etc. or alternatively always start with `bs=1` and instead progress from there.\r\n7. obviously use mixed half-precision over fp32 - so bf16 on ampere and fp16 on earlier gpus\r\n\r\nremember you have `--gradient_accumulation_steps=XXX` to get whatever effective batch size you need regardless of your gpu size and `--per_device_train_batch_size`",
"All super helpful pointers thanks again ",
"@stas00 I've been experimenting and everything is working great when using a hugging face dataset such as the example I gave. However, whenever I try using the bittensor dataset the program always just hangs early on, either while training or while evaluating with nothing obvious appearing in the logs. Any ideas? Is there anything I can do to determine what is causing the hanging? Thanks.\r\n\r\nE.g.: `Time to load utils op: 0.00036215782165527344 seconds\r\n[INFO|trainer.py:1516] 2023-02-09 22:55:56,474 >> ***** Running training *****\r\n[INFO|trainer.py:1517] 2023-02-09 22:55:56,474 >> Num examples = 39291\r\n[INFO|trainer.py:1518] 2023-02-09 22:55:56,474 >> Num Epochs = 4\r\n[INFO|trainer.py:1519] 2023-02-09 22:55:56,474 >> Instantaneous batch size per device = 8\r\n[INFO|trainer.py:1520] 2023-02-09 22:55:56,474 >> Total train batch size (w. parallel, distributed & accumulation) = 16\r\n[INFO|trainer.py:1521] 2023-02-09 22:55:56,474 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1522] 2023-02-09 22:55:56,474 >> Total optimization steps = 9824\r\n[INFO|integrations.py:579] 2023-02-09 22:55:56,994 >> Automatic Weights & Biases logging enabled, to disable set os.environ[\"WANDB_DISABLED\"] = \"true\"\r\n 0%| | 0/9824 [00:00<?, ?it/s][2023-02-09 22:56:02,149] [WARNING] [stage3.py:1939:step] 1 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding torch.cuda.empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time\r\n 1%|▊ | 50/9824 [01:54<6:00:31, 2.21s/it][INFO|trainer.py:2753] 2023-02-09 22:57:52,401 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2755] 2023-02-09 22:57:52,401 >> Num examples = 1034\r\n[INFO|trainer.py:2758] 2023-02-09 22:57:52,401 >> Batch size = 8\r\n\r\n 49%|█████████████████████████████████████████████████████████████████████████████████▋ | 32/65 [00:31<00:32, 1.01it/s]\r\n`",
"yes, and I will reply once you open a new Issue and fully document the Issue.\r\n\r\nI will give you a quick pointer: https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-hanging-solutions.md but we won't continue this discussion in this Issue. \r\n\r\nThis issue has been resolved and closed for good. New problems require new Issues.\r\n\r\nthank you.",
"> yes, and I will reply once you open a new Issue and fully document the Issue.\r\n> \r\n> I will give you a quick pointer: https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-hanging-solutions.md but we won't continue this discussion in this Issue.\r\n> \r\n> This issue has been resolved and closed for good. New problems require new Issues.\r\n> \r\n> thank you.\r\n\r\nDone, thank you @stas00 "
] | 1,675
| 1,676
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, via deepspeed
- Using distributed or parallel set-up in script?: yes, via deepspeed
### Who can help?
@stas00, @pacman100
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I've been trying to use the Trainer with deepspeed, following this guide: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/deepspeed#trainer-deepspeed-integration
Below is my Python code:
```
#!/usr/bin/env python
# coding=utf-8
# Copyright The HuggingFace Team and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for sequence to sequence.
"""
# You can also adapt this script on your own sequence to sequence task. Pointers for this are left as comments.
import logging
import os
import sys
from dataclasses import dataclass, field
from typing import Optional
import datasets
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset
import evaluate
import transformers
from transformers import (
    AutoConfig,
    AutoTokenizer,
    HfArgumentParser,
    M2M100Tokenizer,
    MBart50Tokenizer,
    MBart50TokenizerFast,
    MBartTokenizer,
    MBartTokenizerFast,
    Trainer,
    TrainingArguments,
    AutoModelForCausalLM,
    default_data_collator,
    set_seed,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version
import bittensor
from itertools import chain
from tqdm.auto import tqdm
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.27.0.dev0")
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/translation/requirements.txt")
logger = logging.getLogger(__name__)
# A list of all multilingual tokenizer which require src_lang and tgt_lang attributes.
MULTILINGUAL_TOKENIZERS = [MBartTokenizer, MBartTokenizerFast, MBart50Tokenizer, MBart50TokenizerFast, M2M100Tokenizer]
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None,
metadata={"help": "Where to store the pretrained models downloaded from huggingface.co"},
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
model_revision: str = field(
default="main",
metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
)
use_auth_token: bool = field(
default=False,
metadata={
"help": (
"Will use the token generated when running `huggingface-cli login` (necessary to use this script "
"with private models)."
)
},
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
source_lang: str = field(default=None, metadata={"help": "Source language id for translation."})
target_lang: str = field(default=None, metadata={"help": "Target language id for translation."})
dataset_name: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a jsonlines)."})
validation_file: Optional[str] = field(
default=None,
metadata={
"help": "An optional input evaluation data file to evaluate the metrics (sacrebleu) on a jsonlines file."
},
)
test_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input test data file to evaluate the metrics (sacrebleu) on a jsonlines file."},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
max_source_length: Optional[int] = field(
default=1024,
metadata={
"help": (
"The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
)
},
)
max_target_length: Optional[int] = field(
default=128,
metadata={
"help": (
"The maximum total sequence length for target text after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
)
},
)
val_max_target_length: Optional[int] = field(
default=None,
metadata={
"help": (
"The maximum total sequence length for validation target text after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded. Will default to `max_target_length`."
"This argument is also used to override the ``max_length`` param of ``model.generate``, which is used "
"during ``evaluate`` and ``predict``."
)
},
)
pad_to_max_length: bool = field(
default=False,
metadata={
"help": (
"Whether to pad all samples to model maximum sentence length. "
"If False, will pad the samples dynamically when batching to the maximum length in the batch. More "
"efficient on GPU but very bad for TPU."
)
},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
)
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
)
},
)
max_predict_samples: Optional[int] = field(
default=None,
metadata={
"help": (
"For debugging purposes or quicker training, truncate the number of prediction examples to this "
"value if set."
)
},
)
num_beams: Optional[int] = field(
default=None,
metadata={
"help": (
"Number of beams to use for evaluation. This argument will be passed to ``model.generate``, "
"which is used during ``evaluate`` and ``predict``."
)
},
)
ignore_pad_token_for_loss: bool = field(
default=True,
metadata={
"help": "Whether to ignore the tokens corresponding to padded labels in the loss computation or not."
},
)
source_prefix: Optional[str] = field(
default=None, metadata={"help": "A prefix to add before every source text (useful for T5 models)."}
)
forced_bos_token: Optional[str] = field(
default=None,
metadata={
"help": (
"The token to force as the first generated token after the :obj:`decoder_start_token_id`.Useful for"
" multilingual models like :doc:`mBART <../model_doc/mbart>` where the first generated token needs to"
" be the target language token.(Usually it is the target language token)"
)
},
)
def __post_init__(self):
if self.dataset_name is None and self.train_file is None and self.validation_file is None:
raise ValueError("Need either a dataset name or a training/validation file.")
# accepting both json and jsonl file extensions, as
# many jsonlines files actually have a .json extension
valid_extensions = ["json", "jsonl"]
if self.train_file is not None:
extension = self.train_file.split(".")[-1]
assert extension in valid_extensions, "`train_file` should be a jsonlines file."
if self.validation_file is not None:
extension = self.validation_file.split(".")[-1]
assert extension in valid_extensions, "`validation_file` should be a jsonlines file."
if self.val_max_target_length is None:
self.val_max_target_length = self.max_target_length
def load_raw_datasets(name: str, confName: str) -> DatasetDict:
    if name == "bittensor":
        dataset = bittensor.dataset(
            no_tokenizer=True,
            # batch_size=cfg.training.train_batch_size,
            # block_size=cfg.dataset.block_size,
        )
        dataloader = dataset.dataloader(1000)
        bittensor_dataset = {"text": []}
        for batch in tqdm(dataloader, desc="Loading data from bittensor IPFS"):
            bittensor_dataset["text"].extend(batch)
        raw_datasets = Dataset.from_dict(bittensor_dataset)
        dataset.close()  # Avoid leaving threadqueue running.
        return raw_datasets
    if os.path.exists(name):
        data_files = {"text": name}
        dataset_args = {}
        extension = os.path.splitext(name)[-1].lstrip(".")
        if extension == "txt":
            extension = "text"
            dataset_args["keep_linebreaks"] = True
        raw_datasets = load_dataset(
            extension, data_files=data_files, **dataset_args)
        raw_datasets = raw_datasets["text"]
    else:
        raw_datasets = load_dataset(name, confName)
    return raw_datasets
def load_model_and_tokenizer(model_args: ModelArguments):
    config = AutoConfig.from_pretrained(
        model_args.config_name if model_args.config_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    tokenizer = AutoTokenizer.from_pretrained(
        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
        cache_dir=model_args.cache_dir,
        use_fast=model_args.use_fast_tokenizer,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_args.model_name_or_path,
        from_tf=bool(".ckpt" in model_args.model_name_or_path),
        config=config,
        cache_dir=model_args.cache_dir,
        revision=model_args.model_revision,
        use_auth_token=True if model_args.use_auth_token else None,
    )
    # tokenizer.pad_token = cfg.tokenizer.pad_token
    if tokenizer.pad_token is None and tokenizer.eos_token is not None:
        tokenizer.pad_token = tokenizer.eos_token
    # model = AutoModelForCausalLM.from_pretrained(
    #     name,
    #     from_tf=bool(".ckpt" in name),
    #     config=config,
    # )
    # model.to('cuda')
    # model.resize_token_embeddings(len(tokenizer))
    # We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
    # on a small vocab and want a smaller embedding size, remove this test.
    embedding_size = model.get_input_embeddings().weight.shape[0]
    if len(tokenizer) > embedding_size:
        model.resize_token_embeddings(len(tokenizer))
    return tokenizer, model
def preprocess(blockSize, tokenizer, raw_datasets):
    # First we tokenize all the texts.
    column_names = raw_datasets.column_names
    text_column_name = "text" if "text" in column_names else column_names["train"][0]
    if True is True:
        pad = False
    else:
        pad = "max_length"

    def group_texts(examples):
        # print(examples)
        # Concatenate all texts.
        concatenated_examples = {
            k: list(chain(*examples[k])) for k in examples.keys()}
        # print(concatenated_examples)
        total_length = len(concatenated_examples[list(examples.keys())[0]])
        if total_length >= blockSize:
            total_length = (
                total_length // blockSize
            ) * blockSize
        # Split by chunks of max_len.
        result = {
            k: [
                t[i: i + blockSize]
                for i in range(0, total_length, blockSize)
            ]
            for k, t in concatenated_examples.items()
        }
        result["labels"] = result["input_ids"].copy()
        return result

    def tokenize_fn(examples):
        # result = tokenizer(
        #     examples[text_column_name],
        #     padding=pad,
        #     truncation=True,
        #     max_length=cfg.dataset.block_size,
        # )
        # result["labels"] = result["input_ids"].copy()
        # return result
        return tokenizer(examples[text_column_name])

    tokenized_datasets = raw_datasets.map(
        tokenize_fn,
        batched=True,
        remove_columns=text_column_name,
        load_from_cache_file=not False,
        desc="Running tokenizer on dataset",
    )
    lm_datasets = tokenized_datasets.map(
        group_texts,
        batched=True,
        num_proc=None,
        load_from_cache_file=not False,
        desc=f"Grouping texts in chunks of {blockSize}",
    )
    return lm_datasets
def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.
    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()
    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_translation", model_args, data_args)
    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )
    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    datasets.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()
    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
        + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Training/evaluation parameters {training_args}")
    tokenizer, model = load_model_and_tokenizer(model_args)
    # dataset = load_raw_datasets("bittensor", None)
    dataset = load_raw_datasets("wikitext", "wikitext-2-raw-v1")
    tokenized_datasets = preprocess(2, tokenizer, dataset)
    if "train" not in tokenized_datasets.column_names:
        tokenized_datasets = tokenized_datasets.train_test_split(
            test_size=5 / 100
        )
        tokenized_datasets_test_valid = tokenized_datasets["test"].train_test_split(
            test_size=0.5
        )
        tokenized_datasets["test"] = tokenized_datasets_test_valid["train"]
        tokenized_datasets["validation"] = tokenized_datasets_test_valid["test"]
    train_dataset = tokenized_datasets["train"]
    eval_dataset = tokenized_datasets["validation"]
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        # tokenizer=tokenizer,
        # compute_metrics=compute_metrics,
    )
    trainer.train()


if __name__ == "__main__":
    main()
```
The JSON config I'm using for deepspeed is:
```
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": "auto"
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
```
And the command I'm using is:
`deepspeed examples/pytorch/translation/run-text-gen.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path EleutherAI/gpt-neo-1.3B --output_dir=bennyD --evaluation_strategy epoch --num_train_epochs 2 --dataset_name wikitext --dataset_config "wikitext-2-raw-v1"`
The full stack trace:
```
Exception in thread MsgRouterThr:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/horza/.local/lib/python3.10/site-packages/wandb/sdk/interface/router.py", line 69, in message_loop
    msg = self._read_message()
  File "/home/horza/.local/lib/python3.10/site-packages/wandb/sdk/interface/router_queue.py", line 32, in _read_message
    msg = self._response_queue.get(timeout=1)
  File "/usr/lib/python3.10/multiprocessing/queues.py", line 117, in get
    res = self._recv_bytes()
  File "/usr/lib/python3.10/multiprocessing/connection.py", line 212, in recv_bytes
    self._check_closed()
  File "/usr/lib/python3.10/multiprocessing/connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
```
It's worth noting that if I run the script used in the guide, https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py, modified to save checkpoints, I do not get the same error.
Additionally, if I add `--save_strategy no` to my command, it completes with no errors. But I need the checkpoints.
Please help; I've been trying to figure this one out for a while.
### Expected behavior
The command runs with checkpoints and completes without errors.
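As worked out in the comments on this issue, the crash during checkpoint saving goes away after flipping a single ZeRO-3 setting, since gathering the fp16 weights on save can exhaust CPU memory when offloading is enabled:
```
"zero_optimization": {
    "stage": 3,
    "stage3_gather_16bit_weights_on_model_save": false
}
```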
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21482/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21481
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21481/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21481/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21481/events
|
https://github.com/huggingface/transformers/pull/21481
| 1,573,367,022
|
PR_kwDOCUB6oc5JYbAC
| 21,481
|
Bump oauthlib from 3.2.1 to 3.2.2 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
Bumps [oauthlib](https://github.com/oauthlib/oauthlib) from 3.2.1 to 3.2.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/oauthlib/oauthlib/releases">oauthlib's releases</a>.</em></p>
<blockquote>
<h2>3.2.2</h2>
<h2>OAuth2.0 Provider:</h2>
<ul>
<li>CVE-2022-36087</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/oauthlib/oauthlib/blob/master/CHANGELOG.rst">oauthlib's changelog</a>.</em></p>
<blockquote>
<h2>3.2.2 (2022-10-17)</h2>
<p>OAuth2.0 Provider:</p>
<ul>
<li>CVE-2022-36087</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/oauthlib/oauthlib/commit/e6c33e41a8ce6dadff387cdc4613a55b63d1827e"><code>e6c33e4</code></a> Add 3.2.2 version</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/4a4d65f8eeecfe7d778269466871c5c15fe9c1bc"><code>4a4d65f</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/832">#832</a> from oauthlib/3.2.1</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/2e40b412c844ecc4673c3fa3f72181f228bdbacd"><code>2e40b41</code></a> Merge pull request from GHSA-3pgj-pg6c-r5p7</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/b4bdd09c56aa5dedb475529e75ce73c092ca0898"><code>b4bdd09</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/818">#818</a> from dasm/master</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/5d85c61998692643dd9d17e05d2646e06ce391e8"><code>5d85c61</code></a> Fix IPV6 regex used to check redirect_uri</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/e514826eea15f2b62bbc13da407b71552ef5ff4c"><code>e514826</code></a> Add check of performance of ipv6 check</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/9aa45aaff0cdeab258d18c025cf66e9bdba529c0"><code>9aa45aa</code></a> Restored test for port 0.</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/f52f641d763e4958d108e875e0cd6fca50d110f2"><code>f52f641</code></a> Merge branch 'oauthlib:master' into master</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/ed0cb63945c4a5940b185823809693b7f97989ad"><code>ed0cb63</code></a> Removed unused query and fragment</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/d05c388078b45285ac4a012c568a5e2d56556a34"><code>d05c388</code></a> Removed dependency on split</li>
<li>Additional commits viewable in <a href="https://github.com/oauthlib/oauthlib/compare/v3.2.1...v3.2.2">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21481/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21481",
"html_url": "https://github.com/huggingface/transformers/pull/21481",
"diff_url": "https://github.com/huggingface/transformers/pull/21481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21481.patch",
"merged_at": 1675726034000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21480
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21480/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21480/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21480/events
|
https://github.com/huggingface/transformers/pull/21480
| 1,573,172,316
|
PR_kwDOCUB6oc5JXwJ4
| 21,480
|
Update quality tooling for formatting
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
This updates the quality tools for 2023. Mainly:
- use black 23
- replace isort and flake8 with ruff, which is faster and, unlike isort, merges imports
- change a few import-sorting rules
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21480/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21480",
"html_url": "https://github.com/huggingface/transformers/pull/21480",
"diff_url": "https://github.com/huggingface/transformers/pull/21480.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21480.patch",
"merged_at": 1675725056000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21479
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21479/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21479/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21479/events
|
https://github.com/huggingface/transformers/pull/21479
| 1,573,171,214
|
PR_kwDOCUB6oc5JXv6u
| 21,479
|
[`pipeline`] A simple fix for half-precision & 8bit models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the well thought out issue and proposed fix. \n\nI don't particularly like the fix because it depends on some weird internal and still forces users to use `device_map` and `device` iiuc.\n\nCouldn't we just use `device_map` and use `accelerate` api to figure out where to put the inputs? (most likely cuda:0 but still cpu if no gpus are available I think.)\nThat or just do something special for `device_map` without asking where the model is (if the API doesn't exist or is tricky).\n\nImo using `device_map` and `device` should be an error (ambiguous intent) ",
"Thanks for the feedback @Narsil !\r\nI think \r\n> Imo using device_map and device should be an error (ambiguous intent)\r\n\r\nMakes sense !\r\nAnother fix would be to force-upcast the logits in fp32 when doing top_k & top_p sampling on the generation side only if the logits are on `cpu`, is this solution a reasonable fix @gante ? Happy to open a PR to fix it!",
"> force-upcast \r\n\r\nI would highly advise against it too. There's a limit to magic. Doing half precision on cpu should crash in a lot of places. We shouldn't upcast on behalf of a user that explicitely asked for half precision imo. That's breaking user intent.\r\nBut the user also asked for GPU, that's where we're breaking his intent and that's what should be fixed IMO.\r\n\r\nDoes `accelerate` allow to know on which device is the start of the model ?",
"I see, makes sense! \r\n\r\n> Does accelerate allow to know on which device is the start of the model ?\r\n\r\nI am not sure here, maybe @sgugger & @muellerzr knows better",
"> I am not sure here, maybe @sgugger & @muellerzr knows better\r\n\r\nif not the pipeline could have the simplest heuristic `'cuda:0' if torch.cuda.is_avalaible() else 'cpu'` which should work 99% of the time. \r\nBut it wouldn't if a user specified an odd map (which is why having direct access would be better).",
"What do you think:\r\n\r\n```diff\r\ndiff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py\r\nindex 30402b36e..e698f1aa3 100644\r\n--- a/src/transformers/pipelines/base.py\r\n+++ b/src/transformers/pipelines/base.py\r\n@@ -749,7 +749,7 @@ class Pipeline(_ScikitCompat):\r\n framework: Optional[str] = None,\r\n task: str = \"\",\r\n args_parser: ArgumentHandler = None,\r\n- device: Union[int, str, \"torch.device\"] = -1,\r\n+ device: Union[int, str, \"torch.device\"] = None,\r\n torch_dtype: Optional[Union[str, \"torch.dtype\"]] = None,\r\n binary_output: bool = False,\r\n **kwargs,\r\n@@ -764,6 +764,20 @@ class Pipeline(_ScikitCompat):\r\n self.image_processor = image_processor\r\n self.modelcard = modelcard\r\n self.framework = framework\r\n+\r\n+ # Special handling\r\n+ if self.framework == \"pt\" and device is not None:\r\n+ self.model = self.model.to(device=device)\r\n+\r\n+ if device is None:\r\n+ # `accelerate` device map\r\n+ hf_device_map = getattr(self.model, \"hf_device_map\", None)\r\n+ if hf_device_map is not None:\r\n+ # Take the first device used by `accelerate`.\r\n+ device = next(iter(hf_device_map.values()))\r\n+ else:\r\n+ device = -1\r\n+\r\n if is_torch_available() and self.framework == \"pt\":\r\n if isinstance(device, torch.device):\r\n self.device = device\r\n@@ -775,13 +789,10 @@ class Pipeline(_ScikitCompat):\r\n self.device = torch.device(f\"cuda:{device}\")\r\n else:\r\n self.device = device\r\n+\r\n self.torch_dtype = torch_dtype\r\n self.binary_output = binary_output\r\n\r\n- # Special handling\r\n- if self.framework == \"pt\" and self.device.type != \"cpu\":\r\n- self.model = self.model.to(self.device)\r\n-\r\n # Update config with task specific parameters\r\n task_specific_params = self.model.config.task_specific_params\r\n if task_specific_params is not None and task in task_specific_params:\r\n```\r\n\r\nHere we just modify the default device **when** the model uses `accelerate` 's `device_map`.\r\nWe still depend on something magic, but it only modifies the `default` device, and doesn't modify `model` unless `device` was specified by user (which is correct in terms of intent IMO)",
"I think that would totally work @Narsil ! Happy to change the PR with your proposed changes, let me know!",
"Sure. Let's update the doc too.",
"Side notes:\r\n- you should probably do something if the user passes a device and the model has a `hf_device_map` (at least a warning) as the line `model.to(device)` will probably screw things up (it will at least error if there are some weights offloaded)\r\n- the device on which the model is executed is determined by this rule in Accelerate, maybe you should use the same code? I can also store it on the Accelerate side in a special attribute but then you'd have to wait for a release.\r\n```py\r\n if set(device_map.values()) == {\"cpu\"} or set(device_map.values()) == {\"cpu\", \"disk\"}:\r\n main_device = \"cpu\"\r\n else:\r\n main_device = [d for d in device_map.values() if d not in [\"cpu\", \"disk\"]][0]\r\n```",
"Thanks a lot for the valuable feedback @Narsil @sgugger ! \r\nI updated the PR and added more clarification (and also a new section) on the docs",
"the failing test is indenpendent to our PR! Merging!\r\nThanks for all your comments!"
] | 1,675
| 1,677
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently on the `main` branch of `transformers` if a user wants to run a `pipeline` using large models (thus, ideally loaded with `device_map=...`) and in half precision (or in int8), they may encounter some issues when calling `pipeline` with `top_p` & `top_k` sampling:
```bash
RuntimeError: "topk_cpu" not implemented for 'Half'
```
## Snippet to reproduce & explanations:
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=20, temperature=1, do_sample=True, top_p=0.95, top_k=60, num_return_sequences=3)
text = "What can you tell me about the LHC?"
response = pipe(text)
print(response[0]["generated_text"])
```
This is because the `input_ids` are automatically set on `cpu` since the argument `device` is not passed when initializing the `pipeline`. A model that is loaded with `device_map=...` (i.e. with `accelerate`) always sets the output tensor of the model to the `device` of the input tensor thanks to the forward hooks. Therefore when calling the top_k method, the output tensor is in fp16 (because the model has been loaded in fp16) & on `cpu` hence the torch error above.
Currently a hack to fix this is to add `device=0` when initializing the `pipeline`, but this leads to inconsistent and undesirable behaviour in some cases, for example when loading a large model across several GPUs, since the call `model.to(self.device)` will break some internals (the hooks will still be there but the weights will be set on the wrong devices). A snippet to reproduce below:
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="balanced", torch_dtype=torch.float16)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=20, temperature=1, do_sample=True, top_p=0.95, top_k=60, num_return_sequences=3, device=0)
text = "What can you tell me about the LHC?"
response = pipe(text)
print(response[0]["generated_text"])
```
adding this hack also breaks the usage of `pipeline` with int8 models, since the `to` method is blocked for these models:
```bash
ValueError: `.to` is not supported for `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
Thus, I propose to fix this by simply checking whether a model has been loaded with `accelerate` (by looking at the attribute `hf_device_map`) and setting the model on the requested device only if it has not been loaded with `accelerate` as the backend. This fixes 3 bugs: using `pipeline` with an fp16 model loaded with `accelerate` across multiple GPUs without any error, using `pipeline` with an fp16 model loaded with `accelerate` together with sampling strategies, and using `pipeline` with int8 models together with sampling strategies.
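Below is a minimal sketch of that check (illustrative only; `resolve_pipeline_device` is a made-up helper name, and the merged code follows the review discussion):
```python
import torch


def resolve_pipeline_device(model, requested_device):
    """Sketch: pick the device for pipeline tensors without breaking `accelerate`."""
    hf_device_map = getattr(model, "hf_device_map", None)
    if hf_device_map is None:
        # plain model: safe to move it to the device the user asked for
        model.to(requested_device)
        return torch.device(requested_device)
    # model dispatched by `accelerate`: never call `.to()` (it breaks the hooks,
    # and is blocked for int8 models); place inputs on the map's first device
    return torch.device(next(iter(hf_device_map.values())))
```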
cc @sgugger @Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21479/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21479/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21479",
"html_url": "https://github.com/huggingface/transformers/pull/21479",
"diff_url": "https://github.com/huggingface/transformers/pull/21479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21479.patch",
"merged_at": 1676021177000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21478
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21478/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21478/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21478/events
|
https://github.com/huggingface/transformers/pull/21478
| 1,573,161,886
|
PR_kwDOCUB6oc5JXt6N
| 21,478
|
Fix epoch number when resuming training
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
As @stas00 pointed out in #21390, the epoch number reported after skipping some batches in a training resumed was wrong. This PR fixes it and adds a test.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21478/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21478/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21478",
"html_url": "https://github.com/huggingface/transformers/pull/21478",
"diff_url": "https://github.com/huggingface/transformers/pull/21478.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21478.patch",
"merged_at": 1675730074000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21477
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21477/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21477/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21477/events
|
https://github.com/huggingface/transformers/pull/21477
| 1,572,984,556
|
PR_kwDOCUB6oc5JXH2I
| 21,477
|
OPT: BLIP2-ready `prepare_inputs_for_generation`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @NielsRogge ",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
This PR makes 2 changes to OPT's `prepare_inputs_for_generation`:
1. Adds the possibility of passing `inputs_embeds`
2. Removes the case when `attention_mask is None`: a) it is redundant, since the base case of inferring an all-ones attention mask is also handled in the model itself [here](https://github.com/huggingface/transformers/blob/baf4bacb1f10ecb63f0efc98d07463ae8799c7e3/src/transformers/models/opt/modeling_opt.py#L636); b) the shape isn't right when `inputs_embeds` is passed
✅ Slow OPT tests were run locally.
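For reference, the updated hook boils down to something like this (paraphrased from the two points above; see the diff for the exact code):
```python
def prepare_inputs_for_generation(
    self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
):
    if past_key_values:
        # with a cache, only the last token has to be forwarded
        input_ids = input_ids[:, -1:]

    # `inputs_embeds` are only usable on the first step, before any cache exists
    if inputs_embeds is not None and past_key_values is None:
        model_inputs = {"inputs_embeds": inputs_embeds}
    else:
        model_inputs = {"input_ids": input_ids}

    # note: no `attention_mask is None` branch anymore; the model itself already
    # falls back to an all-ones mask when none is given
    model_inputs.update(
        {
            "past_key_values": past_key_values,
            "use_cache": kwargs.get("use_cache"),
            "attention_mask": attention_mask,
        }
    )
    return model_inputs
```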
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21477/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21477/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21477",
"html_url": "https://github.com/huggingface/transformers/pull/21477",
"diff_url": "https://github.com/huggingface/transformers/pull/21477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21477.patch",
"merged_at": 1675707558000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21476
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21476/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21476/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21476/events
|
https://github.com/huggingface/transformers/pull/21476
| 1,572,931,424
|
PR_kwDOCUB6oc5JW8Sq
| 21,476
|
[`BLIP`] update blip path on slow tests
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can you add it to hf-internal-testing instead? Might be better there. Happy to merge any PR or approve your demand to join ;-)",
"Thanks for adding! I've transferred it :) ",
"@sgugger , I think that we can merge this! 🙏 "
] | 1,675
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR updates the path to the image we use for running the BLIP slow tests; as pointed out by @NielsRogge, it is better to upload these images to the Hub in case they get removed from their original location.
Happy also to move the Hub repo to `hf-internal-testing`, but I am not a member of the org.
Also, the image is quite large, so I prefer to upload it to the Hub rather than pushing it to the repo here.
cc @NielsRogge @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21476/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21476",
"html_url": "https://github.com/huggingface/transformers/pull/21476",
"diff_url": "https://github.com/huggingface/transformers/pull/21476.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21476.patch",
"merged_at": 1676658397000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21475
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21475/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21475/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21475/events
|
https://github.com/huggingface/transformers/pull/21475
| 1,572,900,045
|
PR_kwDOCUB6oc5JW1mo
| 21,475
|
Generate: TF can now generate from embeddings in encoder-decoder models
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(merging -- the failing test, `test_from_pretrained_dynamic_model_distant` is a known flaky)"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
TF generation test addition PR 2 (out of ???).
In an effort to make generation integration tests framework-agnostic, I'll be adding low-hanging fruit to TF. This PR adds the ability to generate from input embeddings with encoder-decoder models (or, more specifically, from non-`input_ids` inputs). The code added is almost a copy-paste from PT.
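For example, something along these lines should now work (illustrative checkpoint and generation kwargs):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: Hello!", return_tensors="tf").input_ids
# build the encoder inputs ourselves and call generate without `input_ids`
inputs_embeds = model.get_input_embeddings()(input_ids)

generated = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```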
Since this PR made a few changes in the main code path for generate, the following slow tests were run to ensure XLA compatibility:
- [x] GPT2
- [x] T5
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21475/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21475",
"html_url": "https://github.com/huggingface/transformers/pull/21475",
"diff_url": "https://github.com/huggingface/transformers/pull/21475.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21475.patch",
"merged_at": 1675768704000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21474
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21474/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21474/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21474/events
|
https://github.com/huggingface/transformers/pull/21474
| 1,572,720,342
|
PR_kwDOCUB6oc5JWPB7
| 21,474
|
Generate: TF `.generate()` can now be exported with dynamic length
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
As requested by @mfuntowicz, this PR makes TF `.generate()` exportable with a dynamic input length 🔥
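A rough sketch of the export pattern this unlocks (model name, shapes, and save path are placeholders; the exact serving wrapper may differ):
```python
import tensorflow as tf
from transformers import TFAutoModelForCausalLM

model = TFAutoModelForCausalLM.from_pretrained("gpt2")


# `None` dimensions keep both the batch size and the input length dynamic
@tf.function(
    input_signature=[
        tf.TensorSpec((None, None), tf.int32, name="input_ids"),
        tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
    ]
)
def serving(input_ids, attention_mask):
    return model.generate(input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=16)


tf.saved_model.save(model, "gpt2-generate", signatures={"serving_default": serving})
```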
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21474/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21474/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21474",
"html_url": "https://github.com/huggingface/transformers/pull/21474",
"diff_url": "https://github.com/huggingface/transformers/pull/21474.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21474.patch",
"merged_at": 1675947150000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21473
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21473/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21473/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21473/events
|
https://github.com/huggingface/transformers/pull/21473
| 1,572,567,002
|
PR_kwDOCUB6oc5JVtxr
| 21,473
|
Removing `more_itertools` dependency.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I don't think the failure is linked in any way to this PR, is it ?"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Removes `more_itertools` optional dependency.
Fixes #20508
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21473/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21473",
"html_url": "https://github.com/huggingface/transformers/pull/21473",
"diff_url": "https://github.com/huggingface/transformers/pull/21473.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21473.patch",
"merged_at": 1675701201000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21472
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21472/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21472/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21472/events
|
https://github.com/huggingface/transformers/pull/21472
| 1,572,544,463
|
PR_kwDOCUB6oc5JVo2c
| 21,472
|
Resnet flax
|
{
"login": "Shubhamai",
"id": 51819922,
"node_id": "MDQ6VXNlcjUxODE5OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/51819922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhamai",
"html_url": "https://github.com/Shubhamai",
"followers_url": "https://api.github.com/users/Shubhamai/followers",
"following_url": "https://api.github.com/users/Shubhamai/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhamai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhamai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhamai/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhamai/orgs",
"repos_url": "https://api.github.com/users/Shubhamai/repos",
"events_url": "https://api.github.com/users/Shubhamai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhamai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Very cool! Sorry to only reply here now - looks like you've made a really solid start to this PR! Let's get the Batch Norm PR merged ASAP and then go full send on Flax ResNet! 🚀\r\n\r\nFeel free to ping me with any questions / queries! More than happy to help with the integration!",
"@sanchit-gandhi the PR is also now ready for your review, thanks a lot for your time. ",
"Thanks for the review, I will make the changes soon. Many of the reviews here also apply to the [Flax Convnext PR](https://github.com/huggingface/transformers/pull/21485) so will make corresponding changes there too. ",
"Hey @Shubhamai! Really nice work on this PR - just a few small changes to go now! Feel free to ping me once you're happy with the last bits and I'll get you a final review. Taking a look at Flax RegNet in the meantime!",
"Made the request changes and apologies for the late reply, had a busy schedule lately.",
"Thanks for the review @amyeroberts 🙌 Feel free to propagate the changes forward to [Flax RegNet](https://github.com/huggingface/transformers/pull/21867) @Shubhamai and we can get a final review there!",
"We can merge when the CI is green! 🟢",
"Thank you so much for the review & merge @sanchit-gandhi @amyeroberts , really appreciate your work & time on this ❤️ "
] | 1,675
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
# Flax Implementation of `microsoft/resnet-50`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi I guess :sweat_smile:
## Status
Last Updated - Sunday, 12 February 2023
### TODO
- [x] Blocked on merge of [this PR](https://github.com/huggingface/transformers/pull/21581) to add support for BatchNorm layers.
- [ ] Uploading [Shubhamai/resnet-50](https://huggingface.co/Shubhamai/resnet-50) flax weights to [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21472/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21472",
"html_url": "https://github.com/huggingface/transformers/pull/21472",
"diff_url": "https://github.com/huggingface/transformers/pull/21472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21472.patch",
"merged_at": 1679687158000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21471
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21471/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21471/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21471/events
|
https://github.com/huggingface/transformers/issues/21471
| 1,572,479,213
|
I_kwDOCUB6oc5duiDt
| 21,471
|
Add TF GPTNeoX
|
{
"login": "JIPHF",
"id": 12882600,
"node_id": "MDQ6VXNlcjEyODgyNjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/12882600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JIPHF",
"html_url": "https://github.com/JIPHF",
"followers_url": "https://api.github.com/users/JIPHF/followers",
"following_url": "https://api.github.com/users/JIPHF/following{/other_user}",
"gists_url": "https://api.github.com/users/JIPHF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JIPHF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JIPHF/subscriptions",
"organizations_url": "https://api.github.com/users/JIPHF/orgs",
"repos_url": "https://api.github.com/users/JIPHF/repos",
"events_url": "https://api.github.com/users/JIPHF/events{/privacy}",
"received_events_url": "https://api.github.com/users/JIPHF/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"(@Rocketknight1 FYI)",
"@JIPHF If you're happy to make a PR for this, then please do! Let us know if you need any help with that.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
### Feature request
Add the GPTNeoX model in TensorFlow.
### Motivation
Having GPTNeoX in TensorFlow would benefit the community.
### Your contribution
@gante is it possible to assign this to me?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21471/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21471/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21470
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21470/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21470/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21470/events
|
https://github.com/huggingface/transformers/pull/21470
| 1,572,337,077
|
PR_kwDOCUB6oc5JU71R
| 21,470
|
make SpeechT5 doc examples deterministic
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @hollance . Before I could merge, could you change the docstrings that contain(s)\r\n\r\n```\r\ndataset = load_dataset(...) \r\n```\r\nto \r\n```\r\ndataset = load_dataset(...) # doctest: +IGNORE_RESULT\r\n```\r\n\r\n🙏 Thank you.",
"@ydshieh If I do this, `make fixup` will wrap this code like so:\r\n\r\n```python\r\n >>> dataset = load_dataset(\r\n ... \"hf-internal-testing/librispeech_asr_demo\", \"clean\", split=\"validation\"\r\n ... ) # doctest: +IGNORE_RESULT\r\n```\r\n\r\nIs that OK?",
"> @ydshieh If I do this, `make fixup` will wrap this code like so:\r\n> \r\n> ```python\r\n> >>> dataset = load_dataset(\r\n> ... \"hf-internal-testing/librispeech_asr_demo\", \"clean\", split=\"validation\"\r\n> ... ) # doctest: +IGNORE_RESULT\r\n> ```\r\n> \r\n> Is that OK?\r\n\r\ntotally OK :-)"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes an issue with the doc examples for SpeechT5. Due to the dropout layer being used in inference mode, the predicted sequence length is not always the same, which causes the doc tests to fail. Setting the seed fixes this.
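Concretely, the examples now pin the RNG before generation, along these lines (sketch; the checkpoint is the TTS one from the docs, and the zero speaker embedding is a placeholder for the real x-vectors used there):
```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5Processor, set_seed

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")

inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; the docs use real x-vectors

set_seed(555)  # fixes the dropout draws, so the predicted length is reproducible
spectrogram = model.generate_speech(inputs["input_ids"], speaker_embeddings)
```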
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21470/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21470",
"html_url": "https://github.com/huggingface/transformers/pull/21470",
"diff_url": "https://github.com/huggingface/transformers/pull/21470.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21470.patch",
"merged_at": 1675694636000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21469
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21469/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21469/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21469/events
|
https://github.com/huggingface/transformers/issues/21469
| 1,572,307,142
|
I_kwDOCUB6oc5dt4DG
| 21,469
|
ForcedBOSTokenLogitsProcessor takes input_ids.shape[-1] as the number of generated tokens
|
{
"login": "Dounm",
"id": 9065640,
"node_id": "MDQ6VXNlcjkwNjU2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9065640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dounm",
"html_url": "https://github.com/Dounm",
"followers_url": "https://api.github.com/users/Dounm/followers",
"following_url": "https://api.github.com/users/Dounm/following{/other_user}",
"gists_url": "https://api.github.com/users/Dounm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dounm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dounm/subscriptions",
"organizations_url": "https://api.github.com/users/Dounm/orgs",
"repos_url": "https://api.github.com/users/Dounm/repos",
"events_url": "https://api.github.com/users/Dounm/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dounm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @Dounm 👋 \r\n\r\n`ForcedBOSTokenLogitsProcessor` is normally used with encoder-decoder models. For those models, when the generation loop begins, it only contains one token per batch/beam by default -- `model.generation_config.decoder_start_token_id ` or `model.generation_config.bos_token_ids`. As such, `ForcedBOSTokenLogitsProcessor` forces an additional token at the beginning of the sequence in those cases.\r\n\r\nI hope this makes it clearer 🤗 ",
"Much thanks for your rely!"
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### System Info
`ForcedBOSTokenLogitsProcessor` enforces the specified token as the first generated token.
In the code below, it takes `input_ids.shape[-1]` as the length of the generated tokens, but as far as I know, `input_ids.shape[-1]` equals `prompt_length + generated_length`.
https://github.com/huggingface/transformers/blob/0db5d911fc94604f9568b4b212e005ec4600d157/src/transformers/generation/logits_process.py#L769
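For reference, the processor logic at that line boils down to (paraphrased from the permalink above):
```python
from transformers import LogitsProcessor


class ForcedBOSTokenLogitsProcessor(LogitsProcessor):
    def __init__(self, bos_token_id: int):
        self.bos_token_id = bos_token_id

    def __call__(self, input_ids, scores):
        cur_len = input_ids.shape[-1]  # length of the (decoder) input so far
        if cur_len == 1:
            # mask every token except `bos_token_id` at the first position
            num_tokens = scores.shape[1]
            scores[:, [i for i in range(num_tokens) if i != self.bos_token_id]] = -float("inf")
            scores[:, self.bos_token_id] = 0
        return scores
```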
So, is this a bug, or is there something I'm missing?
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
None
### Expected behavior
None
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21469/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21468
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21468/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21468/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21468/events
|
https://github.com/huggingface/transformers/issues/21468
| 1,572,286,302
|
I_kwDOCUB6oc5dty9e
| 21,468
|
Error when fine-tuning XLM-RoBERTa base on TF/Keras
|
{
"login": "scottlin19",
"id": 37428823,
"node_id": "MDQ6VXNlcjM3NDI4ODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/37428823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scottlin19",
"html_url": "https://github.com/scottlin19",
"followers_url": "https://api.github.com/users/scottlin19/followers",
"following_url": "https://api.github.com/users/scottlin19/following{/other_user}",
"gists_url": "https://api.github.com/users/scottlin19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scottlin19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scottlin19/subscriptions",
"organizations_url": "https://api.github.com/users/scottlin19/orgs",
"repos_url": "https://api.github.com/users/scottlin19/repos",
"events_url": "https://api.github.com/users/scottlin19/events{/privacy}",
"received_events_url": "https://api.github.com/users/scottlin19/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,675
| 1,675
| 1,675
|
NONE
| null |
Hello,
I am trying to fine-tune XLM-RoBERTa for text classification with TensorFlow/Keras, using the `TFXLMRobertaForSequenceClassification` class for training.
I am using a Google Colab GPU for fine-tuning. The TensorFlow version is 2.9.2.
en_y_pred = model.predict(en_x_test_in, batch_size=128, verbose=1)
**InvalidArgumentError: indices[2,268] = 124030 is not in [0, 50265) [[node tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather (defined at /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_tf_roberta.py:149) ]] [Op:__inference_train_function_82886]
Errors may have originated from an input operation. Input Source operations connected to node tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather: In[0] tf_roberta_for_sequence_classification_1/roberta/embeddings/Gather/resource:
In[1] IteratorGetNext (defined at /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:866)**
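For what it's worth, 50265 in the error is roberta-base's vocabulary size, while XLM-R's vocabulary has 250002 entries, so the traceback suggests token ids from an XLM-R tokenizer being looked up in a RoBERTa-sized embedding table. A quick consistency check (a sketch; the model/tokenizer names are illustrative):
```python
from transformers import AutoTokenizer, TFXLMRobertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = TFXLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base")

# Every token id produced by the tokenizer must fit in the embedding table.
assert len(tokenizer) <= model.config.vocab_size
```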
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21468/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21467
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21467/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21467/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21467/events
|
https://github.com/huggingface/transformers/issues/21467
| 1,572,135,458
|
I_kwDOCUB6oc5dtOIi
| 21,467
|
Whisper: Decode with condition_on_previous_text=False
|
{
"login": "m-bain",
"id": 36994049,
"node_id": "MDQ6VXNlcjM2OTk0MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/36994049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m-bain",
"html_url": "https://github.com/m-bain",
"followers_url": "https://api.github.com/users/m-bain/followers",
"following_url": "https://api.github.com/users/m-bain/following{/other_user}",
"gists_url": "https://api.github.com/users/m-bain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m-bain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m-bain/subscriptions",
"organizations_url": "https://api.github.com/users/m-bain/orgs",
"repos_url": "https://api.github.com/users/m-bain/repos",
"events_url": "https://api.github.com/users/m-bain/events{/privacy}",
"received_events_url": "https://api.github.com/users/m-bain/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi we could add this as a generation config argument, and in the `prepare_inputs_for_generation` can just remove all the input_ids if asked. WDYT? \r\nMy question is more about the usage/quality tradeoff but doesn't seem like something hard to maintain. ",
"Quality is much better without conditioning on previous text https://github.com/openai/whisper/discussions/679#discussioncomment-4449150\r\nSimilarly whisperx requires this because theres just too much hallucination otherwise\r\n\r\n>just remove all the input_ids if asked\r\n\r\nYes trying this, fairly straightforward, but not when batch_size > 1\r\n\r\nSince each sample in the batch resets at different indexes (when there is a pair of consecutive timestamps). A lot of the methods are nested quite deep so it's taking me a while to sift through, but seems like the best approach, given this variable length prompt per batch, would be to supply an attention mask to the decoder ?\r\n\r\nOr just pad according to the variable length\r\n\r\n\r\n",
"I think the attention mask is the best way to get what you want indeed. Padding can also work, as it should create the attention mask of the padding and pass it to the network. \r\nI think it makes sense to add this, we just have to keep it to Whisper, so either in the modeling file, or a new logit processor 😉 I won't have time to do this until at least a week, do you want to open a PR and ping me for pointers and reviews? 🤗 ",
"Hacked attempt here, seems to work on my end -- can now run very fast whisper without hallucination :')\r\nhttps://github.com/huggingface/transformers/pull/21491/commits/cf2ad49fae43e8355655c5392d4dca0bdd1a733e",
"Super cool feature! Thanks for the PR @m-bain! Reviewed directly there!",
"Hi there, I was looking into this issue in some detail and I'm not sure this is relevant for the 🤗 Transformers implementation of Whisper, since it never actually conditions on the previous text.\r\n\r\nIt's true that the OpenAI implementation does this, but the Transformers ASR `pipeline` treats all 30-second chunks of audio independently. It never passes in the previous context when doing the predictions.\r\n\r\nThe chunks do overlap partially, so they do have some duplicate tokens, but this overlap is pretty small — not nearly as large as the context provided by the OpenAI implementation. And even if `condition_on_previous_text = False` in the OpenAI code, they still add the output from the previous chunk to the context, which is actually longer than the small bit of overlap used by our `pipeline`.\r\n\r\nIn any case, I will look a bit closer at your PR in the coming days to see exactly what it does. Perhaps it is still an improvement that we could use. 😃 ",
"So this would mean we would support `conditioning on previous text` by adding the sequential processing on the PR 😄 ",
"Great point @hollance, shall we keep this open if we have sequential processing on the roadmap?",
"Leaving this open as it could be relevant for https://github.com/huggingface/transformers/issues/23231#issuecomment-1545684559",
"Doing this only makes sense if we decide to support a sequential pipeline, and I think we weren't really in favor of this? \r\n\r\nRight now, there is no conditioning on previous text going on (except for explicit prompting, once that PR is merged, which you have to enable manually by passing in prompt tokens).\r\n",
"I don't think so, no. Running the sequential pipeline is just the same as the original repo, so I struggle to see what the appeal to the user is here vs our batched method (feels like a step backwards).\r\n\r\nLet's close this one then?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@hollance if all of the chunks are processed independently as you say, why it happens quite often that the model starts to repeat itself after the short segments were processed? E.g actual audio is \"and who\"<silence>\"okay\" but the model will output \"and who and who okay\" even though there were 2 separate segments?",
"@vsokolovskii Do you have a code snippet and audio file that can reproduce this problem?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Leaving this one closed unless we identify a use case that suggests a need for the sequential pipeline. Note that adding a stride between chunks should alleviate any mis-match between them (you can read more about this [here](https://huggingface.co/blog/asr-chunking)).",
"I stumbled upon this thread after benchmarking insanely-fast-whisper, seamless-m4t-v2, faster-whisper and the hugginface implementation of whisper based on transformers pipeline with bettertransformers. I found a bug related to the this thread or rather parameter.\r\n\r\nI was able to reproduce the papers with the Fleurs Dataset, however I went a stop further and wanted to benchmark longer textes. So I concatenated the Fleurs files and Transcriptions accordingly, deleted any duplicate sentences and tested it in 30sec, 1min, 5min and 30min chunks on the models. I used english language.\r\n\r\nLong Story short: In terms of scoring faster whisper was the only model which was able to reproduce the scores from the original Fleurs Dataset, every other model degraded. \r\n\r\nInsanely Fast Whisper and HF Whisper had a WER of 0.14 (0.7 on original Fleurs), seamles-v2 completely broke down to a WER of 0.5. Why? \r\n\r\nBecause faster whisper was the only model, which supported `condition_on_previous_text=False` as a parameter. I think it would be very beneficial to give users the option to use this, especially for the new seamles m4t-v2 model, because even with 30-60s chunks the performance is already at 0.5 for english language.\r\n\r\nI know that a reason for this is that these sentences are concatenated without sharing any semantic background, but models shouldn't be falling apart that severly, because of some implementation decision."
] | 1,675
| 1,704
| 1,689
|
CONTRIBUTOR
| null |
### Feature request
Whisper speech recognition without conditioning on previous text.
As in https://github.com/openai/whisper/blob/7858aa9c08d98f75575035ecd6481f462d66ca27/whisper/transcribe.py#L278
### Motivation
The Whisper implementation is great; however, conditioning the decoding on previous text can cause significant hallucination and repetitive text, e.g.:
>"Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice? Do you have malpractice?"
Running openai's model with `--condition_on_previous_text False` drastically reduces hallucination
@ArthurZucker
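For context, a minimal sketch of the 🤗 chunked ASR pipeline, which already decodes each window independently (the model id and file name are illustrative):
```python
from transformers import pipeline

# Each 30s window is transcribed independently, so there is no
# conditioning on text from earlier windows.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v2",
    chunk_length_s=30,
)
print(asr("audio.wav")["text"])
```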
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21467/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21466
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21466/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21466/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21466/events
|
https://github.com/huggingface/transformers/issues/21466
| 1,572,074,030
|
I_kwDOCUB6oc5ds_Iu
| 21,466
|
Datasets performance :(
|
{
"login": "webshared",
"id": 6676463,
"node_id": "MDQ6VXNlcjY2NzY0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6676463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/webshared",
"html_url": "https://github.com/webshared",
"followers_url": "https://api.github.com/users/webshared/followers",
"following_url": "https://api.github.com/users/webshared/following{/other_user}",
"gists_url": "https://api.github.com/users/webshared/gists{/gist_id}",
"starred_url": "https://api.github.com/users/webshared/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/webshared/subscriptions",
"organizations_url": "https://api.github.com/users/webshared/orgs",
"repos_url": "https://api.github.com/users/webshared/repos",
"events_url": "https://api.github.com/users/webshared/events{/privacy}",
"received_events_url": "https://api.github.com/users/webshared/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi there. Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only. Also make sure to include the code you are running or no one will be able to help.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
I am using https://huggingface.co/docs/transformers/model_doc/time_series_transformer as an example to setup time series transformers for my use case.
I have modified the training script to just traverse all batches and dump a summary - essentially the max and min values of the first and last time-feature elements for each of 366 static categories.
40 epochs, 100 batches per epoch, 256 train sequences per batch
= 1_024_000 training sequences in total
**Elapsed time: 452.257402 seconds**
Iterating over 1M elements with really simple logic took over 7 minutes on an M2 MacBook :(
I am new to Python - is this kind of expected?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21466/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21465
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21465/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21465/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21465/events
|
https://github.com/huggingface/transformers/issues/21465
| 1,571,755,774
|
I_kwDOCUB6oc5drxb-
| 21,465
|
Obtaining text embeddings from CLIP
|
{
"login": "preethiseshadri518",
"id": 60128552,
"node_id": "MDQ6VXNlcjYwMTI4NTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/60128552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/preethiseshadri518",
"html_url": "https://github.com/preethiseshadri518",
"followers_url": "https://api.github.com/users/preethiseshadri518/followers",
"following_url": "https://api.github.com/users/preethiseshadri518/following{/other_user}",
"gists_url": "https://api.github.com/users/preethiseshadri518/gists{/gist_id}",
"starred_url": "https://api.github.com/users/preethiseshadri518/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/preethiseshadri518/subscriptions",
"organizations_url": "https://api.github.com/users/preethiseshadri518/orgs",
"repos_url": "https://api.github.com/users/preethiseshadri518/repos",
"events_url": "https://api.github.com/users/preethiseshadri518/events{/privacy}",
"received_events_url": "https://api.github.com/users/preethiseshadri518/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nYou can use both as text embeddings.\r\n\r\nThe former (`text_embeds`) are embeddings which are in the same embedding space as the image embeddings (so it allows you to compare images and text - which is what people mainly use CLIP for). However if you just want text embeddings, and don't care about image embeddings, then you can use the `pooler_output`.\r\n\r\nBtw, if you only need text embeddings (and no image embeddings), it's more memory efficient to only load the text encoder of CLIP. You can choose between `CLIPTextModel` (which is the text encoder) and `CLIPTextModelWithProjection` (which is the text encoder + projection layer, which projects the text embeddings into the same embedding space as the image embeddings):\r\n\r\n```\r\nfrom transformers import AutoTokenizer, CLIPTextModelWithProjection\r\n\r\nmodel = CLIPTextModelWithProjection.from_pretrained(\"openai/clip-vit-base-patch32\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"openai/clip-vit-base-patch32\")\r\n\r\ninputs = tokenizer([\"a photo of a cat\", \"a photo of a dog\"], padding=True, return_tensors=\"pt\")\r\n\r\noutputs = model(**inputs)\r\ntext_embeds = outputs.text_embeds\r\n```",
"Also, please ask such questions on our [forum](https://discuss.huggingface.co/) - we'd like to keep Github issues for bugs/feature requests.\r\n\r\nThanks!",
"Will do, apologies! Wasn't sure which was the appropriate place. "
] | 1,675
| 1,703
| 1,675
|
NONE
| null |
I am trying to obtain text embeddings from CLIP as shown below. However, I am confused about the difference between `text_embeds` and `pooler_output`, since they output different things. According to the documentation, `text_embeds` is "the text embeddings obtained by applying the projection layer to the pooler_output", but I am not sure what this means. Are both acceptable to use as text embeddings (if I want to compare text similarity), or is one more correct than the other?
```
from transformers import CLIPProcessor, CLIPModel
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
text_embeds = outputs['text_embeds']
pooler_output = outputs['text_model_output']['pooler_output']
```
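As a follow-up sketch, comparing the two prompts in the projected space (the space CLIP's contrastive loss is trained in); this continues from the `outputs` variable above:
```python
import torch.nn.functional as F

text_embeds = outputs["text_embeds"]
# Cosine similarity between the two prompt embeddings.
similarity = F.cosine_similarity(text_embeds[0], text_embeds[1], dim=-1)
print(similarity.item())
```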
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21465/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21465/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21464
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21464/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21464/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21464/events
|
https://github.com/huggingface/transformers/issues/21464
| 1,571,744,414
|
I_kwDOCUB6oc5druqe
| 21,464
|
How can I fine-tune other languages in trocr? CER over 1
|
{
"login": "saeu5407",
"id": 55076217,
"node_id": "MDQ6VXNlcjU1MDc2MjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/55076217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saeu5407",
"html_url": "https://github.com/saeu5407",
"followers_url": "https://api.github.com/users/saeu5407/followers",
"following_url": "https://api.github.com/users/saeu5407/following{/other_user}",
"gists_url": "https://api.github.com/users/saeu5407/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saeu5407/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saeu5407/subscriptions",
"organizations_url": "https://api.github.com/users/saeu5407/orgs",
"repos_url": "https://api.github.com/users/saeu5407/repos",
"events_url": "https://api.github.com/users/saeu5407/events{/privacy}",
"received_events_url": "https://api.github.com/users/saeu5407/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nRefer to this thread: https://github.com/huggingface/transformers/issues/18163. Also, please ask such questions on our [forum](https://discuss.huggingface.co/), as we'd like to keep Github issues for bugs/feature requests.\r\n\r\nThanks!"
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### System Info
I want to fine-tune TrOCR on another language, but the CER exceeds 1.
The ground truth for each dataset sample is a single word, but the predicted text seems far too long.
What is the problem? Should I implement the new language with the code below?
I have trained several times and found that the model generates long sentences for short targets,
so the CER goes over 1.
The notebook linked below only loads the weights for the other language, without fine-tuning.
The predicted text is quite long compared to the ground truth, and I wonder if there is a problem with the EOS token.
https://colab.research.google.com/drive/1ovc-9aXsYKDXAfrO0FUtoiLVkXEdrz6s?usp=sharing
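If it helps, here is a minimal sketch (assuming a VisionEncoderDecoder-style setup, as in the usual TrOCR fine-tuning notebooks) of the generation config fields that commonly cause runaway outputs when left unset:
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# If these don't match the tokenizer, generate() may never emit EOS and will
# run to max_length, which inflates CER past 1 for single-word targets.
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.eos_token_id = processor.tokenizer.sep_token_id
model.config.max_length = 32
```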
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1ovc-9aXsYKDXAfrO0FUtoiLVkXEdrz6s?usp=sharing
### Expected behavior
CER over 1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21464/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21463
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21463/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21463/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21463/events
|
https://github.com/huggingface/transformers/pull/21463
| 1,571,720,648
|
PR_kwDOCUB6oc5JS3rt
| 21,463
|
[examples] improve block_size warning message
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
there is an odd warning inside the examples w.r.t. the `model_max_length` value, e.g. in `run_clm.py`:
```
01/28/2023 16:03:50 - WARNING - __main__ - The tokenizer picked seems to have a very large `model_max_length`
(1000000000000000019884624838656). Picking 1024 instead. You can change that default value by passing --block_size xxx.
```
As models can now work with much longer sequence lengths (BLOOM, OPT, and others), should the script really truncate to 1024?
But what actually stood out for me is `1000000000000000019884624838656` - when I see such a huge number it usually means a bug, so it is worrying and suggests that either I am doing something wrong or there is a bug somewhere.
so this PR is proposing to reword the message to be as informative, but not as scary:
```
01/28/2023 16:03:50 - WARNING - __main__ - The chosen tokenizer supports a `model_max_length` that is longer than the
default `block_size` value of 1024. If you would like to use a longer `block_size` up to `tokenizer.model_max_length` you
can override this default with `--block_size xxx`.
```
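As an aside, the scary number above is not itself a bug: tokenizers without a real length limit use the sentinel `int(1e30)` for `model_max_length`, and that float rounds to exactly the value printed:
```python
# int(1e30) is the nearest double to 10**30, hence the odd-looking digits.
print(int(1e30))  # 1000000000000000019884624838656
```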
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21463/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21463",
"html_url": "https://github.com/huggingface/transformers/pull/21463",
"diff_url": "https://github.com/huggingface/transformers/pull/21463.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21463.patch",
"merged_at": 1675701373000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21462
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21462/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21462/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21462/events
|
https://github.com/huggingface/transformers/issues/21462
| 1,571,597,583
|
I_kwDOCUB6oc5drK0P
| 21,462
|
HubertModel output wrong `last_hidden_state` shape.
|
{
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @celsofranssa! Really great question! In the HuBERT model, we take an input sequence of raw audio waveforms, _downsample_ them using a series of 1d-convolutional networks, and pass the downsampled hidden-states to a Transformer network.\r\n\r\nIn this case, `sequence_length` is referring to the downsampled sequence length (i.e. the sequence length _after_ we apply the 1-d convolutional networks). This is equal to the final sequence length of the HuBERT model (since there's no further downsampling by the Transformer network).\r\n\r\nWe can verify this with the `_get_feat_extract_output_length` method, which computes the downsampled sequence length of the HuBERT model:\r\nhttps://github.com/huggingface/transformers/blob/21a2d900eceeded7be9edc445b56877b95eda4ca/src/transformers/models/hubert/modeling_hubert.py#L867\r\n\r\nUsing this method, we get:\r\n```python\r\nfrom transformers import AutoProcessor, HubertModel\r\nfrom datasets import load_dataset\r\nimport torch\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"facebook/hubert-large-ls960-ft\")\r\nmodel = HubertModel.from_pretrained(\"facebook/hubert-large-ls960-ft\")\r\n\r\ninput_values = torch.rand(16,4096) # sequence_length = 4096\r\nprint(\"Input shape: \", input_values.shape)\r\n\r\nwith torch.no_grad():\r\n last_hidden_state = model(input_values).last_hidden_state\r\nprint(\"Last hidden dim: \", last_hidden_state.shape)\r\n\r\nsequence_len = model._get_feat_extract_output_lengths(input_lengths=input_values.shape[-1])\r\nprint(\"seq len: \", sequence_len)\r\n\r\nprint(\"Shapes match? \", sequence_len == last_hidden_state.shape[1])\r\n```\r\n**Print Output:**\r\n```\r\nInput shape: torch.Size([16, 4096])\r\nLast hidden dim: torch.Size([16, 12, 1024])\r\nseq len: tensor(12)\r\nShapes match? tensor(True)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@sanchit-gandhi
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
As stated in [HuBERTModel docs](https://huggingface.co/docs/transformers/model_doc/hubert#transformers.HubertModel), `last_hidden_state` shape should be `(batch_size, sequence_length, hidden_size)`.
```python
import torch
from transformers import AutoProcessor, HubertModel
from datasets import load_dataset
import soundfile as sf
processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")
input_values = torch.rand(16,4096) # sequence_length = 4096
input_values.shape
last_hidden_state = model(input_values).last_hidden_state
last_hidden_state.shape
```
However, the shape of `last_hidden_state` was `torch.Size([16, 12, 1024])`.
### Expected behavior
The shape of `last_hidden_state` should be `torch.Size([16, 4096, 1024])`.
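For what it's worth, a small sketch of why 12 is actually the expected length: `sequence_length` in the output is the length *after* the conv feature extractor downsamples the waveform (the kernel/stride values below are the standard wav2vec2-style defaults, taken here as an assumption):
```python
# Standard wav2vec2/HuBERT feature-encoder defaults (assumed).
kernels = (10, 3, 3, 3, 3, 2, 2)
strides = (5, 2, 2, 2, 2, 2, 2)

def conv_output_length(length: int) -> int:
    # Apply the usual conv length formula for each layer in turn.
    for k, s in zip(kernels, strides):
        length = (length - k) // s + 1
    return length

print(conv_output_length(4096))  # 12 -> matches last_hidden_state.shape[1]
```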
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21462/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21461
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21461/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21461/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21461/events
|
https://github.com/huggingface/transformers/pull/21461
| 1,571,595,043
|
PR_kwDOCUB6oc5JSfCX
| 21,461
|
Fix multiple `eos_token_id`s in model.generate(...)
|
{
"login": "tokestermw",
"id": 4722119,
"node_id": "MDQ6VXNlcjQ3MjIxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4722119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tokestermw",
"html_url": "https://github.com/tokestermw",
"followers_url": "https://api.github.com/users/tokestermw/followers",
"following_url": "https://api.github.com/users/tokestermw/following{/other_user}",
"gists_url": "https://api.github.com/users/tokestermw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tokestermw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tokestermw/subscriptions",
"organizations_url": "https://api.github.com/users/tokestermw/orgs",
"repos_url": "https://api.github.com/users/tokestermw/repos",
"events_url": "https://api.github.com/users/tokestermw/events{/privacy}",
"received_events_url": "https://api.github.com/users/tokestermw/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @tokestermw 👋 \r\n\r\nThank you for spotting the issues and adding a fix! One request, for two reasons: a) thin function wrappers are very undesirable, as they add another abstraction layer b) tensor ops should ideally be done with `torch` operations, otherwise there will be CPU<>GPU data movement 👉 can you replace the implementation with something like the snippet below, which computes the same thing using torch operators?\r\n\r\n```py\r\nimport torch\r\neos_token_id = torch.tensor([797, 641])\r\nunfinished_sequences = torch.tensor([1, 1, 1])\r\nnext_tokens = torch.tensor([797, 641, 98])\r\nnext_in_eos = next_tokens.tile((eos_token_id.shape[0], 1)).ne(eos_token_id.unsqueeze(1)).prod(dim=0)\r\nunfinished_sequences = unfinished_sequences.mul(next_in_eos).long()\r\n```\r\n",
"I just found the same issue I think and this is the code snippet I wanted to use for reporting the bug. Probably redundant as of now but before throwing it away, maybe it helps another user finding the issue. No further comment/processing required from my point of view:\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, GenerationConfig\r\n\r\nMODEL = \"gpt2\"\r\nNUM_RETURN_SEQUENCES = 2\r\nMAX_NEW_TOKENS = 64\r\nCONFIG_DIR = \"./generation_test\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(MODEL)\r\nmodel.save_pretrained(CONFIG_DIR)\r\n\r\nconfig = GenerationConfig(\r\n num_return_sequences=NUM_RETURN_SEQUENCES,\r\n max_new_tokens=MAX_NEW_TOKENS,\r\n return_full_text=True,\r\n do_sample=True,\r\n bos_token_id=50256,\r\n pad_token_id=50256,\r\n eos_token_id=[50000,50256], # the 50000 is just an example to prove the issue\r\n)\r\nconfig.save_pretrained(CONFIG_DIR)\r\nmodel = AutoModelForCausalLM.from_pretrained(CONFIG_DIR)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL)\r\npipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)\r\ngenerated = pipe(\"As always this is a\")\r\n\r\nprint(generated[0][\"generated_text\"])\r\n```\r\n",
"Thanks @gante! will make the change in a bit\r\n\r\nAnother issue I just found with beam search + multiple eos_token_id is that, on occasion we get this error:\r\n\r\n```python\r\nValueError: At most 3 tokens in tensor([ 198, 198, 198, 0, 628, 14373], device='cuda:0') can be equal to\r\n`eos_token_id: [198, 628]`. Make sure tensor([ 198, 198, 198, 0, 628, 14373], device='cuda:0') are corrected.\r\n```\r\n\r\n<img width=\"864\" alt=\"Screenshot 2023-02-07 at 16 26 49\" src=\"https://user-images.githubusercontent.com/4722119/217397589-53beae41-fa84-4792-bea6-db6056d33972.png\">\r\n\r\nThis is because we generate 2 * num_beams,\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2766\r\n\r\nwhich can fail this check when we have more than one `eos_token_id`\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/generation/beam_search.py#L612\r\n\r\n(I can post a separate issue if that's better)",
"@tokestermw if that is not breaking the existing tests, yes, let's move it to a new issue.\r\n\r\nIn essence, we probably want to keep `1+len(eos_token_id)` beam candidates running, to ensure we have at least 1 non-`eos_token_id` candidate to proceed.",
"Mmm, looks like a lot of tests have started failing @gante and @tokestermw ",
"fixed\r\n\r\nthough there is a seemingly unrelated test error\r\n\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/57219/workflows/68817729-bfae-4e9a-8139-5e76e0e6ed5d/jobs/693592",
"Yes, this one has been fixed on main :-)",
"Hi @tokestermw Thank you for working on this. After this PR being merged to `main`, there are some CI regression. Could you take a look 🙏 . Also cc @gante \r\n\r\n## To reproduce:\r\n\r\n### We can check with specific commit on `main` branch\r\n```bash\r\ngit checkout 06d940ef # One commit before this PR on `main`\r\ngit checkout 9960506c # This PR - failed the following tests\r\n```\r\n\r\n### Then prepare the file format for doctests\r\n```python\r\npython utils/prepare_for_doc_test.py src docs\r\n```\r\n\r\n### This\r\n```python\r\npython3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules docs/source/en/model_doc/t5.mdx::t5.mdx -sv --doctest-continue-on-failure --doctest-glob=\"*.mdx\"\r\n```\r\ngives error\r\n```bash\r\nExpected:\r\n ['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.']\r\nGot:\r\n ['Das Haus ist wunderbar. Das Haus ist wunderschön. Sehr', 'Ich arbeite gerne in NYC. Ich arbeite in NYC.']\r\n```\r\n### and this\r\n```python\r\npython3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules docs/source/en/model_doc/tapex.mdx::tapex.mdx -sv --doctest-continue-on-failure --doctest-glob=\"*.mdx\"\r\n```\r\ngives error\r\n```bash\r\nExpected:\r\n [' 53', ' george clooney', ' brad pitt']\r\nGot:\r\n [' 53 lithuania, french montana, french montana, french montana, french montana, french montana ...(very long non-sense string)]\r\n```",
"@ydshieh thanks, ah i see the issue 😓 . we're not carrying over the `unfinished_sequences`\r\n\r\nmaking a fix here: https://github.com/huggingface/transformers/pull/21529"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/20727 for using multiple `eos_token_id`s
## Small repro
```python
import math
import torch

eos_token_id = [797, 641]  # definition added so the snippet runs standalone
unfinished_sequences = torch.tensor([1, 1, 1])
next_tokens = torch.tensor([797, 641, 98])
unfinished_sequences.mul((math.prod(next_tokens != i for i in eos_token_id)).long())
```
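The torch-native form suggested in review (see the comments above) computes the same mask without the Python-level loop:
```python
import torch

eos_token_id = torch.tensor([797, 641])
unfinished_sequences = torch.tensor([1, 1, 1])
next_tokens = torch.tensor([797, 641, 98])

# 1 where next_tokens differs from every eos id, 0 where a sequence just finished.
not_eos = next_tokens.tile((eos_token_id.shape[0], 1)).ne(eos_token_id.unsqueeze(1)).prod(dim=0)
unfinished_sequences = unfinished_sequences.mul(not_eos).long()
print(unfinished_sequences)  # tensor([0, 0, 1])
```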
## Error
if you run
```python
from transformers import pipeline
generator = pipeline('text-generation', 'gpt2')
generator('hello', eos_token_id=[628, 198], do_sample=True, num_return_sequences=3)
```
then it errors
```python
input = tensor([[-32]])
weight = Parameter containing:
tensor([[-0.0206, 0.0125, -0.0289, ..., 0.0018, -0.0300, 0.0111],
[-0.0239, -0.0158,...0, 0.0075, 0.0113],
[-0.0177, -0.0268, 0.0023, ..., 0.0135, 0.0077, -0.0042]],
requires_grad=True)
padding_idx = -1, max_norm = None, norm_type = 2.0, scale_grad_by_freq = False, sparse = False
...
if has_torch_function_variadic(input, weight):
return handle_torch_function(
embedding,
(input, weight),
input,
weight,
padding_idx=padding_idx,
max_norm=max_norm,
norm_type=norm_type,
scale_grad_by_freq=scale_grad_by_freq,
sparse=sparse,
)
if padding_idx is not None:
if padding_idx > 0:
assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings"
elif padding_idx < 0:
assert padding_idx >= -weight.size(0), "Padding_idx must be within num_embeddings"
padding_idx = weight.size(0) + padding_idx
else:
padding_idx = -1
if max_norm is not None:
# Note [embedding_renorm contiguous]
# `embedding_renorm_` will call .contiguous() on input anyways, so we
# call it here and take advantage of the improved locality in the
# `embedding` call below too.
input = input.contiguous()
# Note [embedding_renorm set_grad_enabled]
# XXX: equivalent to
# with torch.no_grad():
# torch.embedding_renorm_
# remove once script supports set_grad_enabled
_no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
E IndexError: index out of range in self
venv/lib/python3.8/site-packages/torch/nn/functional.py:2210: IndexError
```
## Tests
```
pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_eos_token_id_int_and_list_greedy_search --disable-warnings -vv
pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_eos_token_id_int_and_list_contrastive_search --disable-warnings -vv
pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_eos_token_id_int_and_list_top_k_top_sampling --disable-warnings -vv
pytest tests/generation/test_utils.py::GenerationIntegrationTests::test_eos_token_id_int_and_list_beam_search --disable-warnings -vv
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21461/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21461",
"html_url": "https://github.com/huggingface/transformers/pull/21461",
"diff_url": "https://github.com/huggingface/transformers/pull/21461.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21461.patch",
"merged_at": 1675882126000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21460
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21460/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21460/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21460/events
|
https://github.com/huggingface/transformers/pull/21460
| 1,571,567,582
|
PR_kwDOCUB6oc5JSZuP
| 21,460
|
Fix `SpeechT5ForSpeechToSpeechIntegrationTests` device issue
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah yes, nice catch. I don't have merge rights though.\r\n\r\nCould you also fix this in modeling_speecht5? I think that has the same issue. Around line 2871:\r\n\r\n```python\r\n if speaker_embeddings is None:\r\n speaker_embeddings = torch.zeros((1, 512), device=input_values.device)\r\n```\r\n\r\nThanks!\r\n",
"> Could you also fix this in modeling_speecht5? I think that has the same issue. Around line 2871:\r\n\r\nDone!\r\n\r\n> Ah yes, nice catch. I don't have merge rights though.\r\n\r\nYou have approval right :-) @hollance Then I can merge 🚀 \r\n\r\n"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Just a torch device issue being fixed.
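Concretely (per the review discussion), the pattern is to allocate fallback tensors on the inputs' device instead of the default CPU device, e.g.:
```python
import torch

input_values = torch.rand(1, 16000)  # illustrative inputs; may live on GPU
# Fallback speaker embeddings created on the same device as the inputs.
speaker_embeddings = torch.zeros((1, 512), device=input_values.device)
```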
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21460/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21460",
"html_url": "https://github.com/huggingface/transformers/pull/21460",
"diff_url": "https://github.com/huggingface/transformers/pull/21460.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21460.patch",
"merged_at": 1675676588000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21459
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21459/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21459/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21459/events
|
https://github.com/huggingface/transformers/pull/21459
| 1,571,382,057
|
PR_kwDOCUB6oc5JR0N9
| 21,459
|
adding a tip for deepspeed integration in multi-node environment
|
{
"login": "izapolsk",
"id": 21039333,
"node_id": "MDQ6VXNlcjIxMDM5MzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/21039333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/izapolsk",
"html_url": "https://github.com/izapolsk",
"followers_url": "https://api.github.com/users/izapolsk/followers",
"following_url": "https://api.github.com/users/izapolsk/following{/other_user}",
"gists_url": "https://api.github.com/users/izapolsk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/izapolsk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izapolsk/subscriptions",
"organizations_url": "https://api.github.com/users/izapolsk/orgs",
"repos_url": "https://api.github.com/users/izapolsk/repos",
"events_url": "https://api.github.com/users/izapolsk/events{/privacy}",
"received_events_url": "https://api.github.com/users/izapolsk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@stas00, could you please review ? ^^",
"_The documentation is not available anymore as the PR was closed or merged._",
"Maybe, it's also worth adding `use_node_local_storage: true` when `save_on_each_node=True` and use_node_local_storage isn't defined in deepspeed config file.",
"Great additions, @izapolsk!\r\n\r\n> Maybe, it's also worth adding use_node_local_storage: true when save_on_each_node=True and use_node_local_storage isn't defined in deepspeed config file.\r\n\r\nyes, please and thank you!\r\n\r\nAlso if you'd like it might be a good idea to update the doc to replace `torch.distributed.launch` with the new API of `torch.distributed.run` as the former is deprecated now since about a year. If you want to that is - if not, no worries, I can update it later.\r\n\r\nWe can also say expand your note that any launcher can be used, including `accelerate` I think (need to check though). i.e. the launcher is independent from the program it runs.",
"I'll do. thank you ",
"Please let me know when you finished editing and I will add a few more notes - as some users will be still using `launch`, so we should mentioned both. ",
"@stas00, please review",
"Excellent integration addition, @izapolsk - thank you for the initiative.\r\n\r\nI took this opportunity to expand much further these sections, as it has been long overdue! Hope you don't mind that I did it in your PR. All your content is there, I just expanded the content a lot more.\r\n\r\nPlease let me know if it looks good to you and if you have any suggestions to make. (please note that I reverted to using the full `torch.distributed.run` way since it's easier for users who are transitioning from `torch.distributed.launch`)\r\n\r\np.s. I also edited the OP to reflect the changes.",
"awesome, thank you !"
] | 1,675
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
This PR
1. adds a tip for training in a multi-node environment with DeepSpeed without a shared filesystem
2. automatically configures deepspeed to inject:
```
{
"checkpoint": {
"use_node_local_storage": true
}
}
```
when `--save_on_each_node` is passed.
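A minimal sketch (an illustration, not the actual integration code) of how the injection behaves, assuming the key is absent from the user config:
```python
save_on_each_node = True  # stands in for TrainingArguments.save_on_each_node
ds_config = {}            # user-provided DeepSpeed config (illustrative)

if save_on_each_node:
    # Only set the key if the user hasn't configured it themselves.
    ds_config.setdefault("checkpoint", {}).setdefault("use_node_local_storage", True)

print(ds_config)  # {'checkpoint': {'use_node_local_storage': True}}
```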
-------------------
note from @stas00: I took this opportunity to expand much further these sections, as it has been long overdue!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21459/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21459",
"html_url": "https://github.com/huggingface/transformers/pull/21459",
"diff_url": "https://github.com/huggingface/transformers/pull/21459.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21459.patch",
"merged_at": 1676038377000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21458
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21458/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21458/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21458/events
|
https://github.com/huggingface/transformers/pull/21458
| 1,571,089,662
|
PR_kwDOCUB6oc5JRI9v
| 21,458
|
[i18n-fr] Translate index page to French
|
{
"login": "NoB0",
"id": 28621493,
"node_id": "MDQ6VXNlcjI4NjIxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/28621493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoB0",
"html_url": "https://github.com/NoB0",
"followers_url": "https://api.github.com/users/NoB0/followers",
"following_url": "https://api.github.com/users/NoB0/following{/other_user}",
"gists_url": "https://api.github.com/users/NoB0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoB0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoB0/subscriptions",
"organizations_url": "https://api.github.com/users/NoB0/orgs",
"repos_url": "https://api.github.com/users/NoB0/repos",
"events_url": "https://api.github.com/users/NoB0/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoB0/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `index.mdx` file of the documentation to French.
Part of #21456
Thank you in advance for your review.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, could you review this PR?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21458/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21458",
"html_url": "https://github.com/huggingface/transformers/pull/21458",
"diff_url": "https://github.com/huggingface/transformers/pull/21458.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21458.patch",
"merged_at": 1675704350000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21457
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21457/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21457/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21457/events
|
https://github.com/huggingface/transformers/pull/21457
| 1,571,083,479
|
PR_kwDOCUB6oc5JRHv7
| 21,457
|
Fix `PushToHubCallback` import in Share a model docs
|
{
"login": "ireneisdoomed",
"id": 45119610,
"node_id": "MDQ6VXNlcjQ1MTE5NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/45119610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ireneisdoomed",
"html_url": "https://github.com/ireneisdoomed",
"followers_url": "https://api.github.com/users/ireneisdoomed/followers",
"following_url": "https://api.github.com/users/ireneisdoomed/following{/other_user}",
"gists_url": "https://api.github.com/users/ireneisdoomed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ireneisdoomed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ireneisdoomed/subscriptions",
"organizations_url": "https://api.github.com/users/ireneisdoomed/orgs",
"repos_url": "https://api.github.com/users/ireneisdoomed/repos",
"events_url": "https://api.github.com/users/ireneisdoomed/events{/privacy}",
"received_events_url": "https://api.github.com/users/ireneisdoomed/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a typo in the Share a model docs section. The example pushing a TensorFlow model to the Hub used to import `PushToHubCallback` from `transformers.keras.callbacks`, resulting in an `ImportError`.
This PR corrects that example in all languages so that `PushToHubCallback` is imported directly from `transformers`.
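For illustration, the corrected import (model ids and paths below are placeholders):
```python
from transformers import AutoTokenizer, PushToHubCallback  # not transformers.keras.callbacks

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
push_to_hub_callback = PushToHubCallback(output_dir="./my_model", tokenizer=tokenizer)
# model.fit(..., callbacks=[push_to_hub_callback])
```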
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
# Who can review?
@sgugger, @stevhliu and @MKhalusova
Thank you!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21457/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21457/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21457",
"html_url": "https://github.com/huggingface/transformers/pull/21457",
"diff_url": "https://github.com/huggingface/transformers/pull/21457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21457.patch",
"merged_at": 1675693583000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21456
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21456/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21456/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21456/events
|
https://github.com/huggingface/transformers/issues/21456
| 1,570,993,422
|
I_kwDOCUB6oc5do3UO
| 21,456
|
[i18n-fr] Translating docs to fr
|
{
"login": "NoB0",
"id": 28621493,
"node_id": "MDQ6VXNlcjI4NjIxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/28621493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoB0",
"html_url": "https://github.com/NoB0",
"followers_url": "https://api.github.com/users/NoB0/followers",
"following_url": "https://api.github.com/users/NoB0/following{/other_user}",
"gists_url": "https://api.github.com/users/NoB0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoB0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoB0/subscriptions",
"organizations_url": "https://api.github.com/users/NoB0/orgs",
"repos_url": "https://api.github.com/users/NoB0/repos",
"events_url": "https://api.github.com/users/NoB0/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoB0/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"Question for @ArthurZucker and/or @sgugger\r\nI am currently translating the autoclass tutorial and am not sure about the best translation for the terms `checkpoint` and `feature`. \r\nThe literal translation would be `point de contrôle` and `caractéristiques` respectively, however, I wonder if they are spot on. Should I even translate these standard terms?\r\nI would appreciate to have another opinion on this 😄 ",
"Ohhhhh no 🤣 Checkpoint would be `les poids` feature would be `les élément caractéristiques` maybe? But I never had ML in french so no idea 😅 ",
"Same for me, I agree that `poids` is better for that context. After looking at few ML courses, I would say that `caractéristique` is the appropriate term for `feature`. However, I propose to put feature in parentheses as it is a standard term that may be used as is in French.",
"Alright! "
] | 1,675
| 1,704
| null |
CONTRIBUTOR
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the french-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
- [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) (https://github.com/huggingface/transformers/pull/21458)
- [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx)(https://github.com/huggingface/transformers/pull/21589)
- [x] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx)(https://github.com/huggingface/transformers/pull/27657)
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)(https://github.com/huggingface/transformers/pull/28359)
- [x] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx)(https://github.com/huggingface/transformers/pull/27659)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)(https://github.com/huggingface/transformers/pull/28418)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go 🔥
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21456/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21456/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21455
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21455/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21455/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21455/events
|
https://github.com/huggingface/transformers/pull/21455
| 1,570,958,857
|
PR_kwDOCUB6oc5JQusw
| 21,455
|
Fix Whisper Positional Embeddings when using decoder context
|
{
"login": "andyehrenberg",
"id": 32784181,
"node_id": "MDQ6VXNlcjMyNzg0MTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/32784181?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andyehrenberg",
"html_url": "https://github.com/andyehrenberg",
"followers_url": "https://api.github.com/users/andyehrenberg/followers",
"following_url": "https://api.github.com/users/andyehrenberg/following{/other_user}",
"gists_url": "https://api.github.com/users/andyehrenberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andyehrenberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andyehrenberg/subscriptions",
"organizations_url": "https://api.github.com/users/andyehrenberg/orgs",
"repos_url": "https://api.github.com/users/andyehrenberg/repos",
"events_url": "https://api.github.com/users/andyehrenberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/andyehrenberg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21455). All of your documentation changes will be reflected on that endpoint.",
"cc @ArthurZucker ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"is this getting in or really not needed?",
"Seems to work well without 😉 Also not sure if the updates on whisper fixed the original issue, would have to check! "
] | 1,675
| 1,681
| 1,679
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21455/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21455/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21455",
"html_url": "https://github.com/huggingface/transformers/pull/21455",
"diff_url": "https://github.com/huggingface/transformers/pull/21455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21455.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21454
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21454/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21454/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21454/events
|
https://github.com/huggingface/transformers/pull/21454
| 1,570,891,767
|
PR_kwDOCUB6oc5JQg-R
| 21,454
|
Generate: TF can now accept custom logits processors
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
TF generation test addition PR 1 (out of ???).
In an effort to move generation integration tests to be framework-agnostic, I'll be adding low-hanging fruit to TF. This PR brings custom logits processors to TF `.generate()`. The code added is almost copy-paste from PT.
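As a rough illustration of what this enables, a sketch using the TF processor classes (the checkpoint and the choice of processor are illustrative, not taken from the PR):
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM
from transformers import TFLogitsProcessorList, TFMinLengthLogitsProcessor

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("TF generation now accepts", return_tensors="tf")
# Build a custom processor list and hand it to generate(), as this PR allows:
custom_processors = TFLogitsProcessorList(
    [TFMinLengthLogitsProcessor(min_length=20, eos_token_id=model.config.eos_token_id)]
)
outputs = model.generate(**inputs, logits_processor=custom_processors)
```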
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21454/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21454",
"html_url": "https://github.com/huggingface/transformers/pull/21454",
"diff_url": "https://github.com/huggingface/transformers/pull/21454.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21454.patch",
"merged_at": 1675698288000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21453
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21453/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21453/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21453/events
|
https://github.com/huggingface/transformers/pull/21453
| 1,570,819,288
|
PR_kwDOCUB6oc5JQSHy
| 21,453
|
A new test to check config attributes being used
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger Ready for review - The failing tests will be addressed (by adding specific rules in subclasses) once the PR is approved 🙏 .",
"The report looks like (if unused attributes are detected)\r\n```bash\r\nValueError: The following configuration classes contain unused attributes in the corresponding modeling files:\r\nCLIPSegConfig: ['decoder_attention_dropout', 'decoder_hidden_act']\r\n...\r\nDinatConfig: ['patch_norm']\r\n...\r\n```",
"> Thanks for iterating! Just left a couple more comments.\r\n> \r\n> As you noticed the job is failing right now, are you planning to add everything in the special map or to fix all the attributes not used?\r\n\r\nYes, that's mentioned in the description/comment (in previous version, it's better doing so, but with this new version you suggested, I can indeed add them earlier though).\r\n\r\nFYI: the special map I will update will contain\r\n- some confirmed allowed cases (i.e. we know the reasons and nothing we can do but just allow)\r\n- some skipped cases **for now** to allow temporarily with #TODO comment",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21453). All of your documentation changes will be reflected on that endpoint."
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Add a new test to check config attributes being used.
For edge cases, I only add rules to 2 files. If the concept is approved, **I will add more to pass CI**, and continue the work of cleaning up in follow-up PR(s).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21453/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21453",
"html_url": "https://github.com/huggingface/transformers/pull/21453",
"diff_url": "https://github.com/huggingface/transformers/pull/21453.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21453.patch",
"merged_at": 1675788571000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21452
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21452/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21452/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21452/events
|
https://github.com/huggingface/transformers/pull/21452
| 1,570,812,835
|
PR_kwDOCUB6oc5JQQ0D
| 21,452
|
Added documentation for DagsHubCallback
|
{
"login": "jinensetpal",
"id": 52078103,
"node_id": "MDQ6VXNlcjUyMDc4MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/52078103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinensetpal",
"html_url": "https://github.com/jinensetpal",
"followers_url": "https://api.github.com/users/jinensetpal/followers",
"following_url": "https://api.github.com/users/jinensetpal/following{/other_user}",
"gists_url": "https://api.github.com/users/jinensetpal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinensetpal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinensetpal/subscriptions",
"organizations_url": "https://api.github.com/users/jinensetpal/orgs",
"repos_url": "https://api.github.com/users/jinensetpal/repos",
"events_url": "https://api.github.com/users/jinensetpal/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinensetpal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds documentation for [DagsHubCallback](https://github.com/huggingface/transformers/blob/59d5edef34ae0fa56065a2e863736d4f133c558b/src/transformers/integrations.py#L1054-L1100)!
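For quick reference, a hedged sketch of attaching the callback manually (assumes the `dagshub` package is installed and authenticated; the `trainer` object is an existing `transformers.Trainer` instance whose construction is elided):
```python
from transformers.integrations import DagsHubCallback

# Sketch: register the callback on an already-configured Trainer.
trainer.add_callback(DagsHubCallback)
trainer.train()
```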
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, please and thank you! 🙂
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21452/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21452",
"html_url": "https://github.com/huggingface/transformers/pull/21452",
"diff_url": "https://github.com/huggingface/transformers/pull/21452.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21452.patch",
"merged_at": 1675693459000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21451
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21451/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21451/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21451/events
|
https://github.com/huggingface/transformers/issues/21451
| 1,570,634,852
|
I_kwDOCUB6oc5dnfxk
| 21,451
|
AutomaticSpeechRecognitionPipeline throws dict key error even with the correct keys
|
{
"login": "dsingal0",
"id": 41652974,
"node_id": "MDQ6VXNlcjQxNjUyOTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/41652974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsingal0",
"html_url": "https://github.com/dsingal0",
"followers_url": "https://api.github.com/users/dsingal0/followers",
"following_url": "https://api.github.com/users/dsingal0/following{/other_user}",
"gists_url": "https://api.github.com/users/dsingal0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsingal0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsingal0/subscriptions",
"organizations_url": "https://api.github.com/users/dsingal0/orgs",
"repos_url": "https://api.github.com/users/dsingal0/repos",
"events_url": "https://api.github.com/users/dsingal0/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsingal0/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil and @sanchit-gandhi ",
"This is just because the `sample` is consumed when passed to the pipeline:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import pipeline\r\nfrom datasets import load_dataset\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\n\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=\"openai/whisper-small.en\",\r\n chunk_length_s=30,\r\n device=device,\r\n)\r\n\r\nds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nsample = ds[0][\"audio\"]\r\n\r\nprediction = pipe(sample.copy())[\"text\"] # <------------CHANGE HERE\r\n\r\n# we can also return timestamps for the predictions\r\nprediction = pipe(sample, return_timestamps=True)[\"chunks\"]\r\n```\r\n\r\nThis should work.\r\n\r\n@sgugger we could remove that by cloning everything ourselves, but it forces a copy of the entire audio array when passed to the pipeline. We have even more subtle modifications where we don't copy, but we do need to pass *some* keys for live inference (used by the API at least) where we pass extra keys as-is to the caller so it can know how to handle the temporary results (since pipeline is stateless it's cumbersome to deal with those by other means).",
"Hey @dsingal0!\r\n\r\nLooks like this code snippet was taken from the Whisper small.en README example which I added last week: https://huggingface.co/openai/whisper-small.en#long-form-transcription\r\n\r\nI've updated the model README with @Narsil's fix: https://huggingface.co/openai/whisper-small.en/commit/d34e5b8002f2524cb84680607caa2f802de266cd (and all other Whisper model READMEs accordingly)\r\n\r\nFeel free to open issues/PRs on the Hugging Face Hub if a code example doesn't look right!\r\n\r\nWith regards to `pipeline`, this behaviour is potentially a bit confusing coming from the `model`/`processor` approach, since this way does not consume the input dict, allowing the user to re-use inputs as they wish. This is ok with me provided it's suitably well explained in the docs!\r\n\r\nI think it would be more elegant if we had a copy-free approach in the processing that did not consume the audio inputs as we currently do (if feasible):\r\nhttps://github.com/huggingface/transformers/blob/3b9a1dc13209d0cab347bf2363d18963cc3f9194/src/transformers/pipelines/automatic_speech_recognition.py#L447",
"It's exactly as how I explained above we have 3 choices:\r\n\r\n\r\n- `consume` (current behavior) this makes samples non reusable. IMO the best choice since reusing is only likely to be used while exploring.\r\n- `copy`. This makes an extra copy of the audio. On small files it doesn't matter that much, but way too costly for hour long audio files.\r\n- `not-passthrough`. Do not pass extra keys around (like `partial` during live microphone inference). This makes this particular use case quite hard to work with (because of the statelessness of pipeline and it would be a breaking change).",
"Sure, thanks for clarifying! Happy to stick with `consume` in this case!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
ValueError: When passing a dictionary to AutomaticSpeechRecognitionPipeline, the dict needs to contain a "raw" key containing the numpy array representing the audio and a "sampling_rate" key, containing the sampling_rate associated with that array
reproduced by using the following code:
```python
import torch
from transformers import pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
pipe = pipeline(
"automatic-speech-recognition",
model="openai/whisper-small.en",
chunk_length_s=30,
device=device,
)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
prediction = pipe(sample)["text"]
# we can also return timestamps for the predictions
prediction = pipe(sample, return_timestamps=True)["chunks"]
```
versions:
torch==1.13.1
transformers==4.26.0
datasets==2.9.0
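A minimal sketch of the workaround discussed in the comments above — the pipeline consumes ("pops") keys from the input dict, so hand it a shallow copy when the sample must be reused:
```python
# Workaround: copy the dict for the first call so `sample` stays intact.
prediction = pipe(sample.copy())["text"]

# `sample` can then still be passed to the final consumer, e.g. for timestamps:
prediction = pipe(sample, return_timestamps=True)["chunks"]
```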
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21451/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21451/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21450
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21450/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21450/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21450/events
|
https://github.com/huggingface/transformers/pull/21450
| 1,570,545,110
|
PR_kwDOCUB6oc5JPXkv
| 21,450
|
Typos/fixes to link syntax
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I was just checking what that formatting looked like before committing to it!"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
Noticed a couple of small errors and incorrect link syntax in the TPU tutorial, sorry about that!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21450/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21450",
"html_url": "https://github.com/huggingface/transformers/pull/21450",
"diff_url": "https://github.com/huggingface/transformers/pull/21450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21450.patch",
"merged_at": 1675783160000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21449
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21449/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21449/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21449/events
|
https://github.com/huggingface/transformers/issues/21449
| 1,570,542,005
|
I_kwDOCUB6oc5dnJG1
| 21,449
|
Longformer FP16 training broken since transformers 4.21
|
{
"login": "geniki",
"id": 13801078,
"node_id": "MDQ6VXNlcjEzODAxMDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/13801078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geniki",
"html_url": "https://github.com/geniki",
"followers_url": "https://api.github.com/users/geniki/followers",
"following_url": "https://api.github.com/users/geniki/following{/other_user}",
"gists_url": "https://api.github.com/users/geniki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geniki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geniki/subscriptions",
"organizations_url": "https://api.github.com/users/geniki/orgs",
"repos_url": "https://api.github.com/users/geniki/repos",
"events_url": "https://api.github.com/users/geniki/events{/privacy}",
"received_events_url": "https://api.github.com/users/geniki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @geniki Thank you for reporting the issue.\r\n\r\n> but the problem should be easy to reproduce with any Longformer + FP16 example\r\n\r\nIt would be really nice if you can provide an example script that could reproduce the issue you reported, especially you mentioned `should be easy to reproduce` 🙏 Looking forward for it!\r\n\r\n> some of which have been fixed one by one\r\n\r\nCould you remind me which PRs or commits fixed this issue 🙏 That will help a lot, thank you.",
"Thanks for your response @ydshieh. Here are some example where this issue has been addressed for other models: \r\nhttps://github.com/huggingface/transformers/pull/20605\r\nhttps://github.com/huggingface/transformers/pull/18057\r\nhttps://github.com/huggingface/transformers/pull/19229\r\nhttps://github.com/huggingface/transformers/pull/17437\r\n\r\nI'll try to make an online example with Longformer work somehow. Do you have any model training tests with small dummy data?",
"Hi @geniki You can take any dataset on HF Hub (that are for specific task you are working on), and select a subset of it (say the first 1024 examples).\r\n\r\nHowever, as you already know some fixes (in you above comment), would you like to try to experiment a fix for this model (with your own dataset, potentially a subset) and open a PR ❤️ ? If not, no worry, but in this case, as I mentioned, a script that could reproduce would be really nice 👍 \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
### System Info
transformers 4.20 / transformers 4.21
Ubuntu 20, python 3.8
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Apologies, I'm using my own dataset but the problem should be easy to reproduce with any Longformer + FP16 example. Upgrading from transformers 4.20 to 4.21 causes Longformer training loss to stay stuck around its initial value. When using transformers 4.20 + FP16 and transformers >= 4.21 + FP32, training loss declines as expected.
https://github.com/huggingface/transformers/pull/17306 seems to be what caused this. You can see on that issue that it affected other models too, some of which have been fixed one by one. Longformer is still affected as of transformers 4.26.
### Expected behavior
Be able to train Longformer using fp16 precision on recent version of transformers.
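For context, the PRs linked in the comments above share a common pattern — clamping fp16 activations away from infinity. A minimal sketch of that pattern (illustrative only, not the actual Longformer fix):
```python
import torch

def clamp_inf_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    # Pattern used by the linked fixes: keep float16 activations finite so the
    # loss does not degenerate under mixed-precision training.
    if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```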
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21449/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21448
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21448/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21448/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21448/events
|
https://github.com/huggingface/transformers/pull/21448
| 1,570,454,948
|
PR_kwDOCUB6oc5JPDvc
| 21,448
|
Deprecate parallelize API
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
This PR deprecates the parallelize API now that the big model API has been tested for a bit. Using `device_map="balanced"` in the call to `from_pretrained` will do the same thing as the API, and it's still possible to pass along a custom `device_map` (although they are not in the same format).
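A minimal sketch of the replacement usage (the checkpoint name is illustrative; `device_map` support requires the `accelerate` package):
```python
from transformers import AutoModelForSeq2SeqLM

# Instead of calling model.parallelize(), let from_pretrained split the
# model evenly across all visible GPUs:
model = AutoModelForSeq2SeqLM.from_pretrained("t5-3b", device_map="balanced")
```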
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21448/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21448",
"html_url": "https://github.com/huggingface/transformers/pull/21448",
"diff_url": "https://github.com/huggingface/transformers/pull/21448.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21448.patch",
"merged_at": 1675730354000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21447
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21447/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21447/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21447/events
|
https://github.com/huggingface/transformers/pull/21447
| 1,570,366,371
|
PR_kwDOCUB6oc5JOwHG
| 21,447
|
For IterableDataset, return DataLoader using self._train_batch_size. …
|
{
"login": "agossard",
"id": 76619631,
"node_id": "MDQ6VXNlcjc2NjE5NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/76619631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agossard",
"html_url": "https://github.com/agossard",
"followers_url": "https://api.github.com/users/agossard/followers",
"following_url": "https://api.github.com/users/agossard/following{/other_user}",
"gists_url": "https://api.github.com/users/agossard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agossard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agossard/subscriptions",
"organizations_url": "https://api.github.com/users/agossard/orgs",
"repos_url": "https://api.github.com/users/agossard/repos",
"events_url": "https://api.github.com/users/agossard/events{/privacy}",
"received_events_url": "https://api.github.com/users/agossard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
…This is consistent with how we create a regular DataLoader, and leads to the correct args.per_device_train_batch_size eventually ending up on each GPU.
Fixes #21444 (see issue comment 1416252207).
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21447/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21447",
"html_url": "https://github.com/huggingface/transformers/pull/21447",
"diff_url": "https://github.com/huggingface/transformers/pull/21447.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21447.patch",
"merged_at": 1675456368000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21446
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21446/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21446/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21446/events
|
https://github.com/huggingface/transformers/pull/21446
| 1,570,340,149
|
PR_kwDOCUB6oc5JOqcR
| 21,446
|
Added timesformer configuration
|
{
"login": "AdiaWu",
"id": 60185619,
"node_id": "MDQ6VXNlcjYwMTg1NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/60185619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdiaWu",
"html_url": "https://github.com/AdiaWu",
"followers_url": "https://api.github.com/users/AdiaWu/followers",
"following_url": "https://api.github.com/users/AdiaWu/following{/other_user}",
"gists_url": "https://api.github.com/users/AdiaWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdiaWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdiaWu/subscriptions",
"organizations_url": "https://api.github.com/users/AdiaWu/orgs",
"repos_url": "https://api.github.com/users/AdiaWu/repos",
"events_url": "https://api.github.com/users/AdiaWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdiaWu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @AdiaWu and @JuheonChu, thank you for the contribution 🚀 The doctest is confirmed to pass with this change, but there are 2 lines that should not be deleted in this PR. Once that part is reverted, we are ready to merge 💯 ",
"Hi @ydshieh Thank you very much for your advice, we will work on it now! ",
"Hi, @AdiaWu, \r\n\r\n[Update]\r\n\r\nYou created a new file `src/transformers/utils/documentation_tests.txt`. This should be removed, and we only need to add 1 line in the existing file `utils/documentation_tests.txt`.\r\n\r\n~~I am not sure if something changed since yesterday, but currently, the file `documentation_tests.txt` is completely modified, in particular in [this commit](https://github.com/AdiaWu/transformers/commit/657de28e023ff305fde3356e187b96c29bca1f07)~~\r\n\r\n~~Why that file is re-created in that commit? Before we can merge this PR, you will have to make that file clean - there should be only 1 line change instead of all file being changed.~~\r\n\r\n\r\n\r\n",
"I am sorry for the trouble. However, I only added line 172 and line 173 as you instructed yesterday. May I ask you what is the one line that you want me to revise? We will try to work on it again. ",
"Dear @ydshieh \r\nand from this site: https://github.com/huggingface/transformers/pull/21446/commits/82d5a6216e3c4596f2213dbee440f12bdaa35fcb. You can check that I only added two lines to the document... Not sure if somewhere went wrong.. \r\nAgian, sorry for the trouble. ",
"One commit before, there is unusual changes:\r\n\r\nhttps://github.com/huggingface/transformers/pull/21446/commits/657de28e023ff305fde3356e187b96c29bca1f07",
"@ydshieh Thank you! We will remove the file right away.",
"@ydshieh Hello, I just wanted to double check. So, what @JuheonChu and I should do is deleting \"`src/transformers/utils/documentation_tests.txt`\" file. Are we understanding correctly?",
"> @ydshieh Hello, I just wanted to double check. So, what @JuheonChu and I should do is deleting \"`src/transformers/utils/documentation_tests.txt`\" file. Are we understanding correctly?\r\n\r\nYes",
"Hi @AdiaWu If you check on the changed file page\r\n\r\nhttps://github.com/huggingface/transformers/pull/21446/files\r\n\r\nIt shows the file `utils/documentation_tests.txt` is not there (or points to `src/transformers/utils/documentation_tests.txt`.\r\n\r\nI suggest you verify the file locally, make sure `utils/documentation_tests.txt` exist, not a symbolic link to other files, and with the expected one-line change you want to add for this PR 🙏.",
"@ydshieh Is the file `utils/documentation_tests.txt` already in our commits? ",
"Dear @ydshieh , \r\nSince the file \"utils/documentation_tests.txt\" is missing. @JuheonChu and I just updated the \"utils/documentation_tests.txt\". There should be no problem with the file now. Please check it whenever you are available and see if there are still some errors this time. ",
"Hi @AdiaWu \r\n\r\n- The file `src/transformers/utils/documentation_tests.txt` should not be added.\r\n- The file `utils/documentation_tests.txt` should not be deleted.\r\n\r\nIt seems at some point of your commits, you have done something to these 2 files.\r\n\r\nWhat needs to be done is:\r\n\r\n- remove the newly added file `src/transformers/utils/documentation_tests.txt`\r\n- make sure `utils/documentation_tests.txt` exist, and is not a symbolic link to any other file\r\n- make sure `utils/documentation_tests.txt` is updated with one line `src/transformers/models/timesformer/configuration_timesformer.py`\r\n\r\nI hope this is clear and we can merge the PR once everything is fine, thank you.\r\nI can help to resolve this issue if necessary, please let me know :-) ",
"Dear @ydshieh , I am sorry for misunderstanding your instructions, we have now deleted the \"src/transformers/utils/documentation_tests.txt\" file, and the \"utils/documentation_tests.txt\" is currently in the path with only one line added. I hope this time this PR should be all good and ready to be merged. Thank you very much! ",
"> Dear @ydshieh , I am sorry for misunderstanding your instructions, we have now deleted the \"src/transformers/utils/documentation_tests.txt\" file, and the \"utils/documentation_tests.txt\" is currently in the path with only one line added. I hope this time this PR should be all good and ready to be merged. Thank you very much!\r\n\r\n@ydshieh Thank you for your guidance, and do you mind if you can verify it?",
"No worry🤗, and yes it is good now🚀 Thank you so much for making it and the contribution ❤️",
"> No worry🤗, and yes it is good now🚀 Thank you so much for making it and the contribution ❤️\r\n\r\nThank you for your patience! "
] | 1,675
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
Co-authored-by: JuheonChu <chuj@dickinson.edu>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19487
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21446/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21446",
"html_url": "https://github.com/huggingface/transformers/pull/21446",
"diff_url": "https://github.com/huggingface/transformers/pull/21446.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21446.patch",
"merged_at": 1676066081000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21445
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21445/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21445/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21445/events
|
https://github.com/huggingface/transformers/pull/21445
| 1,570,310,693
|
PR_kwDOCUB6oc5JOkEM
| 21,445
|
Avoid flaky generation sampling tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Avoid the CI failure
```bash
tests/models/switch_transformers/test_modeling_switch_transformers.py::SwitchTransformersModelTest::test_beam_sample_generate_dict_output
(line 3099) RuntimeError: probability tensor contains either inf, nan or element < 0
```
For
```bash
tests/models/marian/test_modeling_marian.py::MarianStandaloneDecoderModelTest::test_sample_generate
(line 2482) RuntimeError: probability tensor contains either inf, nan or element < 0
```
it's not clear what I can change in https://github.com/huggingface/transformers/blob/6c62cfb2eff095c181481d8ae86c7f836b65d2d7/tests/generation/test_utils.py#L108-L155
I changed to `logits_warper_kwargs, logits_warper = self._get_warper_and_kwargs(num_beams=2)` even though this is not a beam sampling test.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21445/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21445",
"html_url": "https://github.com/huggingface/transformers/pull/21445",
"diff_url": "https://github.com/huggingface/transformers/pull/21445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21445.patch",
"merged_at": 1675458085000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21444
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21444/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21444/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21444/events
|
https://github.com/huggingface/transformers/issues/21444
| 1,570,272,257
|
I_kwDOCUB6oc5dmHQB
| 21,444
|
Trainer get_train_dataloader creates wrong batch size when using IterableDataset and multi-gpu training on single machine
|
{
"login": "agossard",
"id": 76619631,
"node_id": "MDQ6VXNlcjc2NjE5NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/76619631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agossard",
"html_url": "https://github.com/agossard",
"followers_url": "https://api.github.com/users/agossard/followers",
"following_url": "https://api.github.com/users/agossard/following{/other_user}",
"gists_url": "https://api.github.com/users/agossard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agossard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agossard/subscriptions",
"organizations_url": "https://api.github.com/users/agossard/orgs",
"repos_url": "https://api.github.com/users/agossard/repos",
"events_url": "https://api.github.com/users/agossard/events{/privacy}",
"received_events_url": "https://api.github.com/users/agossard/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Sounds like the `self.args.per_device_train_batch_size` should be `self._train_batch_size` indeed. Do you want to open a PR?\r\n\r\nAs an aside, using DataParallel is not the recommended way to run a multiple GPUs by PyTorch, you should launch your training script with `torchrun`",
"Thanks, Sylvain. I issue the pull request. My first time doing so, so hope I did it OK!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### System Info
@sgugger
I'm not sure if I'm missing something here or not. But I am doing masked language modeling with RobertaForMaskedLM and working in pytorch on an AWS machine with 8 V100s. I set args.per_device_train_batch_size=32. If I train with a regular Dataset object, the data loader will produce a big batch of 32 * 8 = 256 examples each time, and then they will be split up and sent to each GPU in batches of 32 as expected. But if I switch to an IterableDataset, I end up with the DataLoader producing batches of 32, which get split into batches of 4 sent to each GPU.
This happens because of this code in Trainer.get_train_dataloader. If we have an iterable Dataset, we end up creating a DataLoader based on **per_device_train_batch_size** (which is 32). But if we have any other type of dataset, we create a DataLoader with self.**_train_batch_size** (which is 256). I confess I don't know what the first `if self.args.world_size > 1` block is supposed to be doing, but that doesn't get executed in my situation (running on a single machine with multiple GPUs).
Am I doing something wrong, or is this a bug?
Thanks,
Andy
```python
if isinstance(train_dataset, torch.utils.data.IterableDataset):
    if self.args.world_size > 1:
        train_dataset = IterableDatasetShard(
            train_dataset,
            batch_size=self._train_batch_size,
            drop_last=self.args.dataloader_drop_last,
            num_processes=self.args.world_size,
            process_index=self.args.process_index,
        )

    return DataLoader(
        train_dataset,
        batch_size=self.args.per_device_train_batch_size,  # <-- uses the per-device size (32)
        collate_fn=data_collator,
        num_workers=self.args.dataloader_num_workers,
        pin_memory=self.args.dataloader_pin_memory,
    )

train_sampler = self._get_train_sampler()

return DataLoader(
    train_dataset,
    batch_size=self._train_batch_size,  # <-- uses the full per_device * n_gpu size (256)
    sampler=train_sampler,
    collate_fn=data_collator,
    drop_last=self.args.dataloader_drop_last,
    num_workers=self.args.dataloader_num_workers,
    pin_memory=self.args.dataloader_pin_memory,
    worker_init_fn=seed_worker,
)
```
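A sketch of the one-line fix suggested in the comments above, mirroring the non-iterable branch (this matches the change merged in #21447):
```python
# IterableDataset branch after the fix — use the resolved total batch size:
return DataLoader(
    train_dataset,
    batch_size=self._train_batch_size,  # was self.args.per_device_train_batch_size
    collate_fn=data_collator,
    num_workers=self.args.dataloader_num_workers,
    pin_memory=self.args.dataloader_pin_memory,
)
```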
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Use a pytorch model on a single machine with multiple GPUs
2. Set TrainingArguments.per_device_train_batch_size=32
3. Create a regular dataset in memory from a pandas data frame (or whatever)
4. Put a breakpoint (or debugging statement) in the forward pass of the model to print out inputs.shape -> verify that the first dimension is 32
5. Now create an IterableDataset and run again
6. See that inputs.shape has a first dimension of 4
### Expected behavior
The train batch size should be the same whether using regular or IterableDataset
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21444/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21443
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21443/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21443/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21443/events
|
https://github.com/huggingface/transformers/pull/21443
| 1,570,265,994
|
PR_kwDOCUB6oc5JOadi
| 21,443
|
[CI] Remove `past` in favor of `past_key_values`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Other tests might also pass! ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Git overides the `use_cache` arguments (only if labels are provided) to `False` see [here](https://github.com/ArthurZucker/transformers/blob/c2f9aacee9326b9db886036497a4d157666cb040/src/transformers/models/git/modeling_git.py#L1475-L1476). When generating, `use_cache` is set to false, but when we run `model.group_beam_search`, the ` self.prepare_inputs_for_generation(input_ids, **model_kwargs)` method forces `use_cache` to True see [here ](https://github.com/ArthurZucker/transformers/blob/c2f9aacee9326b9db886036497a4d157666cb040/src/transformers/models/git/modeling_git.py#L1516-L1534). \r\nEDIT: just update this tests will pass now. ",
"The failing tests are unrelated, will merge"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Related to #20944, the `past` argument was removed.
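For context, a minimal sketch of the renamed kwarg in use (the model and inputs here are placeholders, not code from this PR):
```python
# First forward pass caches the attention keys/values:
outputs = model(input_ids, use_cache=True)
# Later steps pass the cache back under the new name:
next_outputs = model(next_input_ids, past_key_values=outputs.past_key_values)
```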
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21443/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21443",
"html_url": "https://github.com/huggingface/transformers/pull/21443",
"diff_url": "https://github.com/huggingface/transformers/pull/21443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21443.patch",
"merged_at": 1675759896000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21442
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21442/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21442/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21442/events
|
https://github.com/huggingface/transformers/pull/21442
| 1,570,095,472
|
PR_kwDOCUB6oc5JN1Tn
| 21,442
|
Draft Pull request
|
{
"login": "JuheonChu",
"id": 35699839,
"node_id": "MDQ6VXNlcjM1Njk5ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuheonChu",
"html_url": "https://github.com/JuheonChu",
"followers_url": "https://api.github.com/users/JuheonChu/followers",
"following_url": "https://api.github.com/users/JuheonChu/following{/other_user}",
"gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions",
"organizations_url": "https://api.github.com/users/JuheonChu/orgs",
"repos_url": "https://api.github.com/users/JuheonChu/repos",
"events_url": "https://api.github.com/users/JuheonChu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuheonChu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21442/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21442",
"html_url": "https://github.com/huggingface/transformers/pull/21442",
"diff_url": "https://github.com/huggingface/transformers/pull/21442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21442.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21441
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21441/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21441/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21441/events
|
https://github.com/huggingface/transformers/pull/21441
| 1,570,046,139
|
PR_kwDOCUB6oc5JNqvx
| 21,441
|
Add BLIP-2
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger all comments are addressed, feel free to approve :)",
"@NielsRogge Curious, what is the timeline for this to make it into a stable release version?",
"Usually there's a Transformers release once every 1 to 2 months, so at the very least in March.",
"Hi, thanks for the great work! I'm running into problems trying to use this in the multigpu setting and saw this was mentioned by @younesbelkada earlier -- is there an issue to follow for that? (Specifically, in line 2765 of transformers->generation->utils.py, the devices don't match -- `Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:0!` because beam_scores is on cuda:0 while next_token_scores and next_token_scores_processed are on cuda:3 after using \"auto\" for the device_map when loading.)\r\n\r\nI'm also getting a weirder error where it causes a CUDA illegal memory access error for any model used downstream of it on GPU 0, even when it's given no GPU memory on GPU 0 in max_memory. (This doesn't occur for the original BLIP2, which I'm trying to migrate from.)",
"Same problem here @sachit-menon \r\n\"Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!\"\r\nhttps://github.com/TimDettmers/bitsandbytes/issues/153",
"Hi @sachit-menon @xszheng2020 \r\nThis is a known issue on my end, I can confirm this should be at least fixed for `blip2-opt` at https://github.com/huggingface/transformers/pull/21707\r\nCan you try to checkout from this branch and let us know on the PR if the fix works? thanks!",
"Hi, @younesbelkada \r\nthanks! will test it on blip2-opt to see whether it works!\r\nand hope the blip2-flant5 could be fixed soon"
] | 1,675
| 1,676
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds BLIP-2 to the library.
To do:
- [x] make sure generation works exactly like the original implementation (maybe @gante can have a look here - based on the original code [here](https://github.com/salesforce/LAVIS/blob/main/lavis/models/blip2_models/blip2_opt.py#L207-L211)). Edit: seems to be solved by properly setting the `eos_token_id`!
- [x] add more tests for BLIP-2 with `AutoModelForSeq2SeqLM` once designed gets approved
- [x] transfer checkpoints, update integration tests
- [ ] make it possible to instantiate Blip2Config with config objects, rather than dicts (also check default text config) - will be done in a separate PR
cc @younesbelkada
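For reference, a minimal usage sketch of the classes added here (the checkpoint name is an assumption; any BLIP-2 checkpoint on the Hub should work):
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```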
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21441/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21441/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21441",
"html_url": "https://github.com/huggingface/transformers/pull/21441",
"diff_url": "https://github.com/huggingface/transformers/pull/21441.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21441.patch",
"merged_at": 1675957931000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21440
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21440/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21440/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21440/events
|
https://github.com/huggingface/transformers/issues/21440
| 1,569,882,242
|
I_kwDOCUB6oc5dkoCC
| 21,440
|

|
{
"login": "iamnmn9",
"id": 41872440,
"node_id": "MDQ6VXNlcjQxODcyNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/41872440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamnmn9",
"html_url": "https://github.com/iamnmn9",
"followers_url": "https://api.github.com/users/iamnmn9/followers",
"following_url": "https://api.github.com/users/iamnmn9/following{/other_user}",
"gists_url": "https://api.github.com/users/iamnmn9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamnmn9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamnmn9/subscriptions",
"organizations_url": "https://api.github.com/users/iamnmn9/orgs",
"repos_url": "https://api.github.com/users/iamnmn9/repos",
"events_url": "https://api.github.com/users/iamnmn9/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamnmn9/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please do not spam the repository by opening duplicate issues. You can find help to debug your training by posting on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs in the library and feature requests only. You will need to share how you are launching your script for anyone to be able to help."
] | 1,675
| 1,675
| 1,675
|
NONE
| null |

I have 8 GPUs in this machine.

I don't think it's using all 8 GPUs. I have already tried changing batch sizes, including multiples of 8.
_Originally posted by @namanpundir in https://github.com/huggingface/transformers/issues/21407#issuecomment-1415898415_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21440/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21439
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21439/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21439/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21439/events
|
https://github.com/huggingface/transformers/issues/21439
| 1,569,480,777
|
I_kwDOCUB6oc5djGBJ
| 21,439
|
BertTokenizer cannot properly tokenize words with dashes
|
{
"login": "poteminr",
"id": 38759021,
"node_id": "MDQ6VXNlcjM4NzU5MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/38759021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poteminr",
"html_url": "https://github.com/poteminr",
"followers_url": "https://api.github.com/users/poteminr/followers",
"following_url": "https://api.github.com/users/poteminr/following{/other_user}",
"gists_url": "https://api.github.com/users/poteminr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poteminr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poteminr/subscriptions",
"organizations_url": "https://api.github.com/users/poteminr/orgs",
"repos_url": "https://api.github.com/users/poteminr/repos",
"events_url": "https://api.github.com/users/poteminr/events{/privacy}",
"received_events_url": "https://api.github.com/users/poteminr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I am not sure what you would like us to do about that. If we change the tokenizer, the model will get inputs different from its training and thus won't perform as well. You should use a different model with a tokenizer that suits your needs :-)",
"This is not an issue on our side, it's just the way the WordPiece algorithm works, given the corpus that the BERT authors trained on.\r\n\r\nCheck out our course for more info on tokenization algorithms: https://huggingface.co/course/chapter6/1?fw=pt\r\n\r\nClosing this issue, feel free to reopen."
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The `BertTokenizer` doesn't tokenize words with dashes correctly. I tried to use the tokenizer for **Italian** and **English** and got unexpected results. This issue is similar to https://github.com/huggingface/transformers/issues/5136.
```python3
from transformers import AutoTokenizer, BertTokenizer
model1 = 'cointegrated/rubert-tiny2'
model2 = 'Babelscape/wikineural-multilingual-ner'
tokenizer = BertTokenizer.from_pretrained(model1, model_max_length=50)
>>> tokenizer.tokenize('so-called')
['so', '-', 'called']
>>> tokenizer.tokenize('era il figlio di ghazi-ud-din haidar.')
['era', 'il', 'fi', '##gli', '##o', 'di', 'gh', '##azi', '-', 'ud', '-', 'din', 'hai', '##dar', '.']
```
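For what it's worth, a small sketch showing where the split happens: the `BasicTokenizer` separates punctuation before WordPiece ever runs (the checkpoint name here is just an example):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# The pre-tokenization step already splits on the dash:
print(tokenizer.basic_tokenizer.tokenize("so-called"))  # ['so', '-', 'called']
# WordPiece then only ever sees the three pieces individually.
```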
### Expected behavior
Expected something like that:
```python3
>>> tokenizer.tokenize('so-called')
['so', '##-', '##called']
>>> tokenizer.tokenize('era il figlio di ghazi-ud-din haidar.')
['era', 'il', 'fi', '##gli', '##o', 'di', 'gh', '##azi', '##-', '##ud', '##-', '##din', 'hai', '##dar', '.']
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21439/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21438
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21438/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21438/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21438/events
|
https://github.com/huggingface/transformers/pull/21438
| 1,569,477,172
|
PR_kwDOCUB6oc5JLvNY
| 21,438
|
Fix device issue in a `ConvBertModelTest` test
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
CI failed after #21398
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21438/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21438",
"html_url": "https://github.com/huggingface/transformers/pull/21438",
"diff_url": "https://github.com/huggingface/transformers/pull/21438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21438.patch",
"merged_at": 1675433549000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21437
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21437/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21437/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21437/events
|
https://github.com/huggingface/transformers/issues/21437
| 1,569,470,135
|
I_kwDOCUB6oc5djDa3
| 21,437
|
BertTokenizer cannot properly tokenize words with dashes
|
{
"login": "poteminr",
"id": 38759021,
"node_id": "MDQ6VXNlcjM4NzU5MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/38759021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poteminr",
"html_url": "https://github.com/poteminr",
"followers_url": "https://api.github.com/users/poteminr/followers",
"following_url": "https://api.github.com/users/poteminr/following{/other_user}",
"gists_url": "https://api.github.com/users/poteminr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poteminr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poteminr/subscriptions",
"organizations_url": "https://api.github.com/users/poteminr/orgs",
"repos_url": "https://api.github.com/users/poteminr/repos",
"events_url": "https://api.github.com/users/poteminr/events{/privacy}",
"received_events_url": "https://api.github.com/users/poteminr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### System Info
The `BertTokenizer` doesn't tokenize words with dashes correctly. I tried to use the tokenizer for **Italian** and **English** and got unexpected results. This issue is similar to https://github.com/huggingface/transformers/issues/5136.
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python3
from transformers import AutoTokenizer, BertTokenizer
model1 = 'cointegrated/rubert-tiny2'
model2 = 'Babelscape/wikineural-multilingual-ner'
tokenizer = BertTokenizer.from_pretrained(model1, model_max_length=50)
>>> tokenizer.tokenize('so-called')
['so', '-', 'called']
>>> tokenizer.tokenize('era il figlio di ghazi-ud-din haidar.')
['era', 'il', 'fi', '##gli', '##o', 'di', 'gh', '##azi', '-', 'ud', '-', 'din', 'hai', '##dar', '.']
```
### Expected behavior
Expected something like that:
```python3
>>> tokenizer.tokenize('so-called')
['so', '##-', '##called']
>>> tokenizer.tokenize('era il figlio di ghazi-ud-din haidar.')
['era', 'il', 'fi', '##gli', '##o', 'di', 'gh', '##azi', '##-', '##ud', '##-', '##din', 'hai', '##dar', '.']
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21437/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21436
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21436/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21436/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21436/events
|
https://github.com/huggingface/transformers/pull/21436
| 1,569,369,503
|
PR_kwDOCUB6oc5JLYJB
| 21,436
|
exclude deleted files in the fixup script
|
{
"login": "dtuit",
"id": 8114067,
"node_id": "MDQ6VXNlcjgxMTQwNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8114067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dtuit",
"html_url": "https://github.com/dtuit",
"followers_url": "https://api.github.com/users/dtuit/followers",
"following_url": "https://api.github.com/users/dtuit/following{/other_user}",
"gists_url": "https://api.github.com/users/dtuit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dtuit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dtuit/subscriptions",
"organizations_url": "https://api.github.com/users/dtuit/orgs",
"repos_url": "https://api.github.com/users/dtuit/repos",
"events_url": "https://api.github.com/users/dtuit/events{/privacy}",
"received_events_url": "https://api.github.com/users/dtuit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Running `make fixup` after a file is deleted on a branch causes `black` to exit with an error.
`Error: Invalid value for 'SRC ...': Path '/path/to/deleted/file' does not exist.`
This PR resolves the error by setting the `git diff` flag `--diff-filter=d` to exclude deleted files from the list of modified files.
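For illustration, the resulting listing looks roughly like this (a sketch; the exact Makefile invocation may differ):
```
# Lowercase 'd' in --diff-filter excludes deleted files from the listing:
git diff --name-only --diff-filter=d main
```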
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21436/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21436",
"html_url": "https://github.com/huggingface/transformers/pull/21436",
"diff_url": "https://github.com/huggingface/transformers/pull/21436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21436.patch",
"merged_at": 1675447022000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21435
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21435/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21435/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21435/events
|
https://github.com/huggingface/transformers/pull/21435
| 1,569,303,617
|
PR_kwDOCUB6oc5JLJ_c
| 21,435
|
Make beam sample more robust
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Uhmmm... if all tokens in a batch have `-inf` scores, something has gone very wrong. `-inf` can only happen when some tokens are forbidden (e.g. through `NoBadWordsLogitsProcessor`), and if they are all forbidden then it means `.generate` is incorrectly parameterized. \r\n\r\nI think the runtime error is adequate in that situation -- perhaps we could make it an informative exception explaining the problem?",
"@gante Nothing is wrong, at least in the situation I observed:\r\n\r\n- The `logits_warper` I checked contains:\r\n - TemperatureLogitsWarper \r\n - TopKLogitsWarper \r\n - TopPLogitsWarper \r\n\r\nHere, \r\n - `TopKLogitsWarper`: keeps top 10 logits, set others to `-inf`\r\n - `TopPLogitsWarper `: keeps top 1 ([`<eos>` token]) logits, which has probability > top_p (= 0.7), and set others to `-inf`.\r\n \r\nThen in beam scores, it can't choose `<eos>`, so other tokens are chosen with scores `-inf`.\r\nIn the next iteration, the line\r\n```\r\nnext_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores)\r\n```\r\nintroduce the `all -inf` for some batch dimension, as `beam_scores` contains some `-inf`.\r\n\r\nThe `all -inf` comes from the fact the scores go through `TopPLogitsWarper` while `eos` has a higher probability > `top_p` and `min_tokens_to_keep = 1`.\r\n\r\nWDYT? ",
"@ydshieh I understand what causes it, but I still think we shouldn't change the code -- if this situation happens in production, it means that the user has selected a bad combination of model + processors/warpers. Allowing this behavior results in a silent failure, which is much worse than a crash :)\r\n\r\nThe solution should be a fix that would be applicable in real usage, i.e. fixing parameterization. For instance, changing `TopPLogitsWarper` to have `min_tokens_to_keep=2` would fix this issue (and is a potential solution if this problem was happening in an actual use case)",
"Well, I do agree some arguments, but I also don't think this is a real problem: Given a set of parametrization, the algorithm is to give the result which makes its own sense. In this case, the result (at some batch dimension) are with scores `-inf`, which totally makes sense: the users can verify the scores and decide what to do on their own.\r\n\r\n**IMO, as long as we define clearly the behavior, and document it, it is fine.**\r\n\r\n# Examples\r\n\r\nTaking one example (not 100% relevant): when we want to sample without replacement, but the places with positive probabilities > 0 are fewer than the sample size request.\r\n\r\n### PyTorch gives what you want, even those elements with probability 0 \r\n```python\r\nimport torch\r\ndevice = \"cpu\"\r\n\r\nprobs = torch.tensor([[1, 1, 0, 0, 0, 0]], device=device, dtype=torch.float)\r\no = torch.multinomial(probs, num_samples=4)\r\nprint(o)\r\n\r\n# tensor([[0, 1, 5, 4]])\r\n# The last 2 elements are meaningless\r\n# On CPU/GPU, the behavior seems different too!\r\n```\r\nWhile\r\n\r\n### NumPy throws an error\r\n```python\r\nimport numpy as np\r\n\r\no = np.random.choice(6, 4, replace=False, p=[1.0/3, 1.0/3, 1.0/3, 0, 0, 0])\r\n\r\n# ValueError: Fewer non-zero entries in p than size\r\n```",
"Also, thinking in batch mode, what if a user really want to use a fixed parametrization, but with it, some example gives error while all other examples could generate successfully? Force them to change the parametrization doesn't seem really good, and it is also not easy to determine beforehand which example will fail with this kind of error.",
"And yet another argument:\r\n\r\n> The solution should be a fix that would be applicable in real usage, i.e. fixing parameterization. For instance, changing TopPLogitsWarper to have min_tokens_to_keep=2 would fix this issue (and is a potential solution if this problem was happening in an actual use case)\r\n\r\nWell, how about a user have extra `NoBadWordsLogitsProcessor`, which have a bad word, and that one has the higher probability together with the `eos` token for some example(s) in the dataset? Then the generation will fail again, and to make it work, users have to change to `min_tokens_to_keep=2`, while all other examples (in validation/test datasets) all work with previous parametrization? Such failure in the middle of the process will be really annoying IMO.",
"@ydshieh The thing is, all the examples you pointed out won't happen unless the user has made a mistake. Beam search methods require at least two tokens per round to operate correctly, so `min_tokens_to_keep=2` should always be set in `beam_sample` (perhaps we can modify `.generate()` to set it by default when `num_beams>1`). \r\n\r\nAgain, if we merge this behavior, we expose ourselves to many silent failure modes, where the user will say \"the model is bad\"/\"HF's generate is wrong\" instead of being pointed at the root cause. Here are a few examples:\r\n1. logits processors that force tokens in a certain position with unfeasible constraints (e.g.`ForcedBOSTokenLogitsProcessor` + `ForceTokensLogitsProcessor`)\r\n2. logits processors that prevent tokens in a certain position / all positions with unfeasible constraints (e.g. `NoBadWordsLogitsProcessor`, `WhisperTimeStampLogitsProcessor`)\r\n3. A combination of the above\r\n\r\nI'm sorry, I'll be very stubborn against this change :) ",
 won't happen unless">
"> won't happen unless the user has made a mistake\r\n\r\nI don't agree with this. A parametrization may work very well for all examples in a dataset but fail on a single one\r\n   - it's really arguable what is right and wrong here\r\n   - to reiterate, this is much more annoying if one has to figure out what parametrization to change for a single/few examples, especially in a prod environment, where a crash should really be avoided\r\n   - the current implementation doesn't allow one to deal with such an error with try/except (they can, but doing this in batch mode while the failing case may just be a single example in that batch is annoying)\r\n\r\n> Again, if we merge this behavior, we expose ourselves to many silent failure modes, where the user will say \"the model is bad\"/\"HF's generate is wrong\" instead of being pointed at the root cause.\r\n\r\n- For users that are not developers + with no motivation to dive in + just want to complain:\r\n   - I am not sure a crash will make them change their mind and motivate them to dive in\r\n- For other users who are willing to debug\r\n   - Making sure we communicate clearly that the returned scores should be checked (at least when debugging/analyzing logs) should already be good enough\r\n \r\n> Beam search methods require at least two tokens per round to operate correctly, so `min_tokens_to_keep=2` should always be set in `beam_sample` (perhaps we can modify `.generate()` to set it by default when `num_beams>1`). \r\n\r\n- Hmm, the test I checked has `logits_warper_kwargs, logits_warper = self._get_warper_and_kwargs(num_beams=1)`, in a test named `test_beam_sample_generate_dict_output` 😕 . I can re-work the test, but this is a minor point in our discussion here.\r\n\r\n> I'm sorry, I'll be very stubborn against this change :) \r\n\r\nI understand, but it's nice to have a discussion anyway. Let's bring @sgugger and @patrickvonplaten into the discussion and see what they think :-)",
"Would adding warning around the changes made in this PR will change your mind, @gante ?",
"> Would adding warning around the changes made in this PR will change your mind, @gante ?\r\n\r\nHaha no, users ignore warnings (and documentation) most of the time 🙃 ",
"> > Would adding warning around the changes made in this PR will change your mind, @gante ?\r\n> \r\n> Haha no, users ignore warnings (and documentation) most of the time 🙃\r\n\r\nWell, I do agree with you (with this) 🙃🙃",
"I'd also be more on the error side, with a clearer error message and actionable reactions printed to the user. If we get to an all nan line in the batch, there is nothing we can really generate.",
"I also think that a runtime error should be thrown here ideally. **However** checking all values for `-inf` can seriously slow down generation, so let's maybe make sure to first test for a potential slow down. \r\n\r\nAlso to me it seems this PR is just to make the tests less flaky. Can we maybe instead try to relax parameters like `top_k` (instead of using default 50, we disable it) to minimize flakiness ? ",
"> I also think that a runtime error should be thrown here ideally. **However** checking all values for `-inf` can seriously slow down generation, so let's maybe make sure to first test for a potential slow down.\r\n> \r\n\r\nOK, I understand better now what you mean, but I left a comment above.\r\n\r\n> Also to me it seems this PR is just to make the tests less flaky. Can we maybe instead try to relax parameters like `top_k` (instead of using default 50, we disable it) to minimize flakiness ?\r\n\r\nThe flaky tests have been addressed in #21445\r\n\r\n",
"As we all agree that an error should be thrown with clear message, I am going to close this PR. The work on make more clear message should be done in another PR :-)"
] | 1,675
| 1,679
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Make beam sample more robust.
The probabilities (more generally, the scores) passed to `torch.multinomial` must not contain `nan`. However, the computation
```python
probs = nn.functional.softmax(next_token_scores, dim=-1)
```
could leave `next_token_scores` all `-inf` (along some batch dimension) due to the processing in the logit warpers and beam scorers.
This makes `probs` all `nan` along that batch dimension, and we sometimes get the following error in the sampling-style generation methods:
```
RuntimeError: probability tensor contains either inf, nan or element < 0
```
This PR makes sure the probability tensor passed to `torch.multinomial` is a valid input, while not affecting the existing generation process.
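A standalone repro of the failure mode (independent of the Trainer/generate code path):
```python
import torch
import torch.nn.functional as F

# A batch row where every token has been masked out by the warpers:
next_token_scores = torch.full((1, 5), float("-inf"))
probs = F.softmax(next_token_scores, dim=-1)  # exp(-inf) everywhere -> all nan
torch.multinomial(probs, num_samples=1)
# RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```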
Related PRs:
#17972
#18053
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21435/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21435",
"html_url": "https://github.com/huggingface/transformers/pull/21435",
"diff_url": "https://github.com/huggingface/transformers/pull/21435.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21435.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21434
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21434/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21434/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21434/events
|
https://github.com/huggingface/transformers/pull/21434
| 1,569,200,893
|
PR_kwDOCUB6oc5JKzss
| 21,434
|
add customizable ending learning rate arguments to warmup schedulers
|
{
"login": "NoTody",
"id": 88493484,
"node_id": "MDQ6VXNlcjg4NDkzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/88493484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoTody",
"html_url": "https://github.com/NoTody",
"followers_url": "https://api.github.com/users/NoTody/followers",
"following_url": "https://api.github.com/users/NoTody/following{/other_user}",
"gists_url": "https://api.github.com/users/NoTody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoTody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoTody/subscriptions",
"organizations_url": "https://api.github.com/users/NoTody/orgs",
"repos_url": "https://api.github.com/users/NoTody/repos",
"events_url": "https://api.github.com/users/NoTody/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoTody/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21434). All of your documentation changes will be reflected on that endpoint.",
"Not sure I can properly review this code, sorry I never touch that file.",
"Thank you for your PR, but the Transformers library is primarily a library of models. Those schedulers are just implemented for ease of use in our Trainer (which wouldn't be able to set that `end_lr` new argument), anything more involved should come from another library/custom user code :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
# What does this PR do?
I noticed that in the original implementation, the learning rate for the cosine and linear schedulers with warmup is always scheduled down to 0. However, much recent research, such as Masked Autoencoder and BEiT, schedules the learning rate to some non-zero end learning rate (1e-6 in their case). Hence, I added this feature to these schedulers, so people who don't want to schedule the learning rate all the way to 0 can also use the Hugging Face schedulers.
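As a sketch of the idea (the helper name is hypothetical, mirroring `get_cosine_schedule_with_warmup`; `min_lr_ratio=0.0` recovers the stock behavior):
```python
import math
from torch.optim.lr_scheduler import LambdaLR

def get_cosine_schedule_with_min_lr(optimizer, num_warmup_steps, num_training_steps, min_lr_ratio=0.0):
    # min_lr_ratio = end_lr / initial_lr, e.g. 1e-6 / 1e-3 for a 1e-6 floor
    def lr_lambda(current_step):
        if current_step < num_warmup_steps:
            return current_step / max(1, num_warmup_steps)
        progress = (current_step - num_warmup_steps) / max(1, num_training_steps - num_warmup_steps)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
        return min_lr_ratio + (1.0 - min_lr_ratio) * cosine
    return LambdaLR(optimizer, lr_lambda)
```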
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21434/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21434",
"html_url": "https://github.com/huggingface/transformers/pull/21434",
"diff_url": "https://github.com/huggingface/transformers/pull/21434.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21434.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21433
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21433/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21433/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21433/events
|
https://github.com/huggingface/transformers/issues/21433
| 1,569,142,703
|
I_kwDOCUB6oc5dhzev
| 21,433
|
Problem with tokenization using the 'distilbert-base-uncased' tokenizer
|
{
"login": "QuantumStatic",
"id": 67118602,
"node_id": "MDQ6VXNlcjY3MTE4NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/67118602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QuantumStatic",
"html_url": "https://github.com/QuantumStatic",
"followers_url": "https://api.github.com/users/QuantumStatic/followers",
"following_url": "https://api.github.com/users/QuantumStatic/following{/other_user}",
"gists_url": "https://api.github.com/users/QuantumStatic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QuantumStatic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QuantumStatic/subscriptions",
"organizations_url": "https://api.github.com/users/QuantumStatic/orgs",
"repos_url": "https://api.github.com/users/QuantumStatic/repos",
"events_url": "https://api.github.com/users/QuantumStatic/events{/privacy}",
"received_events_url": "https://api.github.com/users/QuantumStatic/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You are not dropping the text columns of your dataset, so the Trainer is then unable to make tensors out of them. You need to either remove the problematic columns of the dataset or remove the `remove_unused_columns=False` argument.",
"Could you please explain how removing the text column will help? Wouldn't the transformer need text column for text2text generation. Even if am failing to grasp the idea of dropping the column and `remove_unused_columns=True` is the correct way to move forward. I get the following error:\r\n\r\n\r\n\r\nWhy would the model not generate outputs? I double checked that my `input_ids` & `attention_mask` they are sufficiently sized according to the input data.",
"You should really go on the [forums](https://discuss.huggingface.co/) to help debug your code as the wider community will be there to help. Your code fine-tunes the base model, there is no text2text class here and no `labels` to go with it. That's why you get this error message, there is no loss to optimize.",
"I am sorry, I don't want hugging face to debug my code. I completely understand your time is more valuable than debugging my code. However, I am a beginner and I did try to look up online and I did try the forums and I assure you this is my last option to get help. \r\n\r\nIn addition, I thank you once again for helping me with such a trivial problem. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
### System Info
```
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@ArthurZucker, @younesbelkada & @sgugger - since I am facing a tokenization issue with the trainer on `distilbert-base-uncased`, mostly using the official scripts.
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was following the official script titled [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training), with the only difference being that I am loading my dataset from a local `.csv` file with the structure described below.
| Text | Label (integers 0-2) |
| - | - |
| string 1 | label 1 |
| string 2 | label 2 |
| string n | label n |
My real dataset, however, does not have a header row; it's just strings and integer labels ranging from 0 to 2. The header row was added for better readability of the structure in markdown.
I am using the library-defined `load_dataset` function to load my csv file right into a Hugging Face dataset. My code is as follows:
```python
from datasets import dataset_dict, load_dataset
from transformers import DistilBertTokenizerFast, DistilBertModel, Trainer, TrainingArguments
import torch
DATA_PATH = "SOMEPATH"
dataset = load_dataset('csv', data_files={'train': f"{DATA_PATH}\\train.csv", 'test': f"{DATA_PATH}\\test.csv", 'validation': f"{DATA_PATH}\\validation.csv"}, column_names=['text', 'label'], split=['train', 'test', 'validation'])
dataset = dataset_dict.DatasetDict({'train':dataset[0], 'test':dataset[1], 'validation':dataset[2]})
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
def tokenize_function(examples):
return tokenizer(examples["text"], padding=True, truncation=True, max_length=512)
FINAL_DS = dataset.map(tokenize_function, batched=True)
training_stuff = {
"batch_size": 64,
"epochs": 4,
"learning_rate": 1e-5,
"weight_decay": 0.01
}
training_args = TrainingArguments(
output_dir="/Models/DistilBert",
per_device_train_batch_size=training_stuff["batch_size"],
evaluation_strategy="steps",
num_train_epochs=training_stuff["epochs"],
fp16=True,
save_steps=100,
eval_steps=50,
logging_steps=10,
weight_decay=training_stuff["weight_decay"],
learning_rate=training_stuff["learning_rate"],
save_total_limit=64,
remove_unused_columns=False,
push_to_hub=False,
report_to='tensorboard',
load_best_model_at_end=True,
)
model = DistilBertModel.from_pretrained(
'distilbert-base-uncased',
num_labels=3,
    id2label={0: 'Biased', 1: 'Non-biased', 2: 'No agreement'},
label2id={'Biased': 0, 'Non-biased': 1, 'No agreement': 2},
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=FINAL_DS['train'],
eval_dataset=FINAL_DS['validation'],
tokenizer=tokenizer,
)
train_results = trainer.train()
trainer.save_model()
```
However, when I run this script (at `trainer.train()`), I get the following error.
<img width="745" alt="err1" src="https://user-images.githubusercontent.com/67118602/216493803-c8d640f0-d3e8-4116-9f75-c4379ce8a290.png">

The Hugging Face forum [link](https://discuss.huggingface.co/t/getting-a-value-error-unable-to-create-a-tensor-because-the-feature-text-has-excessive-nesting-and-it-expects-it-to-be-int-for-some-reason/30890?u=quantumstatic) for this issue, unfortunately with no responses.
### Expected behavior
I would expect the model to start training.
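A sketch of the fix suggested in the comments: use a classification head so there is a loss to optimize, and let the Trainer drop the raw text column (this keeps the rest of the script above unchanged):
```python
from transformers import DistilBertForSequenceClassification

model = DistilBertForSequenceClassification.from_pretrained(
    'distilbert-base-uncased',
    num_labels=3,
    id2label={0: 'Biased', 1: 'Non-biased', 2: 'No agreement'},
    label2id={'Biased': 0, 'Non-biased': 1, 'No agreement': 2},
)
# ...and in TrainingArguments, remove `remove_unused_columns=False`
# (the default True drops the raw "text" column before batching).
```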
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21433/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21432
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21432/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21432/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21432/events
|
https://github.com/huggingface/transformers/pull/21432
| 1,569,103,117
|
PR_kwDOCUB6oc5JKeuE
| 21,432
|
Annotated TFVisionEncoderDecoder input type hints
|
{
"login": "miyu386",
"id": 60191117,
"node_id": "MDQ6VXNlcjYwMTkxMTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/60191117?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miyu386",
"html_url": "https://github.com/miyu386",
"followers_url": "https://api.github.com/users/miyu386/followers",
"following_url": "https://api.github.com/users/miyu386/following{/other_user}",
"gists_url": "https://api.github.com/users/miyu386/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miyu386/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyu386/subscriptions",
"organizations_url": "https://api.github.com/users/miyu386/orgs",
"repos_url": "https://api.github.com/users/miyu386/repos",
"events_url": "https://api.github.com/users/miyu386/events{/privacy}",
"received_events_url": "https://api.github.com/users/miyu386/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hello @Rocketknight1, I have one failing CircleCI test yet to resolve [here](https://app.circleci.com/pipelines/github/huggingface/transformers/56839/workflows/c6593d32-bae5-46fa-96cb-d0d05bdee084/jobs/687513?invite=true#step-110-7710) that I need some help with. I tried searching for solutions but couldn't find anything that fixes it. I'm assuming I'm either missing or need to upgrade some package but can't quite pinpoint the issue.",
"Hi @miyu386 - a couple of the issues were caused by a difference between two copied functions. Running `make fix-copies` fixed that!\r\n\r\nThe other issues are in our CI - they're caused by a version mismatch in TF vs. TF Probability. This can be fixed by rebasing your PR, but these issues will also be fixed when we merge the PR.\r\n\r\nIf you're happy for me to merge now, I can do that!",
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1 Thank you for the follow-up! I rebased with upstream and 1 test remained to fail. I don't know if this will be addressed when the PR gets merged, if that's the case the PR is ready to be merged now",
"@miyu386 Thanks for doing the rebase! The remaining issue is just a flaky test on our end in one of the PyTorch examples. It has nothing to do with your PR here, so I'm happy to merge now. Thanks for your contribution!"
] | 1,675
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
Co-authored-by: JuheonChu <chuj@dickinson.edu>
Co-authored-by: AdiaWu <wua@dickinson.edu>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes issue #[16059](https://github.com/huggingface/transformers/issues/16059)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21432/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21432",
"html_url": "https://github.com/huggingface/transformers/pull/21432",
"diff_url": "https://github.com/huggingface/transformers/pull/21432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21432.patch",
"merged_at": 1676301619000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21431
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21431/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21431/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21431/events
|
https://github.com/huggingface/transformers/pull/21431
| 1,568,747,489
|
PR_kwDOCUB6oc5JJZdZ
| 21,431
|
🚨🚨🚨 Enforce single model initialization
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@stas00 In initial discussions with @LysandreJik , he mentioned he preferred not having a wrapper. Though the argument about init weights code in the wild is a sound one, so showed how it could look like with the last two commits.",
"Thanks for the PR, and for showing the two options! I feel like the wrapper is a little bit magical, but would make contributions simpler while reducing the complexity of the code.\r\n\r\nI would go with the wrapper, if possible.",
"Thank you for making it simpler for the end user, Sylvain - I will test this today on m4 and get back to you.",
"Thank you for doing a massive adjustment work and the explanations, Sylvain!\r\n\r\nThis is hard work and very awesome for everybody to benefit from!",
"Last failing test is flaky so this is good for final review!",
"so it didn't make it into https://github.com/huggingface/transformers/releases/tag/v4.26.1, right?\r\n\r\ndo you know if you plan another hotfix release in the future or plan to wait for 4.27.0? \r\n\r\nAsking as I'm needing to anchor requirements on this fix for m4 where I found this bug.",
"This won't be until 4.27.0 as it could come with bugs we need to fix (and it's not a regression fix so won't go in a patch).",
"Thank you for the clarity, Sylvain. 4.27.0 it is."
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
There are currently three problems with the model inits:
**Problem 1:** When not using the fast init (so in practice when using the model constructor or `AutoXxx.from_config` instead of `from_pretrained`), weights are initialized multiple times. @stas00 showed the example of `OPTForCausalLM`, where we have a call to `post_init()` three times: in `OPTForCausalLM`, `OPTModel` and `OPTDecoder`. Each of those calls launches a recursive call of `_init_weights` on all submodules of the model, so this makes three inits.
**Problem 2:** The fast init (of the random weights of the head in `from_pretrained`) and the non-fast init (as above) are not always equivalent. This is because in `from_pretrained`, init is done by calling `_init_weights` only on the leaf modules whose weights are not present in the checkpoint, but sometimes `_init_weights` contains class checks for bigger modules ([here](https://github.com/huggingface/transformers/blob/77db257e2a67d4b043cf03bf390947fcd71a9f53/src/transformers/models/oneformer/modeling_oneformer.py#L2801) is one example in OneFormer).
**Problem 3:** Some models have an `_init_weights` function that initializes the same weights in two different ways. Take again [this example](https://github.com/huggingface/transformers/blob/77db257e2a67d4b043cf03bf390947fcd71a9f53/src/transformers/models/oneformer/modeling_oneformer.py#L2801) in OneFormer, which initializes a weight that is a Conv2D; but `_init_weights` is applied recursively, so that Conv2D will also be initialized [here](https://github.com/huggingface/transformers/blob/77db257e2a67d4b043cf03bf390947fcd71a9f53/src/transformers/models/oneformer/modeling_oneformer.py#L2891) with a different rule.
This PR kills these three birds with one stone by slightly changing the `_init_weights` machinery to look for a private `_is_hf_initialized` attribute on each module and skip the init if it is present and `True`. Naturally, once a module has been initialized, this private attribute is set to `True`.
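A minimal sketch of the guard (the attribute name follows the description above; the exact code in the PR may differ):
```python
def _initialize_weights(self, module):
    """Initialize `module` exactly once, then mark it as done."""
    if getattr(module, "_is_hf_initialized", False):
        # Already initialized (e.g. by a submodule's own custom init): skip.
        return
    self._init_weights(module)
    module._is_hf_initialized = True
```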
This PR gets the 🚨🚨🚨 sign because it might break users' code if it relied on the (buggy) init of composite models: if a model has an encoder or backbone that is initialized differently from the rest, the init of the encoder/backbone was previously erased by the bigger model's init.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21431/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21431/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21431",
"html_url": "https://github.com/huggingface/transformers/pull/21431",
"diff_url": "https://github.com/huggingface/transformers/pull/21431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21431.patch",
"merged_at": 1675975586000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21430
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21430/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21430/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21430/events
|
https://github.com/huggingface/transformers/pull/21430
| 1,568,712,949
|
PR_kwDOCUB6oc5JJSIK
| 21,430
|
Add `inputs_embeds` support for `.generate()` with BLOOM models
|
{
"login": "akreal",
"id": 243812,
"node_id": "MDQ6VXNlcjI0MzgxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/243812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akreal",
"html_url": "https://github.com/akreal",
"followers_url": "https://api.github.com/users/akreal/followers",
"following_url": "https://api.github.com/users/akreal/following{/other_user}",
"gists_url": "https://api.github.com/users/akreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akreal/subscriptions",
"organizations_url": "https://api.github.com/users/akreal/orgs",
"repos_url": "https://api.github.com/users/akreal/repos",
"events_url": "https://api.github.com/users/akreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/akreal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"There seems to be a CircleCI issue when triggering the tests 🤔 \r\n\r\n@akreal could you try following [these instructions](https://support.circleci.com/hc/en-us/articles/360056873811-Your-access-to-a-project-from-CircleCI-was-revoked-by-GitHub)? I'm not sure whether they will help, but they were the closest match I found based on CircleCI's error message."
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds support for `.generate()` calls with `inputs_embeds` on BLOOM models (following the GPT-2 example in #21405).
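A minimal usage sketch of what this enables (the checkpoint name is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
# Build the input embeddings manually and pass them instead of input_ids.
inputs_embeds = model.get_input_embeddings()(inputs.input_ids)
outputs = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```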
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21430/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21430",
"html_url": "https://github.com/huggingface/transformers/pull/21430",
"diff_url": "https://github.com/huggingface/transformers/pull/21430.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21430.patch",
"merged_at": 1675427474000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21429
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21429/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21429/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21429/events
|
https://github.com/huggingface/transformers/pull/21429
| 1,568,676,349
|
PR_kwDOCUB6oc5JJKXg
| 21,429
|
Add tutorial doc for TF + TPU
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sayakpaul The spaces have been removed because the extra content after them got moved to a whole other doc!"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
This is the sidebar tutorial for training with TF + TPU, to go with [the code notebook](https://github.com/huggingface/notebooks/pull/313).
Note that the Markdown is exported straight from Notion, so some formatting will probably look very wrong - I'm working on cleanup!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21429/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21429",
"html_url": "https://github.com/huggingface/transformers/pull/21429",
"diff_url": "https://github.com/huggingface/transformers/pull/21429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21429.patch",
"merged_at": 1675451262000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21428
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21428/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21428/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21428/events
|
https://github.com/huggingface/transformers/pull/21428
| 1,568,514,805
|
PR_kwDOCUB6oc5JInJE
| 21,428
|
do not scale gradient in bf16 mode
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Turn off gradient scaling in the trainer when bf16 mode is selected. Only use gradient scaling in float16 mode.
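As a standalone sketch of the idea (not the Trainer code itself): `GradScaler` should only be enabled for fp16, since bf16 has the same exponent range as fp32 and does not need loss scaling.
```python
import torch

amp_dtype = torch.bfloat16  # or torch.float16
# With enabled=False, the scaler becomes a no-op pass-through,
# which is the desired behavior for bf16.
scaler = torch.cuda.amp.GradScaler(enabled=(amp_dtype == torch.float16))

def training_step(model, batch, optimizer):
    with torch.autocast(device_type="cuda", dtype=amp_dtype):
        loss = model(**batch).loss
    scaler.scale(loss).backward()  # no scaling applied when disabled
    scaler.step(optimizer)         # plain optimizer.step() when disabled
    scaler.update()
    optimizer.zero_grad()
    return loss
```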
## Who can review?
@sgugger and @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21428/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21428/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21428",
"html_url": "https://github.com/huggingface/transformers/pull/21428",
"diff_url": "https://github.com/huggingface/transformers/pull/21428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21428.patch",
"merged_at": 1675443454000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21427
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21427/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21427/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21427/events
|
https://github.com/huggingface/transformers/pull/21427
| 1,568,354,015
|
PR_kwDOCUB6oc5JIDx-
| 21,427
|
Refactor whisper asr pipeline to include language too.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Still not finished (I'm seeing weird drops in WER when changing parameter combinations) but as a starter @sgugger if you have early feedback I'd take it.",
"Ok, I found that whisper doesn't play well **at all** without `return_timestamps:\r\n\r\n```python\r\n# Whisper HF captioning\r\nfrom evaluate import load\r\nfrom datasets import load_dataset\r\nimport numpy as np\r\nfrom transformers import pipeline\r\nimport time\r\nimport whisper\r\n\r\n\r\nlibri = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\")\r\nmodel_name = \"openai/whisper-large-v2\"\r\n\r\nspeech_recognizer = pipeline(task=\"automatic-speech-recognition\", model=model_name, framework=\"pt\", device=2)\r\n\r\nname = model_name.split(\"/\")[-1][len(\"whisper\") + 1 :]\r\nmodel = whisper.load_model(f\"{name}-v1\" if name == \"large\" else name).to(device=\"cuda:3\")\r\n\r\n# Faulty example\r\nstart = 10\r\nend = 15\r\niterator = range(start, end)\r\nspeech_file = np.concatenate([libri[i][\"audio\"][\"array\"] for i in iterator])\r\nlabels = \" \".join(libri[i][\"text\"] for i in iterator)\r\n\r\nfor return_timestamps in [True, False]:\r\n print(f\"========Timestamps ({return_timestamps})=========\")\r\n start = time.time()\r\n hf = speech_recognizer(\r\n [speech_file],\r\n return_timestamps=return_timestamps,\r\n chunk_length_s=30,\r\n stride_length_s=[4, 4],\r\n batch_size=32,\r\n num_workers=1,\r\n )[0]\r\n end = time.time()\r\n print(f\"model : {model_name}\\nhf time : \", end - start)\r\n\r\n norm_labels = speech_recognizer.tokenizer._normalize(labels)\r\n norm_res = speech_recognizer.tokenizer._normalize(hf[\"text\"])\r\n print(\"HF TEXT:\", hf[\"text\"])\r\n\r\n wer = load(\"wer\")\r\n hf_wer = wer.compute(predictions=[norm_res], references=[norm_labels])\r\n print(\"hf wer :\", hf_wer)\r\n\r\n start = time.time()\r\n openai = model.transcribe(np.asarray(speech_file, dtype=np.float32), without_timestamps=not return_timestamps)\r\n end = time.time()\r\n norm_open = speech_recognizer.tokenizer._normalize(openai[\"text\"])\r\n print(\"openai time : \", end - start)\r\n openai_wer = wer.compute(predictions=[norm_open], references=[norm_labels])\r\n print(\"openai wer :\", openai_wer)\r\n print(\"Openai TEXT:\", openai[\"text\"])\r\n\r\n # # if hf_wer > 1.5 * openai_wer:\r\n # # import ipdb\r\n\r\n # # ipdb.set_trace()\r\n```\r\n\r\nIf you check, the output is really different in both cases. Whisper seems to stop outputting tokens when not using timestamps instead of continuing.",
"> Can we do the changes in the tests in a followup PR so that the diffs only show potential new tests, but no changes otherwise?\r\n\r\nThere's less change than the diff leads to believe. I only removed the tests linked to a function I had removed, and there's a few tiny changes in the actual timestamps being outputted. Happy to make the modifications, it will leave a function unused here but would be the tests modifications easier to spot."
] | 1,675
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
## Why a refactor of this magnitude?
- Earlier iterations tried to stay in line with `pipeline`, which started a cascade of `if ... else` just for `whisper`.
- That cascade of functions kept losing the `language` information, which would make it tough to include
`language` in the return value.
- Since we have chunking, we potentially have several `language` codes within a single file, making this a segmentation problem, not a classification problem. (So a `detect_language(...) -> str` is not really applicable to the pipeline.)
Given this problem space, it was simpler to reimplement the whole thing within the `tokenizer`, akin to `_build_conversational_inputs`, which allows for lots of specificities within each model.
Here, the new `_decode_asr` doesn't use any information from outside the tokenizer.
## How does it work?
Hopefully the inline comments are the most comprehensive guide.
The tokenizer needs `stride` (expressed in seconds) and `time_precision` to convert from token space to seconds space.
We could do this outside of the `tokenizer` if necessary (keeping everything in token space), but since `time_precision` is already used as an argument of some methods, we can reuse it, so the output of this function doesn't need to be converted back.
=========== Overview ============
- Iterate over all outputs, and over all tokens within each output.
- Each token can be:
  - a language token
  - a special token
  - a timestamp token
  - a text token
- We accumulate the text tokens.
- We split on end timestamps.
- Lots of complexity comes from stride and timestamps.
Most importantly, we need to handle strides, and timestamps within strides. To handle those, we simply don't split on them, and handle the merges later.
All the merging is relatively simple: we find the maximum overlapping sequence.
Small optimization: the overlap sequence might contain errors/conflicts. We choose from the previous sequence on the left side of the overlap, and from the next sequence on the right side. Since those tokens should correspond to the same audio, splitting midway should be correct most of the time.
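For intuition, here is a simplified sketch of that merge strategy (illustrative only, not the actual `_decode_asr` implementation):
```python
def merge_on_max_overlap(left: list, right: list) -> list:
    """Merge two token chunks on their longest suffix/prefix overlap."""
    for k in range(min(len(left), len(right)), 0, -1):
        if left[-k:] == right[:k]:
            # Keep the left tokens for the overlap, then append the remainder.
            return left + right[k:]
    return left + right  # no overlap found


assert merge_on_max_overlap([5, 6, 7, 8], [7, 8, 9, 10]) == [5, 6, 7, 8, 9, 10]
```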
## Benchmark
Courtesy of @ArthurZucker
```python
# Whisper HF captioning
from evaluate import load
from datasets import load_dataset
import numpy as np
from transformers import pipeline
import time
import whisper
libri = load_dataset("librispeech_asr", "clean", split="test")
batch = len(libri)
start_dataset = 0
# batch = 10
# start = 0 * batch
models = ["tiny", "tiny.en", "base", "base.en", "small", "small.en", "medium", "medium.en", "large", "large-v2"]
# models = ["tiny.en"]
# models = ["large-v2"]
for model_name in models:
    speech_recognizer = pipeline(
        task="automatic-speech-recognition", model=f"openai/whisper-{model_name}", framework="pt", device=1
    )
    model = whisper.load_model(f"{model_name}" + "-v1" if model_name == "large" else f"{model_name}").to(
        device="cuda:2"
    )
    for offset in range(start_dataset, len(libri), batch):
        iterator = range(offset, offset + batch)
        speech_file = np.concatenate([libri[i]["audio"]["array"] for i in iterator])
        labels = " ".join([libri[i]["text"] for i in iterator])
        start = time.time()
        hf = speech_recognizer(
            [speech_file],
            return_timestamps=True,
            chunk_length_s=30,
            stride_length_s=[4, 4],
            batch_size=32,
            ignore_warning=True,
            num_workers=1,
        )[0]
        end = time.time()
        # print(res)
        print(f"model : {model_name}\nhf time : ", end - start)
        # print(res["text"])
        with open("hf.txt", "w") as f:
            f.write(hf["text"])
        norm_labels = speech_recognizer.tokenizer._normalize(labels)
        norm_res = speech_recognizer.tokenizer._normalize(hf["text"])
        wer = load("wer")
        hf_wer = wer.compute(predictions=[norm_res], references=[norm_labels])
        print("hf wer :", hf_wer)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21427/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21427",
"html_url": "https://github.com/huggingface/transformers/pull/21427",
"diff_url": "https://github.com/huggingface/transformers/pull/21427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21427.patch",
"merged_at": 1677777139000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21426
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21426/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21426/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21426/events
|
https://github.com/huggingface/transformers/pull/21426
| 1,568,311,477
|
PR_kwDOCUB6oc5JH6t-
| 21,426
|
Allow to add more information in `is_flaky`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
As mentioned once offline, I think it's better for us to make a bit more effort to describe the situation for tests decorated with `is_flaky`. We don't always know the exact reasons (and for the known cases, we don't always have a good way to fix them - at least not within a few months sometimes). A `description` is always good IMO.
For future PRs, if new tests are decorated with `is_flaky`, let's keep 👀 on whether a `description` is provided - even though it's an optional parameter 🙏.
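An illustrative usage sketch (the test name and description text are hypothetical):
```python
import unittest

from transformers.testing_utils import is_flaky


class SomeModelTest(unittest.TestCase):
    @is_flaky(description="non-deterministic GPU reductions occasionally shift logits by ~1e-3")
    def test_batched_generation_matches_unbatched(self):
        ...
```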
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21426/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21426",
"html_url": "https://github.com/huggingface/transformers/pull/21426",
"diff_url": "https://github.com/huggingface/transformers/pull/21426.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21426.patch",
"merged_at": 1675356083000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21425
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21425/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21425/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21425/events
|
https://github.com/huggingface/transformers/pull/21425
| 1,568,304,310
|
PR_kwDOCUB6oc5JH5NF
| 21,425
|
[`ImageProcessor`] Refactor default `mean` & `std` to `OPENAI_CLIP_MEAN` & `OPENAI_CLIP_STD`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,676
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
Initially this PR was intended to fix a small nit for BLIP feature extractors: the default normalization `mean` & `std` are currently incorrect, as pointed out by @NielsRogge. BLIP uses mean and std values identical to CLIP's: https://github.com/salesforce/LAVIS/blob/5ddd9b4e5149dbc514e81110e03d28458a754c5d/lavis/processors/blip_processors.py#L21 - this has no effect for the current models as the values were already correct on the Hub.
Therefore this PR adds new variables in `constants.py`, `OPENAI_CLIP_MEAN` & `OPENAI_CLIP_STD`, as these values are used by other models as well: `CLIP`, `CLIPSeg`, `OWL-ViT`, etc.
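For reference, a sketch of what the shared constants look like (the values shown match OpenAI's CLIP preprocessing statistics):
```python
# Per-channel (RGB) normalization statistics from OpenAI's CLIP preprocessing.
OPENAI_CLIP_MEAN = [0.48145466, 0.4578275, 0.40821073]
OPENAI_CLIP_STD = [0.26862954, 0.26130258, 0.27577711]
```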
cc @NielsRogge @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21425/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21425",
"html_url": "https://github.com/huggingface/transformers/pull/21425",
"diff_url": "https://github.com/huggingface/transformers/pull/21425.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21425.patch",
"merged_at": 1676656626000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21424
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21424/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21424/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21424/events
|
https://github.com/huggingface/transformers/pull/21424
| 1,568,244,732
|
PR_kwDOCUB6oc5JHsa5
| 21,424
|
Add tips for generation with Int8 models
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<3 "
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
Adds some tips to avoid gotchas with text generation and Int8 models
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @gante @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21424/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21424",
"html_url": "https://github.com/huggingface/transformers/pull/21424",
"diff_url": "https://github.com/huggingface/transformers/pull/21424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21424.patch",
"merged_at": 1675711541000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21423
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21423/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21423/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21423/events
|
https://github.com/huggingface/transformers/pull/21423
| 1,568,219,538
|
PR_kwDOCUB6oc5JHnA4
| 21,423
|
Fixes bug in the creation of ExponentialDecayLengthPenalty
|
{
"login": "jorgemcgomes",
"id": 3987574,
"node_id": "MDQ6VXNlcjM5ODc1NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3987574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jorgemcgomes",
"html_url": "https://github.com/jorgemcgomes",
"followers_url": "https://api.github.com/users/jorgemcgomes/followers",
"following_url": "https://api.github.com/users/jorgemcgomes/following{/other_user}",
"gists_url": "https://api.github.com/users/jorgemcgomes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jorgemcgomes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jorgemcgomes/subscriptions",
"organizations_url": "https://api.github.com/users/jorgemcgomes/orgs",
"repos_url": "https://api.github.com/users/jorgemcgomes/repos",
"events_url": "https://api.github.com/users/jorgemcgomes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jorgemcgomes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"merging as the failing test (`tests/models/cvt/test_modeling_cvt.py::CvtModelTest::test_save_load_fast_init_to_base`) is a known flaky test"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
`input_ids_seq_length` doesn't exist on the `GenerationConfig`; it only exists as a local variable in the function.
Setting `exponential_decay_length_penalty` therefore results in an error: `AttributeError: 'GenerationConfig' object has no attribute 'input_ids_seq_length'`
This simple change fixes the issue, and `exponential_decay_length_penalty` works as expected.
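A small usage sketch that triggered the bug before this fix (the model and penalty values are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
# Previously this raised:
# AttributeError: 'GenerationConfig' object has no attribute 'input_ids_seq_length'
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    # (start_index, decay_factor): increase the EOS score after 10 new tokens
    exponential_decay_length_penalty=(10, 1.05),
)
print(tokenizer.decode(outputs[0]))
```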
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21423/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21423",
"html_url": "https://github.com/huggingface/transformers/pull/21423",
"diff_url": "https://github.com/huggingface/transformers/pull/21423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21423.patch",
"merged_at": 1675363914000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21422
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21422/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21422/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21422/events
|
https://github.com/huggingface/transformers/pull/21422
| 1,568,080,797
|
PR_kwDOCUB6oc5JHJPD
| 21,422
|
Add distinct section names for PyTorch and TF
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for fixing this!"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
Super-small fix here - the section names for PyTorch and TF were identical in the notebooks doc, which meant that when you clicked on one of the TF categories in the TOC you got sent to the PyTorch one instead (because it came first).
This PR adds separate section names so this doesn't happen! (Thanks @mishig25 for telling me how to do that!)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21422/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21422",
"html_url": "https://github.com/huggingface/transformers/pull/21422",
"diff_url": "https://github.com/huggingface/transformers/pull/21422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21422.patch",
"merged_at": 1675348198000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21421
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21421/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21421/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21421/events
|
https://github.com/huggingface/transformers/pull/21421
| 1,568,041,797
|
PR_kwDOCUB6oc5JHAxA
| 21,421
|
Use torch `1.13.1` in push/scheduled CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Well, I should have updated these much earlier.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21421/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21421",
"html_url": "https://github.com/huggingface/transformers/pull/21421",
"diff_url": "https://github.com/huggingface/transformers/pull/21421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21421.patch",
"merged_at": 1675346333000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21420
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21420/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21420/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21420/events
|
https://github.com/huggingface/transformers/issues/21420
| 1,567,867,023
|
I_kwDOCUB6oc5dc8CP
| 21,420
|
[examples/research_projects/onnx/summarization] is outdated
|
{
"login": "un-certainty",
"id": 102007104,
"node_id": "U_kgDOBhSBQA",
"avatar_url": "https://avatars.githubusercontent.com/u/102007104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/un-certainty",
"html_url": "https://github.com/un-certainty",
"followers_url": "https://api.github.com/users/un-certainty/followers",
"following_url": "https://api.github.com/users/un-certainty/following{/other_user}",
"gists_url": "https://api.github.com/users/un-certainty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/un-certainty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/un-certainty/subscriptions",
"organizations_url": "https://api.github.com/users/un-certainty/orgs",
"repos_url": "https://api.github.com/users/un-certainty/repos",
"events_url": "https://api.github.com/users/un-certainty/events{/privacy}",
"received_events_url": "https://api.github.com/users/un-certainty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is an unmaintained example, it won't work without using the transformers version corresponding to the time it was written.",
"@sgugger Thanks for your quick reply.\r\n\r\nFor some reason, I have to use transformer==4.25.0. I wonder if @fatcat-z has any suggestion on how to adapt the code?",
"Hi,\r\n\r\nFor converting summarization models to ONNX, we now have a lot of classes implemented in [HuggingFace Optimum](https://huggingface.co/docs/optimum/index).\r\n\r\nGeneration with ONNX models is also implemented there (greedy, beam search, etc.). Check the [guides](https://huggingface.co/docs/optimum/onnxruntime/overview) for more info.",
"Hi @NielsRogge \r\n\r\nOptimum does look promising. But my model is a GPT-like decoder, with only a `*ModelForConditionalGeneration` interface. You can find details [here](https://huggingface.co/BAAI/glm-large/blob/main/modeling_glm.py). The signature of the generation interface differs a bit from what Optimum has officially supported. As far as I can tell, even if I managed to convert the model into ONNX following this [guide](https://huggingface.co/docs/transformers/serialization), using Optimum will run into another issue.",
"Hi @un-certainty, not sure if I understood your issue well. Would the [`ORTModelForCustomTasks`](https://github.com/huggingface/optimum/blob/e8f5a955bc40eea8c1382ab29be8f8ac99601817/optimum/onnxruntime/modeling_ort.py#L1585) help you to achieve this?\r\nOtherwise, don't hesitate to open an issue in [Optimum](https://github.com/huggingface/optimum/issues) to see how we can improve the current implementation.",
"Hi @un-certainty ,\r\n\r\nCould you please try to update the code [here](https://github.com/huggingface/transformers/blob/197e7ce911d91d85eb2f91858720957c2d979cd2/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py#L709)?\r\n\r\nPlease set the type explicitly, like \"num_beams: int\".",
"Hi @fatcat-z \r\n\r\nThanks for your suggestion. \r\n\r\nI found that you created two traced decoders [here](https://github.com/huggingface/transformers/blob/197e7ce911d91d85eb2f91858720957c2d979cd2/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py#L222). So will there be two decoders in the converted ONNX graph?",
"> Hi @fatcat-z\r\n> \r\n> Thanks for your suggestion.\r\n> \r\n> I found that you created two traced decoders [here](https://github.com/huggingface/transformers/blob/197e7ce911d91d85eb2f91858720957c2d979cd2/examples/research_projects/onnx/summarization/bart_onnx/generation_onnx.py#L222). So will there be two decoders in the converted ONNX graph?\r\n\r\nIn the final ONNX graph, there will be no decoder ONNX op so there won't be 2 decoders there.\r\n\r\nThe point is: each traced decoder in the model will be converted to a set of ONNX ops in the final ONNX graph.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.25.0
- Platform: Linux-4.19.91-24.1.al7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@fatcat-z
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
With the above env set, just clone the repo and run
```
python run_onnx_exporter.py --model_name_or_path facebook/bart-base
```
### Expected behavior
**Expected**: BART with BeamSearch exported with no error.
BTW, I notice that the projects in `research_projects` are not actively maintained. I want to export an `XXXModelForConditionalGeneration` with beam search, and this demo is the best reference I have found so far, so I truly hope someone can fix it.
**Actual**:
```pytb
/home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:232: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
/home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:271: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
/home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:915: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
/home/root/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:96: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))
/home/root/.local/lib/python3.8/site-packages/torch/jit/_trace.py:976: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
module._c._create_method_from_trace(
/home/root/.local/lib/python3.8/site-packages/torch/jit/_trace.py:154: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:480.)
if a.grad is not None:
/home/root/.local/lib/python3.8/site-packages/torch/jit/annotations.py:309: UserWarning: TorchScript will treat type annotations of Tensor dtype-specific subtypes as if they are normal Tensors. dtype constraints are not enforced in compilation either.
warnings.warn("TorchScript will treat type annotations of Tensor "
Traceback (most recent call last):
File "run_onnx_exporter.py", line 207, in <module>
main()
File "run_onnx_exporter.py", line 203, in main
export_and_validate_model(model, tokenizer, output_name, num_beams, max_length)
File "run_onnx_exporter.py", line 123, in export_and_validate_model
torch.onnx.export(
File "/home/root/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 504, in export
_export(
File "/home/root/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1529, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/root/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1131, in _model_to_graph
example_outputs = _get_example_outputs(model, args)
File "/home/root/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 1017, in _get_example_outputs
example_outputs = model(*input_args, **input_kwargs)
File "/home/root/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
RuntimeError: forward() Expected a value of type 'Tensor (inferred)' for argument 'num_beams' but instead found type 'int'.
Inferred 'num_beams' to be of type 'Tensor' because it was not annotated with an explicit type.
Position: 3
Value: 4
Declaration: forward(__torch__.bart_onnx.generation_onnx.BARTBeamSearchGenerator self, Tensor input_ids, Tensor attention_mask, Tensor num_beams, Tensor max_length, Tensor decoder_start_token_id) -> Tensor
Cast error details: Unable to cast 4 to Tensor
```
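One possible workaround (untested; the variable names `onnx_bart`, `input_ids`, `attention_mask`, and `output_name` are assumed from `run_onnx_exporter.py`) is to pass the scalar generation arguments as 0-d tensors so they match the traced `forward()` signature shown above:
```
import torch

# Hypothetical sketch: wrap the Python ints in tensors to satisfy the traced
# forward(input_ids, attention_mask, num_beams, max_length, decoder_start_token_id).
torch.onnx.export(
    onnx_bart,
    (
        input_ids,
        attention_mask,
        torch.tensor(num_beams),
        torch.tensor(max_length),
        torch.tensor(model.config.decoder_start_token_id),
    ),
    output_name,
    opset_version=14,
)
```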
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21420/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21419
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21419/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21419/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21419/events
|
https://github.com/huggingface/transformers/pull/21419
| 1,567,826,002
|
PR_kwDOCUB6oc5JGRsu
| 21,419
|
Fix Graphormer test suite
|
{
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
Fixes comment in #21367, @ydshieh
Updated the shape of the model instantiation when calling pretrained in the test suite, and updated the values in the integration test.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21419/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21419",
"html_url": "https://github.com/huggingface/transformers/pull/21419",
"diff_url": "https://github.com/huggingface/transformers/pull/21419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21419.patch",
"merged_at": 1675351754000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21418
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21418/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21418/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21418/events
|
https://github.com/huggingface/transformers/pull/21418
| 1,567,707,935
|
PR_kwDOCUB6oc5JF4Fi
| 21,418
|
add new model of MGP-STR
|
{
"login": "wdp-007",
"id": 4025053,
"node_id": "MDQ6VXNlcjQwMjUwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4025053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wdp-007",
"html_url": "https://github.com/wdp-007",
"followers_url": "https://api.github.com/users/wdp-007/followers",
"following_url": "https://api.github.com/users/wdp-007/following{/other_user}",
"gists_url": "https://api.github.com/users/wdp-007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wdp-007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wdp-007/subscriptions",
"organizations_url": "https://api.github.com/users/wdp-007/orgs",
"repos_url": "https://api.github.com/users/wdp-007/repos",
"events_url": "https://api.github.com/users/wdp-007/events{/privacy}",
"received_events_url": "https://api.github.com/users/wdp-007/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add the new MGP-STR model.
Fixes https://github.com/huggingface/transformers/issues/18828
## Before submitting
- [√] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [√] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [√] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [√] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [√] Did you write any new necessary tests?
## Who can review?
@amyeroberts and @NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21418/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21418",
"html_url": "https://github.com/huggingface/transformers/pull/21418",
"diff_url": "https://github.com/huggingface/transformers/pull/21418.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21418.patch",
"merged_at": 1678702292000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21417
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21417/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21417/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21417/events
|
https://github.com/huggingface/transformers/issues/21417
| 1,567,532,412
|
I_kwDOCUB6oc5dbqV8
| 21,417
|
(maybe) redundant code in transformers/src/transformers/models/bert/
|
{
"login": "Joqsan",
"id": 6027118,
"node_id": "MDQ6VXNlcjYwMjcxMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6027118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Joqsan",
"html_url": "https://github.com/Joqsan",
"followers_url": "https://api.github.com/users/Joqsan/followers",
"following_url": "https://api.github.com/users/Joqsan/following{/other_user}",
"gists_url": "https://api.github.com/users/Joqsan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Joqsan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Joqsan/subscriptions",
"organizations_url": "https://api.github.com/users/Joqsan/orgs",
"repos_url": "https://api.github.com/users/Joqsan/repos",
"events_url": "https://api.github.com/users/Joqsan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Joqsan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only."
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
Hi,
I was looking at the BERT code and noticed that `BertOutput` and `BertSelfOutput` are almost the same, except for the `in_features` argument to `nn.Linear`.
https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/src/transformers/models/bert/modeling_bert.py#L376-L387
https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/src/transformers/models/bert/modeling_bert.py#L454-L465
I was wondering if there is a reason to have it this way, instead of just one `Output` class accepting an `in_hidden_size` argument for when it needs to change.
Something like this:
```
class BertNewOutput(nn.Module):
    def __init__(self, config, in_hidden_size):
        super().__init__()
        self.dense = nn.Linear(in_hidden_size, config.hidden_size)
        # the remaining layers are identical in BertOutput and BertSelfOutput
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        return self.LayerNorm(hidden_states + input_tensor)
```
The pattern with `BertOutput` and `BertSelfOutput` is repeated in similar parts of the code such as in `src/transformers/models/bert/modeling_tf_bert.py` where
https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/src/transformers/models/bert/modeling_tf_bert.py#L349
is the same code as in
https://github.com/huggingface/transformers/blob/92ce53aab859012f7714dae6d6fce7a7d701e75f/src/transformers/models/bert/modeling_tf_bert.py#L427
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21417/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21416
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21416/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21416/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21416/events
|
https://github.com/huggingface/transformers/pull/21416
| 1,567,496,941
|
PR_kwDOCUB6oc5JFLh5
| 21,416
|
Fixed RAG script which was failing on dummy example
|
{
"login": "kaustubhdhole",
"id": 8274216,
"node_id": "MDQ6VXNlcjgyNzQyMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8274216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaustubhdhole",
"html_url": "https://github.com/kaustubhdhole",
"followers_url": "https://api.github.com/users/kaustubhdhole/followers",
"following_url": "https://api.github.com/users/kaustubhdhole/following{/other_user}",
"gists_url": "https://api.github.com/users/kaustubhdhole/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaustubhdhole/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaustubhdhole/subscriptions",
"organizations_url": "https://api.github.com/users/kaustubhdhole/orgs",
"repos_url": "https://api.github.com/users/kaustubhdhole/repos",
"events_url": "https://api.github.com/users/kaustubhdhole/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaustubhdhole/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi there. This is an unmaintained research project, so we normally don't accept PRs on it. You can try pinging the original authors and see if they accept your suggestion :-)",
"@sgugger looks all good to me.",
"Thanks for having a look @shamanez and thanks for your contribution @kaustubhdhole !"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixed the RAG script, which was failing on `test_epoch_end` in the dummy example.
The dummy example fails when `test_epoch_end` is called. The `prefix="test"` should be dynamic in the log metrics too.
Also, `test_finetune.sh` was failing when the test file was not present.
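Roughly, the idea is to derive the metric names from the stage prefix instead of hard-coding one — an illustrative sketch, not the exact diff:
```
# Illustrative sketch (not the exact diff): key the logged metrics off the
# stage prefix ("val" or "test") instead of a hard-coded one.
def log_metrics(self, metrics, prefix):
    for name, value in metrics.items():
        self.log(f"{prefix}_{name}", value)
```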
@shamanez would be ideal to review.
Let me know if more information is needed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21416/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21416",
"html_url": "https://github.com/huggingface/transformers/pull/21416",
"diff_url": "https://github.com/huggingface/transformers/pull/21416.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21416.patch",
"merged_at": 1675693654000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21415
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21415/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21415/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21415/events
|
https://github.com/huggingface/transformers/issues/21415
| 1,567,462,904
|
I_kwDOCUB6oc5dbZX4
| 21,415
|
layoutlmv3-base-chinese convert onnx
|
{
"login": "githublsk",
"id": 77612906,
"node_id": "MDQ6VXNlcjc3NjEyOTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/77612906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githublsk",
"html_url": "https://github.com/githublsk",
"followers_url": "https://api.github.com/users/githublsk/followers",
"following_url": "https://api.github.com/users/githublsk/following{/other_user}",
"gists_url": "https://api.github.com/users/githublsk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githublsk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githublsk/subscriptions",
"organizations_url": "https://api.github.com/users/githublsk/orgs",
"repos_url": "https://api.github.com/users/githublsk/repos",
"events_url": "https://api.github.com/users/githublsk/events{/privacy}",
"received_events_url": "https://api.github.com/users/githublsk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"ONNX will only convert the model, not the tokenizer.\r\n\r\nONNX conversion is now moved to the Optimum package, no need to pass --feature anymore as it will infer that automatically based on the checkpoint. Docs here: https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model.",
"Closing this as the issue seems resolved."
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### Feature request
When converting layoutlmv3-base-chinese to ONNX, the command line is as follows: `python3.7 -m transformers.onnx --model=<model path> --feature token-classification`. We find it lacks the `vocab_file` and `merges_file`; can you support these two files? Thank you very much. If not, can you give us some method to solve the above problem?
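For reference, the newer export path lives in Optimum and infers the task from the checkpoint automatically; a sketch, assuming a local checkpoint directory that also contains the tokenizer files:
```
optimum-cli export onnx --model <model path> layoutlmv3_onnx/
```
Note that the ONNX export converts only the model, not the tokenizer, so `vocab_file`/`merges_file` remain part of the tokenizer.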
### Motivation
No.
### Your contribution
No.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21415/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21414
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21414/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21414/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21414/events
|
https://github.com/huggingface/transformers/pull/21414
| 1,567,462,465
|
PR_kwDOCUB6oc5JFEA-
| 21,414
|
add support to MPNetForCausalLM
|
{
"login": "jwengr",
"id": 58577380,
"node_id": "MDQ6VXNlcjU4NTc3Mzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/58577380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwengr",
"html_url": "https://github.com/jwengr",
"followers_url": "https://api.github.com/users/jwengr/followers",
"following_url": "https://api.github.com/users/jwengr/following{/other_user}",
"gists_url": "https://api.github.com/users/jwengr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jwengr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jwengr/subscriptions",
"organizations_url": "https://api.github.com/users/jwengr/orgs",
"repos_url": "https://api.github.com/users/jwengr/repos",
"events_url": "https://api.github.com/users/jwengr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jwengr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi there! MPNet is an encoder model, so there are no checkpoints available that will work for causal LM objective. Why are you interested in adding this model?",
"Hi @sgugger!\r\nI've recently been using sentense-transformers library which relies on transformers for sentence embedding.\r\nand certain features(https://www.sbert.net/docs/package_reference/losses.html#denoisingautoencoderloss) require decoder part of model.\r\nmpnet show good performance in sentence-transformers library and i tried to write a decoder part to further improve this.\r\nthere was a similar issue before [#14737](https://github.com/huggingface/transformers/issues/14737)",
"There is no `DistilBertForCausalLM` either in the library, so the issue you link to doesn't really have a link. Like I said, there are no pretrained checkpoints for MPNet and causal language modeling, so even if we add this architecture, you won't be able to use it in sentence-transformers since you will get garbage outputs."
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes [#21379](https://github.com/huggingface/transformers/issues/21379#issue-1563845759)
Goal: Add support to MPNetForCausalLM
Changes:
Modified [modeling_mpnet.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/mpnet/modeling_mpnet.py): added cross-attention, accepted the arguments `encoder_hidden_states` and `encoder_attention_mask`, and added the new class `MPNetForCausalLM`.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
--> @VictorSanh @thomwolf @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21414/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21414",
"html_url": "https://github.com/huggingface/transformers/pull/21414",
"diff_url": "https://github.com/huggingface/transformers/pull/21414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21414.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21413
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21413/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21413/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21413/events
|
https://github.com/huggingface/transformers/issues/21413
| 1,567,287,778
|
I_kwDOCUB6oc5dauni
| 21,413
|
Error: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose
|
{
"login": "arastumudgal",
"id": 76526750,
"node_id": "MDQ6VXNlcjc2NTI2NzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/76526750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arastumudgal",
"html_url": "https://github.com/arastumudgal",
"followers_url": "https://api.github.com/users/arastumudgal/followers",
"following_url": "https://api.github.com/users/arastumudgal/following{/other_user}",
"gists_url": "https://api.github.com/users/arastumudgal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arastumudgal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arastumudgal/subscriptions",
"organizations_url": "https://api.github.com/users/arastumudgal/orgs",
"repos_url": "https://api.github.com/users/arastumudgal/repos",
"events_url": "https://api.github.com/users/arastumudgal/events{/privacy}",
"received_events_url": "https://api.github.com/users/arastumudgal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @arastumudgal \r\nIf I am not mistaken, the fix should have been addressed in https://github.com/huggingface/transformers/pull/21347\r\nIf you install `transformers` from the `main` branch `pip install git+https://github.com/huggingface/transformers`, your script should work\r\n",
"Hi @younesbelkada \r\n\r\n`\r\nImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html`\r\n\r\n\r\nI am facing this error on jupyter notebook when loading the models.\r\n<img width=\"582\" alt=\"Screenshot 2023-02-07 at 9 04 04 AM\" src=\"https://user-images.githubusercontent.com/76526750/217141786-2dc0da11-5a16-42c0-9924-e029fe56abe1.png\">\r\n\r\n\r\n```\r\n!pip install -U ipywidgets\r\n!pip install ipywidgets --upgrade\r\n!jupyter nbextension enable --py widgetsnbextension\r\n!pip install -U jupyter\r\n```\r\n\r\n\r\nDid this too but still it isnt working, showing the same error. Any fix? It is working on google colab but not on jupyter notebook. ",
"Hi @arastumudgal \r\nThanks for the issue but this looks like an issue that is independent from `transformers` ",
"> Hi @arastumudgal If I am not mistaken, the fix should have been addressed in #21347 If you install `transformers` from the `main` branch `pip install git+https://github.com/huggingface/transformers`, your script should work\r\n\r\n\r\n\r\nJust tried this, facing the same error though. @younesbelkada ",
"@arastumudgal Maybe an update flag was missing, i.e. `pip install -U git+https://github.com/huggingface/transformers` :) \r\n\r\nPlease note that `galai` [has a pinned transformers version](https://github.com/paperswithcode/galai/blob/e3e34481aefeceff9f239f2121988566382a72b2/requirements.txt#L2), and you might run into other issues.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@gante @ArthurZucker @younesbelkada @sgugger @stevhliu @MKhalusova
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I was setting up and using Galactica language model.
Was facing this error:
**ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information.**
```
!pip install galai
import galai as gal
from galai.notebook_utils import *
```
```
model = gal.load_model("base")
model.generate("The Transformer architecture [START_REF]")  # <-- here the error came up
```
The same error came up while running this:
```
prompt = f"Question: A bat and a ball cost $\\$1.10$ in total. The bat costs $\\$1.00$ more than the ball. How much does the ball cost?\n\nAnswer:"
display_markdown(model.generate(prompt, new_doc=True, max_length=250))
```
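Until the fix lands, a workaround is to make sure only one length limit reaches `generate` — for instance by calling the model through `transformers` directly with just `max_new_tokens` set (a minimal sketch; the checkpoint size is illustrative):
```
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")

inputs = tokenizer("The Transformer architecture [START_REF]", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)  # set only one length limit
print(tokenizer.decode(outputs[0]))
```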
### Expected behavior
<img width="1179" alt="Screenshot 2023-02-02 at 9 37 50 AM" src="https://user-images.githubusercontent.com/76526750/216229408-a68fb60b-0c50-4fe9-877a-d0cac693ca46.png">
<img width="1167" alt="Screenshot 2023-02-02 at 9 38 00 AM" src="https://user-images.githubusercontent.com/76526750/216229415-f8d4f67f-8e23-4937-a57e-967c3f37357f.png">
These are the expected results. Please help.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21413/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21412
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21412/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21412/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21412/events
|
https://github.com/huggingface/transformers/issues/21412
| 1,567,173,401
|
I_kwDOCUB6oc5daSsZ
| 21,412
|
[Whisper] Word level and character level timestamps
|
{
"login": "Rishabh-Choudhry",
"id": 28778171,
"node_id": "MDQ6VXNlcjI4Nzc4MTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/28778171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rishabh-Choudhry",
"html_url": "https://github.com/Rishabh-Choudhry",
"followers_url": "https://api.github.com/users/Rishabh-Choudhry/followers",
"following_url": "https://api.github.com/users/Rishabh-Choudhry/following{/other_user}",
"gists_url": "https://api.github.com/users/Rishabh-Choudhry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rishabh-Choudhry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rishabh-Choudhry/subscriptions",
"organizations_url": "https://api.github.com/users/Rishabh-Choudhry/orgs",
"repos_url": "https://api.github.com/users/Rishabh-Choudhry/repos",
"events_url": "https://api.github.com/users/Rishabh-Choudhry/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rishabh-Choudhry/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker and @Narsil ",
"Hi @Rishabh-Choudhry .\r\n\r\nThis is impossible to do with `whisper`. Whisper simply doesn't work in such a way, it output \"timestamp\" tokens, roughly when it feels like. And that's all we can do with them.\r\n\r\nI've seen hybrid approaches where you use `wav2vec2` (and similar) to get those accurate timestamps and solve the potential conflicts. This is however outside of scope for the pipelines in my opinion. (Too complex, and requires running 2 different models, and impossible to align in the general case).\r\n\r\nhttps://github.com/m-bain/whisperX\r\n\r\nWould that work for you ?\r\n\r\n\r\n",
"This approach with DTW is more memory efficient and scalable: https://github.com/linto-ai/whisper-timestamped",
"Just going to bump this. There are several solutions out there and this is a pretty key missing feature from the transformer implementation of Whisper. E.g.\r\nhttps://github.com/jianfch/stable-ts/blob/main/stable_whisper/whisper_word_level.py",
"There's a PR opened for it: https://github.com/huggingface/transformers/pull/21427\r\n\r\nIf you look at it, it actually uncovered some issues with Whisper itself (in non timestamp mode, the default in `transformers`, not the default in `openai`.)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"NB: word level timestamps were added to openai/whisper last week. Tried it out, it seems to work. \r\nhttps://github.com/openai/whisper/commit/500d0fe9668fae5fe2af2b6a3c4950f8a29aa145",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I've investigated adding word-level timestamps to Transformers using the OpenAI approach of using the cross-attention weights. Preliminary results can be found in this Colab: https://colab.research.google.com/drive/1VWbAgzKWQsStdAA1hcumBU2uyFQX7zAB?usp=sharing",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closed by https://github.com/huggingface/transformers/pull/23205",
"@hollance Thanks for adding a nice feature. I know that using the cross attention weight to get the token level timestamp. \r\nThen, I think there is no dependence between doing additional finetuning and getting token level timestamp. What do you think?\r\nIf I want to get token level timestamp from my finetuned model, is there anything I need to be careful about? Timestamp tokens in sentence units will be attached and trained.",
"@upskyy You may need to use different `attention_heads` on the fine-tuned model. See also: https://gist.github.com/hollance/42e32852f24243b748ae6bc1f985b13a"
] | 1,675
| 1,688
| 1,685
|
NONE
| null |
### Feature request
`output = pipe(audio_file, chunk_length_s=30, return_timestamps=True)`
Get word-level and character-level timestamps from the Whisper ASR pipeline when using `return_timestamps=True`.
### Motivation
The timestamps currently returned are at stride level. For our use case, we want accurate timestamps for each word, or possibly each character.
### Your contribution
With guidance, happy to submit the PR.
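For later readers: per the discussion above this eventually shipped, and a recent `transformers` release can return per-word timestamps from the pipeline directly — a minimal sketch, assuming a recent version and a local `audio.mp3`:
```
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-small")
output = pipe("audio.mp3", chunk_length_s=30, return_timestamps="word")
print(output["chunks"])  # each chunk carries a word and its (start, end) timestamp
```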
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21412/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21411
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21411/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21411/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21411/events
|
https://github.com/huggingface/transformers/issues/21411
| 1,567,123,650
|
I_kwDOCUB6oc5daGjC
| 21,411
|
UnboundLocalError: local variable 'image_processor_class' referenced before assignment
|
{
"login": "shikhartuli",
"id": 40000988,
"node_id": "MDQ6VXNlcjQwMDAwOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/40000988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shikhartuli",
"html_url": "https://github.com/shikhartuli",
"followers_url": "https://api.github.com/users/shikhartuli/followers",
"following_url": "https://api.github.com/users/shikhartuli/following{/other_user}",
"gists_url": "https://api.github.com/users/shikhartuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shikhartuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shikhartuli/subscriptions",
"organizations_url": "https://api.github.com/users/shikhartuli/orgs",
"repos_url": "https://api.github.com/users/shikhartuli/repos",
"events_url": "https://api.github.com/users/shikhartuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/shikhartuli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I try to add a new model as per the tutorial [here](https://huggingface.co/docs/transformers/add_new_model), I get the following error with the given set of inputs:
```
$ transformers-cli add-new-model-like
What is the model you would like to duplicate? Please provide the lowercase `model_type` (e.g. roberta): roberta
What is the name (with no special casing) for your new model in the paper (e.g. RoBERTa)? NewTransformer
What identifier would you like to use for the `model_type` of this model? [newtransformer]
What lowercase name would you like to use for the module (folder) of this model? [newtransformer]
What prefix (camel-cased) would you like to use for the model classes of this model (e.g. Roberta)? [NewTransformer]
What prefix (upper-cased) would you like to use for the constants relative to this model? [NEWTRANSFORMER]
What will be the name of the config class for this model? [NewTransformerConfig]
Please give a checkpoint identifier (on the model Hub) for this new model (e.g. facebook/roberta-base):
Will your new model use the same processing class as roberta (RobertaTokenizer) (yes/no)? no
What will be the name of the tokenizer class for this model? [NewTransformerTokenizer]
Traceback (most recent call last):
File "/home/stuli/.conda/envs/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/scratch/gpfs/stuli/transformers/src/transformers/commands/transformers_cli.py", line 54, in main
service = args.func(args)
File "/scratch/gpfs/stuli/transformers/src/transformers/commands/add_new_model_like.py", line 1351, in add_new_model_like_command_factory
return AddNewModelLikeCommand(config_file=args.config_file, path_to_repo=args.path_to_repo)
File "/scratch/gpfs/stuli/transformers/src/transformers/commands/add_new_model_like.py", line 1382, in __init__
) = get_user_input()
File "/scratch/gpfs/stuli/transformers/src/transformers/commands/add_new_model_like.py", line 1583, in get_user_input
image_processor_class=image_processor_class,
UnboundLocalError: local variable 'image_processor_class' referenced before assignment
```
### Expected behavior
There should be no error with the given sequence of inputs when creating a new model.
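A minimal sketch of one possible fix (hypothetical; the names are taken from the traceback above): bind the variable on every path in `get_user_input()` before it is passed along, instead of only inside the vision-model branch.
```
# Hypothetical sketch: default the name so it exists on every code path.
image_processor_class = None
if old_image_processor_class is not None:  # assumed guard for vision models
    image_processor_class = get_user_field(
        "What will be the name of the image processor class for this model? "
    )
```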
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21411/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21410
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21410/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21410/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21410/events
|
https://github.com/huggingface/transformers/pull/21410
| 1,567,113,194
|
PR_kwDOCUB6oc5JD47d
| 21,410
|
Fix image_processor_class bug
|
{
"login": "shikhartuli",
"id": 40000988,
"node_id": "MDQ6VXNlcjQwMDAwOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/40000988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shikhartuli",
"html_url": "https://github.com/shikhartuli",
"followers_url": "https://api.github.com/users/shikhartuli/followers",
"following_url": "https://api.github.com/users/shikhartuli/following{/other_user}",
"gists_url": "https://api.github.com/users/shikhartuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shikhartuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shikhartuli/subscriptions",
"organizations_url": "https://api.github.com/users/shikhartuli/orgs",
"repos_url": "https://api.github.com/users/shikhartuli/repos",
"events_url": "https://api.github.com/users/shikhartuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/shikhartuli/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the `image_processor_class` bug.
Fixes #21411
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21410/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21410",
"html_url": "https://github.com/huggingface/transformers/pull/21410",
"diff_url": "https://github.com/huggingface/transformers/pull/21410.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21410.patch",
"merged_at": 1675347653000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21409
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21409/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21409/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21409/events
|
https://github.com/huggingface/transformers/pull/21409
| 1,567,108,639
|
PR_kwDOCUB6oc5JD4CT
| 21,409
|
Fix task guide formatting
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
Fixes formatting for some of the task guides where links to `AutoModelForX` in the Train sections aren't properly rendered because there wasn't a blank line after the `<Tip>` block preceding the text.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21409/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21409",
"html_url": "https://github.com/huggingface/transformers/pull/21409",
"diff_url": "https://github.com/huggingface/transformers/pull/21409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21409.patch",
"merged_at": 1675361187000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21408
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21408/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21408/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21408/events
|
https://github.com/huggingface/transformers/pull/21408
| 1,566,776,922
|
PR_kwDOCUB6oc5JCwUg
| 21,408
|
Refactor model summary
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I don't think a model summary with sections per modality can work. I'm okay with removing the specific of each model currently in the model summary (make sure they are present in the corresponding model pages however as we don't want to lose anything) but I think we need to have a better structure with h2 sections for modalities than h3 sections for different kinds of models (in NLP encoder/decoder/encoder-decoder, in CV transformer/convnet etc.).",
"> but I think we need to have a better structure with h2 sections for modalities than h3 sections for different kinds of models\r\n\r\nGood point, this’ll work better and allow me to include convnets more naturally!\r\n\r\nGreat questions @MKhalusova, let me try and clarify (and also refine the purpose of the doc while doing so)! 🙂\r\n\r\n> My main issue is that it is not entirely clear to me what audience these documents target and what they aim to achieve.\r\n\r\nThe audience is a beginner or someone who is coming from a different modality (say like, from NLP to CV), and the goal is to provide a high-level conceptual overview of the different model types available in each modality.\r\n\r\n> If I know what model I am interested in, its model doc is much more useful, as it has all of the information.\r\n\r\nFor sure, the model docs fulfill the role of providing all the nitty-gritty information. But sometimes, this can be too much detail, and you can't really make connections between models or understand why you should use one over the other because you're lacking context. The model summary doc tries to go up a level and give users an introductory overview instead of all the technical details. If they’re interested in learning more, they can follow the links to the specific model doc page.\r\n\r\n> If I want to learn about the difference between encoders and decoders, the information is in the course.\r\n\r\nThe course only has very general information about encoders and decoders. For example, it doesn’t tell you how BERT and DeBERTa are different.\r\n\r\n> If I want to compare two different models for the same task, I have to jump up and down in the doc and may learn some differences in how they work internally, but what if I’m interested in other aspects such as benchmarks, size of the model, how recent it is, etc.?\r\n\r\nYeah the structure I have now is not the best! 😅 But I think @sgugger's suggestion will improve this quite a bit, where it’ll be more readable, and related sections will be more localized, so you don’t have to jump around as much. The goal though is not to give users all the technical details about a model (size, performance, etc.).\r\n\r\n> So my question is, what are we aiming to achieve with this doc?\r\n\r\nIt can be difficult to approach Transformers when there are so many X-former variants. This doc hopes to provide users with a beginner-friendly guide to them so they can make connections and be like oh wait, this CV model is just like an NLP model, and it's just the input that's different. I think we also want to give more context about the models in terms of design decisions and constraints (e.g., Swin does _x_, unlike ViT because _y_). In a nutshell, I suppose it's to give users the bigger picture of the Transformer model landscape and give them a mental framework to categorize and think about Transformer models.\r\n\r\n> but what if I’m interested in other aspects such as benchmarks\r\n> Are we creating a place where one can compare models on several aspects?\r\n\r\nI think we can boost the impact of this doc even more by addressing those issues you raise above. An embedded Space at the top of the doc that lets users discover models based on certain parameters (longer sequences, tasks, memory-efficiency, multilinguality, etc.) would be very useful and guide users toward selecting a model for their use-case. I can look into this as a next step! 🙂",
"Updated the structure to be:\r\n\r\n```\r\n## Computer vision\r\n\r\n### Encoder\r\n### ConvNet\r\n\r\n## NLP\r\n\r\n### Encoder\r\n### Decoder\r\n...\r\n```\r\n\r\nIf this looks good to everyone, I'll go ahead and fill out the rest of the sections!",
"Ok I think this is ready for review now, time to call in the experts! The goal of the doc is to provide a high-level overview of the model types in each modality, so users have more context and can start making connections.\r\n\r\n@sayakpaul, would you mind reviewing the computer vision and maybe the multimodal sections? @sanchit-gandhi, if you could take a look at the audio section please (I promise it's way shorter this time 😅 )? Thank you both so much! 👏",
"I kept most of the model summary infos that wasn't redundant with what was already on the model doc pages (for example, the original GPT); these have been added under the **Tips** section. I've also split out the attention mechanisms onto their own page, which I'll expand on later in a separate PR with additional attention types."
] | 1,675
| 1,676
| 1,676
|
MEMBER
| null |
This PR refactors the [model summary](https://huggingface.co/docs/transformers/model_summary):
- updated with speech/audio, computer vision, and multimodal models (picked based on the ones with the most doc views, this can be refined to show or hide other models)
- embeds a timeline of when models are released to provide a visual reference
- provides structure and narrative - instead of a list - to discuss the high-level differences between models (users can compare the models, see trends and progression in the larger modelscape)
- removed the [attention section](https://huggingface.co/docs/transformers/model_summary#more-technical-aspects), which will get its own page (and possibly be expanded with more attention types) in the conceptual guide section
Would love to hear what you think about the direction of this doc please! @sgugger @MKhalusova @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21408/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21408",
"html_url": "https://github.com/huggingface/transformers/pull/21408",
"diff_url": "https://github.com/huggingface/transformers/pull/21408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21408.patch",
"merged_at": 1676486114000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21407
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21407/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21407/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21407/events
|
https://github.com/huggingface/transformers/issues/21407
| 1,566,659,240
|
I_kwDOCUB6oc5dYVKo
| 21,407
|
Multi-GPU training
|
{
"login": "iamnmn9",
"id": 41872440,
"node_id": "MDQ6VXNlcjQxODcyNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/41872440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamnmn9",
"html_url": "https://github.com/iamnmn9",
"followers_url": "https://api.github.com/users/iamnmn9/followers",
"following_url": "https://api.github.com/users/iamnmn9/following{/other_user}",
"gists_url": "https://api.github.com/users/iamnmn9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamnmn9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamnmn9/subscriptions",
"organizations_url": "https://api.github.com/users/iamnmn9/orgs",
"repos_url": "https://api.github.com/users/iamnmn9/repos",
"events_url": "https://api.github.com/users/iamnmn9/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamnmn9/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"If you have multiple GPUs it will run on multiple GPUs.",
"\r\nI have 8 gpu's in this machine.\r\n\r\nI think its not taking all 8 gpu's. Already tried changing batch_sizes and with multiple of 8"
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### Feature request
What can we do to train on multiple GPUs when using run_clm.py?
### Motivation
Multi-GPU training for faster training.
### Your contribution
Can we add this to the script?
model = nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
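For reference, a minimal self-contained sketch of the proposed wrapping (the model name below is just an example; note that the `Trainer` used by run_clm.py already applies `nn.DataParallel` automatically when more than one GPU is visible, so the script should not need this change):
```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example model; any causal LM works

# Mirrors what Trainer does internally when it detects multiple GPUs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
```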
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21407/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21406
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21406/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21406/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21406/events
|
https://github.com/huggingface/transformers/pull/21406
| 1,566,605,514
|
PR_kwDOCUB6oc5JCLES
| 21,406
|
Enable PyTorch/XLA Fully Sharded Data Parallel (FSDP)
|
{
"login": "AlexWertheim",
"id": 90242206,
"node_id": "MDQ6VXNlcjkwMjQyMjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/90242206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexWertheim",
"html_url": "https://github.com/AlexWertheim",
"followers_url": "https://api.github.com/users/AlexWertheim/followers",
"following_url": "https://api.github.com/users/AlexWertheim/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexWertheim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexWertheim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexWertheim/subscriptions",
"organizations_url": "https://api.github.com/users/AlexWertheim/orgs",
"repos_url": "https://api.github.com/users/AlexWertheim/repos",
"events_url": "https://api.github.com/users/AlexWertheim/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexWertheim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks so much for the feedback!\r\n\r\n> I think this can all be added via the new `fsdp_config` training arguments instead of adding four new training arguments in a class that already has too many of them (and for which users have complained a lot about). Adding an `xla` boolean inside and relying on the existing `fsdp_min_num_params`, then adding new keys for the two other arguments you want to add would work better.\r\n\r\nThe new `fsdp_config` training argument is great. I envision the strategy would be something like adding four keys to the `fsdp_config` dictionary:\r\n- An `xla` boolean which indicates whether the user is using Fairscale FSDP or XLA FSDP\r\n- An `xla_config` string which points to the location of a JSON file which stores the XLA FSDP configuration parameters\r\n- The arguments `xla_fsdp_transformer_layer_class_to_wrap` and `xla_grad_ckpt` as before\r\n\r\nDoes this make sense to you? One thing I am wondering about is why `fsdp_transformer_layer_class_to_wrap` hasn't been absorbed into `fsdp_config` the same way `fsdp_min_num_params` has. It would be a bit strange to have `xla_fsdp_transformer_layer_class_to_wrap` as part of the `fsdp_config` dictionary and not `fsdp_transformer_layer_class_to_wrap`.\r\n\r\n> The other thing to change is that `self.model` should always be the original model, not the wrapped one, which is one you need the hack on the signature. That attribute should be left as is.\r\n\r\nJust to clarify, do you mean that we should set `self.model = model` before we modify `model`'s forward signature, or that we shouldn't set `self.model = model` at all? The former is no problem, but I think we need to do need to set `self.model = model`; among other things, without it, the program hits a segfault between training and evaluation.\r\n\r\n",
"Your plan for the config sounds sound, cc @pacman100 on the question around `fsdp_transformer_layer_class_to_wrap`.\r\n\r\nRegarding the `self.model` question, it looks like the traditional FSDP also changes `self.model` to use the FSDP model, though it doesn't look like it updates the signature. Is it missing there as well?",
"> Your plan for the config sounds sound, cc @pacman100 on the question around `fsdp_transformer_layer_class_to_wrap`.\r\n\r\nExcellent, thanks. If it helps, I'm happy to add `fsdp_transformer_layer_class_to_wrap` to the`fsdp_config` training argument as part of my PR, and correct a few small typos I noticed.\r\n\r\n> Regarding the `self.model` question, it looks like the traditional FSDP also changes `self.model` to use the FSDP model, though it doesn't look like it updates the signature. Is it missing there as well?\r\n\r\nGreat question. I wasn't sure, so I asked @ronghanghu about it. He pointed out to me that we likely do not need to update the signature for either case anymore, now that XLA FSDP wrapping is occurring in `_wrap_model`. Indeed, `_remove_unused_columns` is called by `get_train_dataloader` before `_wrap_model` is called in`inner_training_loop`, and so it will use the original unwrapped model's signature. Moreover, while the functions `get_eval_dataloader` and `get_test_dataloader` call `_remove_unused_columns` after the model is wrapped, this should be ok because `self.self._signature_columns` is already set in the previous call to `get_train_dataloader`. I still need to test to verify this, but once I do, I can remove the update to the wrapped model signature. In the event that it's necessary for XLA FSDP, it will also be necessary for traditional FSDP. \r\n\r\n",
"> I'm happy to add fsdp_transformer_layer_class_to_wrap to the fsdp_config training argument as part of my PR, and correct a few small typos I noticed.\r\n\r\nHello @AlexWertheim, adding `fsdp_transformer_layer_class_to_wrap` to `fsdp_config` would be great. I had this change in my backlog but it would be great if this PR does that alongside the conterpart XLA changes.\r\n\r\nI feel only boolean `xla` would be enough along with `grad_ckpt`, `compute_dtype` and `buffer_dtype` to the config. The other args `fsdp_min_num_params ` and `fsdp_transformer_layer_class_to_wrap` can be used in either cases and doc mentioning that `grad_ckpt`, `compute_dtype` and `buffer_dtype` is only applicable with `xla` should be enough along with a warning when fsdp is being used without xla with this any of these fields passed.\r\n\r\nI feel `xla_config` as a path to file inside `fsdp_config` which itself is a path to file would be lot of load on user\r\n\r\n\r\n\r\n\r\n\r\n",
"> Hello @AlexWertheim, adding `fsdp_transformer_layer_class_to_wrap` to `fsdp_config` would be great. I had this change in my backlog but it would be great if this PR does that alongside the conterpart XLA changes.\r\n\r\nThanks @pacman100, I'd be happy to do this. One question I had was how we should handle the (unusual) situation when both the deprecated flag `fsdp_transformer_layer_cls_to_wrap` and the option within `fsdp_config` are both specified. With `fsdp_min_num_params`, there is the reasonably simple approach to take the max of the two specified numbers. In the case of `fsdp_transformer_layer_cls_to_wrap`, there are a few approaches that come to mind:\r\n- Raise an error\r\n- Raise a warning and use the `fsdp_config` specified string\r\n- Use both layer classes\r\n\r\nWhat do you think? \r\n\r\nAlso, XLA FSDP supports automatic wrapping based on a set of layer names. I believe Fairscale FSDP also does as well, but currently, `fsdp_transformer_layer_cls_to_wrap` only accepts a single string as input. What do you think about expanding `fsdp_transformer_layer_cls_to_wrap` to accept a list of strings? The modifications to the existing Fairscale FSDP logic will be quite small - right now, I think it just passes a singleton set to the auto wrap policy. \r\n\r\n \r\n> I feel only boolean `xla` would be enough along with `grad_ckpt`, `compute_dtype` and `buffer_dtype` to the config. The other args `fsdp_min_num_params ` and `fsdp_transformer_layer_class_to_wrap` can be used in either cases and doc mentioning that `grad_ckpt`, `compute_dtype` and `buffer_dtype` is only applicable with `xla` should be enough along with a warning when fsdp is being used without xla with this any of these fields passed.\r\n> \r\n> I feel `xla_config` as a path to file inside `fsdp_config` which itself is a path to file would be lot of load on user\r\n\r\nWith appreciation for the confusion that this might cause the user, I think it's still important to allow the user to specify XLA FSDP specific configuration parameters via an `xla_fsdp_settings` flag. The [XLA FSDP parameters](https://github.com/pytorch/xla/blob/master/torch_xla/distributed/fsdp/xla_fully_sharded_data_parallel.py#L122-L240) actually differ quite substantially from [Fairscale FSDP parameters](https://github.com/facebookresearch/fairscale/blob/v0.4.13/fairscale/nn/data_parallel/fully_sharded_data_parallel.py#L170-L291), and so I think it would be valuable for the user to be able to specify the XLA FSDP arguments that they want. Adding a separate flag for each non-redundant XLA FSDP parameter seems like it might bloat the `fsdp_config` file with too many XLA specific arguments. If they don't supply an XLA config file, then no harm done - the XLA FSDP default parameters will be used.\r\n\r\nCompute dtype and buffer dtype are actually already Fairscale FSDP arguments, and so these could be inferred from existing flags if you'd like, though I think it's best for the user to specify them through `xla_fsdp_settings`. (Right now, the trainer sets a mixed precision policy based on whether the `bf16` or `fp16` training args are specified). \r\n",
"Still happy to discuss what the final changes look like, but as a proof of concept, I've pushed some changes which (among other things) implement some of the items discussed in prior comments:\r\n- `fsdp_transformer_layer_cls_to_wrap` is now moved inside `fsdp_config`, and is treated as a list of strings instead of a single string. In the event that the user enters `fsdp_transformer_layer_cls_to_wrap` as a string in the JSON file, the program converts this to a list. \r\n- If the user uses the deprecated version of `fsdp_transformer_layer_cls_to_wrap` outside `fsdp_config`, the program issues a warning and combines it (as a list of strings) with whatever `fsdp_transformer_layer_cls_to_wrap` is set to. \r\n- The PyTorch FSDP logic has been modified to iterate through the elements of `fsdp_transformer_layer_cls_to_wrap` instead of passing a singleton set to the auto-wrap policy.\r\n- I noted that `fsdp_min_num_params` wasn't getting set correctly. This is because `getattr(dict, key, default)` seems to always return the default. I've replaced instances of `getattr(dict, key, default)` with `dict.get(key, default)`, including with `fsdp_min_num_params`. \r\n- XLA FSDP parameters are still loaded into their own separate dictionary from the JSON file specified by `xla_fsdp_settings`, though I am happy to continue discussion over whether this should be modified\r\n- Defaults are now correctly set for the flags `xla` and `xla_fsdp_grad_ckpt`",
"Hello @AlexWertheim, thanks a lot for all the changes and detailed notes:\r\n\r\nAt overall level, what if the fsdp_config looked like below. i.e., xla_fsdp_settings was a dict instead of path to another json. Also, `xla_fsdp_grad_ckpt` could be pushed into `xla_fsdp_settings` as `grad_ckpt` param or something like that as it is unique to xla fsdp. This way user would need to give path to only one json `fsdp_config` while all xla params are in `xla_fsdp_settings` when `xla` is True.\r\n\r\n```json\r\n{\r\n \"fsdp_transformer_layer_cls_to_wrap\": \"T5Block\",\r\n \"xla\": true,\r\n \"xla_fsdp_settings\": {\r\n \"buffer_dtype\": \"bfloat16\",\r\n \"compute_dtype\": \"bfloat16\",\r\n \"grad_ckpt\": true,\r\n }\r\n}\r\n``` ",
"> At overall level, what if the fsdp_config looked like below. i.e., xla_fsdp_settings was a dict instead of path to another json. Also, `xla_fsdp_grad_ckpt` could be pushed into `xla_fsdp_settings` as `grad_ckpt` param or something like that as it is unique to xla fsdp. This way user would need to give path to only one json `fsdp_config` while all xla params are in `xla_fsdp_settings` when `xla` is True.\r\n> \r\n> ```json\r\n> {\r\n> \"fsdp_transformer_layer_cls_to_wrap\": \"T5Block\",\r\n> \"xla\": true,\r\n> \"xla_fsdp_settings\": {\r\n> \"buffer_dtype\": \"bfloat16\",\r\n> \"compute_dtype\": \"bfloat16\",\r\n> \"grad_ckpt\": true,\r\n> }\r\n> }\r\n> ```\r\n\r\n@pacman100 Thanks very much for the feedback. I think the suggestion to turn `xla_fsdp_settings` into a dictionary is a great one, and a good way to resolve the issue of having a separate configuration path within `fsdp_config`. \r\n\r\nAs far as absorbing `xla_fsdp_grad_ckpt` into `xla_fsdp_settings` goes, I agree that it is nice to have all of the XLA FSDP related configuration information in one place from a logical point of view. That being said, I do have a concern, namely that `xla_fsdp_settings` was supposed to keep track of the XLA FSDP wrapping parameters, and `xla_fsdp_grad_ckpt` is not part of the XLA FSDP wrapping arguments. I worry that the user will get confused about what this flag is for, and in particular, that users will not realize that they have to add this gradient checkpointing flag (or, for that matter, any future XLA FSDP related flags that are not themselves XLA FSDP wrapping params). What do you think?\r\n\r\nWould also love to hear your thoughts, @ronghanghu ",
"@sgugger @pacman100 Thanks for taking time to review this pr. As Pytorch 2.0 branch cut just happened and will most likely be release in ~ a month, it will be great if we can enable the FSDP for HF on the master(I believe we will also need to make some other changes, but that can happen in the subsequent pr). This way we can share our benchmark using FSDP with HF more broadly when we doing the announcement/blog post for the 2.0 release.",
"@JackCaoG This should be merged in the next few days :-)\r\n@AlexWertheim could you rebase your branch on main to fix the tests, and run `make style` on your branch to fix the quality jobs?\r\n\r\n@pacman100 let me know when you are happy with all the changes.",
"> @AlexWertheim could you rebase your branch on main to fix the tests, and run `make style` on your branch to fix the quality jobs?\r\n\r\nDone! Please note that all commits including and after [77e99c8](https://github.com/huggingface/transformers/pull/21406/commits/77e99c8452aa9fc7b7ca5b03c0d69b2a28e71b0e) are just concerned with rebasing and formatting/style changes. Apologies for the many commits; I had some difficulties running all of the automatic style checks, but I think I've run all of them now, and the CircleCI checks are now passing. \r\n\r\nCommit [e65dc0b](https://github.com/huggingface/transformers/pull/21406/commits/e65dc0b65a21417a15e30498fa3fd6447334d7ed) modified a check which would have prevented XLA FSDP wrapping when `local_rank = -1`, which is not correct for TPUs. As discussed in prior comments, commit [632318a](https://github.com/huggingface/transformers/pull/21406/commits/632318ac88e3d313843023cf0eff8bb9ac45716f) modified the argument `xla_fsdp_settings` to be a dictionary instead of a path to a JSON file; note that `xla_fsdp_grad_ckpt` was not absorbed into `xla_fsdp_settings`, for the reasons mentioned earlier. \r\n\r\nPlease let me know if there are any additional questions or concerns upon review. Thanks!",
"Hi,\r\nI was able to do parallel training flawlessly without FSDP on xlm-roberta-**base** on the 8 cores of the TPU-v3 VM (because the model and batch fit properly within each core) by following the [run_glue example](https://github.com/huggingface/transformers/tree/main/examples/pytorch#running-on-tpus) for TPUs.\r\n\r\nNow I'm trying to get the XL version (facebook/xlm-roberta-**xl**) to work with XLA FSDP with the Trainer integration (set as \"full_shard\") but I'm getting these memory errors when running the second or first batch:\r\n`0%| | 2/939 [03:05<28:08:11, 108.10s/it]Exception in device=TPU:0: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found.\r\n(0) RESOURCE_EXHAUSTED: Attempting to reserve 13.20G at the bottom of memory. That was not possible. There are 10.59G free, 0B reserved, and 10.59G reservable.`\r\n\r\nFrom my understanding, the model was supposed to split loaded onto the TPU cores, along with whatever Zero-3 entails, but it doesn't seem to be happening.\r\nI tried setting `per_device_train_batch_size=1`, limited tokenizer `max_length=200` and a played with a bunch of `fsdp_config` parameters but none worked.\r\n\r\nAlso I'm confused if I'm still supposed to use the xla_spawn.py script to run with FSDP or not. I tried both and got the same error though.\r\n\r\nAdditionally, trying to debug by myself I found that this [line](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L617) condition is never True because the conditions on line 601 prevent it.\r\n\r\nMaybe we could have an example on how to use this feature with the Trainer?\r\n\r\nThanks in advance!",
"@AlexWertheim This seems to be a device side oom, any advise you can give?"
] | 1,675
| 1,681
| 1,676
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR enables the user to make use of the [PyTorch/XLA implementation of FSDP](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp), including the newly added [auto-wrap feature](https://github.com/pytorch/xla/pull/4318). Four arguments have been added to `training_args.py` to facilitate this functionality (a usage sketch follows the list):
- `xla_fsdp`: this flag is a string containing the location of a `.json` file which specifies the FSDP arguments the user wants to use when wrapping their model.
- `xla_fsdp_min_num_params`: this flag is an int which sets a size-based automatic wrapping policy: any module with at least `xla_fsdp_min_num_params` parameters is automatically FSDP wrapped.
- `xla_fsdp_transformer_layer_cls_to_wrap`: this flag is a list of (case-sensitive) strings which sets a layer-class-based automatic wrapping policy: any module whose class name matches one of the listed strings is automatically FSDP wrapped.
- `xla_fsdp_grad_ckpt`: this flag is a bool which determines whether gradient checkpointing is enabled for the automatically wrapped layers.
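For illustration, a minimal sketch of how these flags might be combined. This is a sketch only: it assumes the four arguments exactly as described above, and per the review discussion below most of them ultimately moved into `fsdp_config` in the merged version, so treat the names and values as illustrative.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    xla_fsdp="xla_fsdp_config.json",  # JSON file with XLA FSDP wrapping arguments
    xla_fsdp_min_num_params=100_000_000,  # size-based auto-wrap threshold (example value)
    xla_fsdp_transformer_layer_cls_to_wrap=["T5Block"],  # class-based auto-wrap (example class)
    xla_fsdp_grad_ckpt=True,  # gradient checkpointing on auto-wrapped layers
)
```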
# Design notes and future work
1) This PR is an updated version of [this closed PR](https://github.com/huggingface/transformers/pull/20774), which enabled FSDP for a more restricted class of models. This PR now enables nested FSDP wrapping via two auto-wrap policies, avoiding the restrictions of the previous PR.
2) For very large model sizes (greater than, say, 128B parameters), users may see host-side OOMs on TPUs during initialization. This can be mitigated by initializing layer weights immediately after construction, wrapping with FSDP, and moving onto the XLA device, as can be seen in [this branch](https://github.com/AlexWertheim/transformers/blob/einsum/src/transformers/models/gpt2/modeling_gpt2.py#L690-L723). We opted to enable FSDP wrapping at the trainer level, since it does not necessitate model-specific changes and does not disrupt the existing architecture for model construction and initialization.
3) Checkpointing support for XLA FSDP is not included as part of this PR. We hope to add it soon via another PR.
4) We have not included testing for XLA FSDP as part of this PR. We would like to add this in a future PR.
Thanks to @ronghanghu for his assistance in the preparation of this PR. Among many other contributions, the observations that one must copy the model's forward method and replace the optimizer step are his.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? -->
## Who can review?
@sgugger @JackCaoG
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21406/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21406/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21406",
"html_url": "https://github.com/huggingface/transformers/pull/21406",
"diff_url": "https://github.com/huggingface/transformers/pull/21406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21406.patch",
"merged_at": 1676880384000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21405
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21405/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21405/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21405/events
|
https://github.com/huggingface/transformers/pull/21405
| 1,566,404,503
|
PR_kwDOCUB6oc5JBgcm
| 21,405
|
Generate: decoder-only models can generate with `inputs_embeds`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
2-in-1 PR 🔥
### 1 - Experimenting with input embeddings
Accepting `.generate()` calls with `inputs_embeds` on decoder-only models is a long-standing request (#6535) -- see [this recent comment](https://github.com/huggingface/transformers/issues/6535#issuecomment-1353658984) in particular and its reactions.
It has to be added on a per-model basis, and this PR adds the necessary changes for GPT2. Other models will throw an informative exception if the user passes `inputs_embeds`, asking them to check this PR and implement the same pattern on the model they want to use it with 🤗
Please note that it is still expected that the user passes `input_ids`, i.e.
```python
outputs = model.generate(input_ids, inputs_embeds=inputs_embeds)
```
This is because decoder-only models expect the prompt to be present in the output, and this is the only way to preserve it! `input_ids` can also be omitted and, in that case, the output won't contain the prompt.
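A slightly fuller sketch with GPT-2 (the model this PR covers); the embeddings here are just the model's own token embeddings, but any tensor of the right shape works:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Hello world", return_tensors="pt").input_ids
inputs_embeds = model.get_input_embeddings()(input_ids)

# With input_ids: the prompt is preserved in the generated output.
with_prompt = model.generate(input_ids, inputs_embeds=inputs_embeds)

# Without input_ids: generation still works, but the output omits the prompt.
without_prompt = model.generate(inputs_embeds=inputs_embeds)
```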
### 2 - BLIP 2 (cc @NielsRogge)
This change is a soft-requirement for BLIP 2. The alternatives to add BLIP 2 are:
1. Support `.generate()` with `inputs_embeds` on decoder-only models
2. Change OPT's reference implementation to accept a new kwarg (`query_embeds`, that gets appended to the embeddings of the `input_ids`)
Option 1, this PR, seems like a better option according to our library philosophy :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21405/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21405/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21405",
"html_url": "https://github.com/huggingface/transformers/pull/21405",
"diff_url": "https://github.com/huggingface/transformers/pull/21405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21405.patch",
"merged_at": 1675288239000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21404
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21404/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21404/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21404/events
|
https://github.com/huggingface/transformers/pull/21404
| 1,566,218,839
|
PR_kwDOCUB6oc5JA52U
| 21,404
|
Added DagshubCallback
|
{
"login": "jinensetpal",
"id": 52078103,
"node_id": "MDQ6VXNlcjUyMDc4MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/52078103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinensetpal",
"html_url": "https://github.com/jinensetpal",
"followers_url": "https://api.github.com/users/jinensetpal/followers",
"following_url": "https://api.github.com/users/jinensetpal/following{/other_user}",
"gists_url": "https://api.github.com/users/jinensetpal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinensetpal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinensetpal/subscriptions",
"organizations_url": "https://api.github.com/users/jinensetpal/orgs",
"repos_url": "https://api.github.com/users/jinensetpal/repos",
"events_url": "https://api.github.com/users/jinensetpal/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinensetpal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you so much for the lightning review, @sgugger!\r\n\r\nI've made the relevant changes. I also added a check within `get_available_reporting_integrations` to ensure available integrations are reported accurately. \r\n\r\nSince `DagsHubCallback` subclasses `MLFlowCallback`, it's still separate. If both integrations are requested, it does not execute twice. This is what the `get_available_reporting_integrations` looks like when moving packages around:\r\n\r\n\r\n\r\nHope this was helpful! :)",
"Thanks again for your contribution!",
"Thank you for your rapid responses!! Have an awesome rest of day 🙂"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds a Trainer integration with [DagsHub](https://dagshub.com/). It extends the existing MLflow integration to also track and push artifacts into DagsHub repositories using DVC. The idea is to allow users to integrate an MLOps stack into their current setups with minimal effort.
Here's a colab notebook with a sample integration: https://colab.research.google.com/drive/1KEeQYp3jD8kmkMMhGKR1R1C6cEjnbJLZ
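For a quick feel of the integration, a minimal sketch of enabling it through the usual `report_to` mechanism (this assumes the callback registers under the name "dagshub", the same pattern the other reporting integrations follow):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    report_to=["dagshub"],  # extends the MLflow integration with DVC artifact tracking
)
```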
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21404/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21404",
"html_url": "https://github.com/huggingface/transformers/pull/21404",
"diff_url": "https://github.com/huggingface/transformers/pull/21404.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21404.patch",
"merged_at": 1675277507000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21403
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21403/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21403/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21403/events
|
https://github.com/huggingface/transformers/issues/21403
| 1,566,203,110
|
I_kwDOCUB6oc5dWlzm
| 21,403
|
How can I send a request with parameters to the Inference API?
|
{
"login": "ZJU-Fangyin",
"id": 61076726,
"node_id": "MDQ6VXNlcjYxMDc2NzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/61076726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZJU-Fangyin",
"html_url": "https://github.com/ZJU-Fangyin",
"followers_url": "https://api.github.com/users/ZJU-Fangyin/followers",
"following_url": "https://api.github.com/users/ZJU-Fangyin/following{/other_user}",
"gists_url": "https://api.github.com/users/ZJU-Fangyin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZJU-Fangyin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZJU-Fangyin/subscriptions",
"organizations_url": "https://api.github.com/users/ZJU-Fangyin/orgs",
"repos_url": "https://api.github.com/users/ZJU-Fangyin/repos",
"events_url": "https://api.github.com/users/ZJU-Fangyin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZJU-Fangyin/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi there, this is the Transformers repository. You can address your questions about the inference API [here](https://github.com/huggingface/api-inference-community)",
"Can I modify parameters of Hosted inference API?\r\n For example, I want the multiple output of this Summarization task rather than one:\r\n<img width=\"459\" alt=\"image\" src=\"https://user-images.githubusercontent.com/61076726/216101165-e26760d7-2163-42b3-bb53-44136ab032da.png\">\r\n\r\nHow can I achieve this? Many Thanks!",
"> Hi there, this is the Transformers repository. You can address your questions about the inference API [here](https://github.com/huggingface/api-inference-community)\r\n\r\nThanks a lot! I want to know if I can modify the parameters of **Hosted inference API** so that the demo of the webpage outputs multiple results?"
] | 1,675
| 1,675
| null |
NONE
| null |
### Model description
<img width="999" alt="image" src="https://user-images.githubusercontent.com/61076726/216067290-7969e29b-8c83-41fa-80a6-6c71bc4bac16.png">
I can't find where I can modify these parameters, and does inference generation not include beam search?
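For reference, a minimal sketch of passing generation parameters in an Inference API request over HTTP (the model id and token below are placeholders, and whether the hosted widget on the model page honors these parameters is a separate question):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/<model-id>"  # placeholder model id
headers = {"Authorization": "Bearer <hf_token>"}  # placeholder token

payload = {
    "inputs": "Text to summarize ...",
    # Standard generate() kwargs, e.g. beam search with several returned sequences.
    "parameters": {"num_beams": 4, "num_return_sequences": 3},
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```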
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21403/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/21402
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21402/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21402/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21402/events
|
https://github.com/huggingface/transformers/issues/21402
| 1,566,196,273
|
I_kwDOCUB6oc5dWkIx
| 21,402
|
ModelError while deploying FlanT5-xl
|
{
"login": "RonLek",
"id": 28918901,
"node_id": "MDQ6VXNlcjI4OTE4OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/28918901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonLek",
"html_url": "https://github.com/RonLek",
"followers_url": "https://api.github.com/users/RonLek/followers",
"following_url": "https://api.github.com/users/RonLek/following{/other_user}",
"gists_url": "https://api.github.com/users/RonLek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonLek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonLek/subscriptions",
"organizations_url": "https://api.github.com/users/RonLek/orgs",
"repos_url": "https://api.github.com/users/RonLek/repos",
"events_url": "https://api.github.com/users/RonLek/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonLek/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hello @RonLek \r\n\r\nThanks for the issue! \r\nNote that starting from `flan-t5-xl`, the weights of the model are sharded. \r\nSharded weights loading has been supported after the release of `transformers==4.17.0` (precisely in `transformers==4.18.0`: https://github.com/huggingface/transformers/releases/tag/v4.18.0 ), so I think the fix should be updating the `transformers` version to a more recent one, e.g. `4.26.0` or `4.25.0`.",
"Hi @younesbelkada and @RonLek ! I have the same issue deploying `google/flan-t5-xxl` on SageMaker.\r\n\r\nI've tried to update to `transformers==4.26.0` by providing `code/requirements.txt` through `s3://sagemaker-eu-north-1-***/model.tar.gz`:\r\n\r\n```python\r\n# Hub Model configuration. https://huggingface.co/models\r\nhub: dict = {\"HF_MODEL_ID\": \"google/flan-t5-xxl\", \"HF_TASK\": \"text2text-generation\"}\r\n\r\n# Create Hugging Face Model Class\r\nhuggingface_model = HuggingFaceModel(\r\n transformers_version=\"4.17.0\",\r\n pytorch_version=\"1.10.2\",\r\n py_version=\"py38\",\r\n model_data=\"s3://sagemaker-eu-north-1-***/model.tar.gz\",\r\n env=hub,\r\n role=role,\r\n)\r\n```\r\n\r\nObserving the AWS logs I can see that `transformers==4.26.0` was installed:\r\n```\r\nThis is an experimental beta features, which allows downloading model from the Hugging Face Hub on start up. It loads the model defined in the env var `HF_MODEL_ID`\r\n/opt/conda/lib/python3.8/site-packages/huggingface_hub/file_download.py:588: FutureWarning: `cached_download` is the legacy way to download files from the HF hub, please consider upgrading to `hf_hub_download` warnings.warn(\r\n#015Downloading: 0%\\| \\| 0.00/11.0k [00:00<?, ?B/s]#015Downloading: 100%\\|██████████\\| 11.0k/11.0k [00:00<00:00, 5.49MB/s]\r\n#015Downloading: 0%\\| \\| 0.00/674 [00:00<?, ?B/s]#015Downloading: 100%\\|██████████\\| 674/674 [00:00<00:00, 663kB/s]\r\n#015Downloading: 0%\\| \\| 0.00/2.20k [00:00<?, ?B/s]#015Downloading: 100%\\|██████████\\| 2.20k/2.20k [00:00<00:00, 2.24MB/s]\r\n#015Downloading: 0%\\| \\| 0.00/792k [00:00<?, ?B/s]#015Downloading: 100%\\|██████████\\| 792k/792k [00:00<00:00, 43.5MB/s]\r\n#015Downloading: 0%\\| \\| 0.00/2.42M [00:00<?, ?B/s]#015Downloading: 0%\\| \\| 4.10k/2.42M [00:00<01:04, 37.5kB/s]#015Downloading: 1%\\| \\| 28.7k/2.42M [00:00<00:16, 147kB/s] #015Downloading: 4%\\|▎ \\| 86.0k/2.42M [00:00<00:07, 318kB/s]#015Downloading: 9%\\|▊ \\| 209k/2.42M [00:00<00:03, 633kB/s] #015Downloading: 18%\\|█▊ \\| 438k/2.42M [00:00<00:01, 1.16MB/s]#015Downloading: 37%\\|███▋ \\| 897k/2.42M [00:00<00:00, 2.18MB/s]#015Downloading: 76%\\|███████▌ \\| 1.83M/2.42M [00:00<00:00, 4.24MB/s]#015Downloading: 100%\\|██████████\\| 2.42M/2.42M [00:00<00:00, 3.12MB/s]\r\n#015Downloading: 0%\\| \\| 0.00/2.54k [00:00<?, ?B/s]#015Downloading: 100%\\|██████████\\| 2.54k/2.54k [00:00<00:00, 2.62MB/s]\r\nWARNING - Overwriting /.sagemaker/mms/models/google__flan-t5-xxl ...\r\nCollecting transformers==4.26.0 Downloading transformers-4.26.0-py3-none-any.whl (6.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.3/6.3 MB 65.9 MB/s eta 0:00:00\r\nRequirement already satisfied: requests in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (2.28.1)\r\nCollecting huggingface-hub<1.0,>=0.11.0 Downloading huggingface_hub-0.12.0-py3-none-any.whl (190 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 190.3/190.3 kB 46.0 MB/s eta 0:00:00\r\nRequirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (1.23.3)\r\nRequirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (0.13.0)\r\nRequirement already satisfied: packaging>=20.0 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (21.3)\r\nRequirement 
already satisfied: tqdm>=4.27 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (4.64.1)\r\nRequirement already satisfied: pyyaml>=5.1 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (6.0)\r\nRequirement already satisfied: filelock in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (3.8.0)\r\nRequirement already satisfied: regex!=2019.12.17 in /opt/conda/lib/python3.8/site-packages (from transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (2022.9.13)\r\nRequirement already satisfied: typing-extensions>=3.7.4.3 in /opt/conda/lib/python3.8/site-packages (from huggingface-hub<1.0,>=0.11.0->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (4.3.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /opt/conda/lib/python3.8/site-packages (from packaging>=20.0->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (3.0.9)\r\nRequirement already satisfied: charset-normalizer<3,>=2 in /opt/conda/lib/python3.8/site-packages (from requests->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (2.0.12)\r\nRequirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.8/site-packages (from requests->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (3.4)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.8/site-packages (from requests->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (1.26.11)\r\nRequirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.8/site-packages (from requests->transformers==4.26.0->-r /opt/ml/model/code/requirements.txt (line 1)) (2022.9.24)\r\nInstalling collected packages: huggingface-hub, transformers Attempting uninstall: huggingface-hub Found existing installation: huggingface-hub 0.10.0 Uninstalling huggingface-hub-0.10.0: Successfully uninstalled huggingface-hub-0.10.0 Attempting uninstall: transformers Found existing installation: transformers 4.17.0 Uninstalling transformers-4.17.0: Successfully uninstalled transformers-4.17.0\r\nSuccessfully installed huggingface-hub-0.12.0 transformers-4.26.0\r\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\r\n[notice] A new release of pip available: 22.2.2 -> 23.0\r\n[notice] To update, run: pip install --upgrade pip\r\nWarning: MMS is using non-default JVM parameters: -XX:-UseContainerSupport\r\n2023-02-01T15:46:06,090 [INFO ] main com.amazonaws.ml.mms.ModelServer -\r\nMMS Home: /opt/conda/lib/python3.8/site-packages\r\nCurrent directory: /\r\nTemp directory: /home/model-server/tmp\r\nNumber of GPUs: 0\r\nNumber of CPUs: 4\r\nMax heap size: 3461 M\r\nPython executable: /opt/conda/bin/python3.8\r\nConfig file: /etc/sagemaker-mms.properties\r\nInference address: http://0.0.0.0:8080\r\nManagement address: http://0.0.0.0:8080\r\nModel Store: /.sagemaker/mms/models\r\nInitial Models: ALL\r\nLog dir: null\r\nMetrics dir: null\r\nNetty threads: 0\r\nNetty client threads: 0\r\nDefault workers per model: 4\r\nBlacklist Regex: N/A\r\nMaximum Response Size: 6553500\r\nMaximum Request Size: 6553500\r\nPreload model: false\r\nPrefer direct buffer: false\r\n2023-02-01T15:46:06,140 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-9000-google__flan-t5-xxl\r\n2023-02-01T15:46:06,204 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - model_service_worker started with args: --sock-type unix --sock-name /home/model-server/tmp/.mms.sock.9000 --handler sagemaker_huggingface_inference_toolkit.handler_service --model-path /.sagemaker/mms/models/google__flan-t5-xxl --model-name google__flan-t5-xxl --preload-model false --tmp-dir /home/model-server/tmp\r\n2023-02-01T15:46:06,205 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.mms.sock.9000\r\n2023-02-01T15:46:06,205 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - [PID] 47\r\n2023-02-01T15:46:06,206 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - MMS worker started.\r\n2023-02-01T15:46:06,206 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Python runtime: 3.8.10\r\n2023-02-01T15:46:06,206 [INFO ] main com.amazonaws.ml.mms.wlm.ModelManager - Model google__flan-t5-xxl loaded.\r\n2023-02-01T15:46:06,210 [INFO ] main com.amazonaws.ml.mms.ModelServer - Initialize Inference server with: EpollServerSocketChannel.\r\n2023-02-01T15:46:06,218 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000\r\n2023-02-01T15:46:06,218 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000\r\n2023-02-01T15:46:06,219 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000\r\n2023-02-01T15:46:06,226 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.mms.sock.9000\r\n2023-02-01T15:46:06,278 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.\r\n2023-02-01T15:46:06,281 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.\r\n2023-02-01T15:46:06,284 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: 
/home/model-server/tmp/.mms.sock.9000.\r\n2023-02-01T15:46:06,290 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.mms.sock.9000.\r\n2023-02-01T15:46:06,298 [INFO ] main com.amazonaws.ml.mms.ModelServer - Inference API bind to: http://0.0.0.0:8080\r\nModel server started.\r\n2023-02-01T15:46:06,302 [WARN ] pool-3-thread-1 com.amazonaws.ml.mms.metrics.MetricCollector - worker pid is not available yet.\r\n2023-02-01T15:46:08,478 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model google__flan-t5-xxl loaded io_fd=3abd6afffe6261f4-0000001d-00000000-084f36d4c5a81b10-639dfd41\r\n2023-02-01T15:46:08,491 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 2081\r\n2023-02-01T15:46:08,493 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-google__flan-t5-xxl-1\r\n2023-02-01T15:46:08,499 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model google__flan-t5-xxl loaded io_fd=3abd6afffe6261f4-0000001d-00000001-c96df6d4c5a81b10-276a10eb\r\n2023-02-01T15:46:08,500 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 2089\r\n2023-02-01T15:46:08,500 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-google__flan-t5-xxl-3\r\n2023-02-01T15:46:08,512 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model google__flan-t5-xxl loaded io_fd=3abd6afffe6261f4-0000001d-00000004-12e7f154c5a81b12-fe262c46\r\n2023-02-01T15:46:08,512 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 2101\r\n2023-02-01T15:46:08,513 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-google__flan-t5-xxl-4\r\n2023-02-01T15:46:08,561 [INFO ] W-9000-google__flan-t5-xxl-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Model google__flan-t5-xxl loaded io_fd=3abd6afffe6261f4-0000001d-00000003-6582f154c5a81b12-273338b8\r\n2023-02-01T15:46:08,561 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 2150\r\n2023-02-01T15:46:08,561 [WARN ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerLifeCycle - attachIOStreams() threadName=W-google__flan-t5-xxl-2\r\n2023-02-01T15:46:10,450 [INFO ] pool-2-thread-6 ACCESS_LOG - /169.254.178.2:59002 \"GET /ping HTTP/1.1\" 200 7\r\n2023-02-01T15:46:15,412 [INFO ] pool-2-thread-6 ACCESS_LOG - /169.254.178.2:59002 \"GET /ping HTTP/1.1\" 200 0\r\n2023-02-01T15:46:20,411 [INFO ] pool-2-thread-6 ACCESS_LOG - /169.254.178.2:59002 \"GET /ping HTTP/1.1\" 200 0\r\n```\r\n\r\nBut I got the same error when trying to do an inference:\r\n```\r\nbotocore.errorfactory.ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message \"{\r\n \"code\": 400,\r\n \"type\": \"InternalServerException\",\r\n \"message\": \"Could not load model /.sagemaker/mms/models/google__flan-t5-xxl with any of the following classes: (\\u003cclass \\u0027transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM\\u0027\\u003e, \\u003cclass \\u0027transformers.models.t5.modeling_t5.T5ForConditionalGeneration\\u0027\\u003e).\"\r\n}\r\n```\r\n\r\nAWS logs:\r\n```\r\n2023-02-01T15:49:59,831 [INFO ] W-google__flan-t5-xxl-1-stdout 
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Prediction error\r\n2023-02-01T15:49:59,832 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):\r\n2023-02-01T15:49:59,832 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File \"/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py\", line 219, in handle\r\n2023-02-01T15:49:59,832 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - self.initialize(context)\r\n2023-02-01T15:49:59,832 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File \"/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py\", line 77, in initialize\r\n2023-02-01T15:49:59,832 [INFO ] W-9000-google__flan-t5-xxl com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 1\r\n2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - self.model = self.load(self.model_dir)\r\n2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File \"/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py\", line 104, in load\r\n2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - hf_pipeline = get_pipeline(task=os.environ[\"HF_TASK\"], model_dir=model_dir, device=self.device)\r\n2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File \"/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/transformers_utils.py\", line 272, in get_pipeline\r\n2023-02-01T15:49:59,833 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - hf_pipeline = pipeline(task=task, model=model_dir, device=device, **kwargs)\r\n2023-02-01T15:49:59,834 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File \"/opt/conda/lib/python3.8/site-packages/transformers/pipelines/__init__.py\", line 754, in pipeline\r\n2023-02-01T15:49:59,834 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - framework, model = infer_framework_load_model(\r\n2023-02-01T15:49:59,834 [INFO ] W-9000-google__flan-t5-xxl ACCESS_LOG - /169.254.178.2:59002 \"POST /invocations HTTP/1.1\" 400 13\r\n2023-02-01T15:49:59,834 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File \"/opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 266, in infer_framework_load_model\r\n2023-02-01T15:49:59,834 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise ValueError(f\"Could not load model {model} with any of the following classes: {class_tuple}.\")\r\n2023-02-01T15:49:59,835 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ValueError: Could not load model /.sagemaker/mms/models/google__flan-t5-xxl with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>, <class 'transformers.models.t5.modeling_t5.T5ForConditionalGeneration'>).\r\n2023-02-01T15:49:59,835 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -\r\n2023-02-01T15:49:59,835 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - During handling of the above exception, another exception 
occurred:\r\n2023-02-01T15:49:59,835 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -\r\n2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):\r\n2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File \"/opt/conda/lib/python3.8/site-packages/mms/service.py\", line 108, in predict\r\n2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ret = self._entry_point(input_batch, self.context)\r\n2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File \"/opt/conda/lib/python3.8/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py\", line 243, in handle\r\n2023-02-01T15:49:59,836 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise PredictionException(str(e), 400)\r\n2023-02-01T15:49:59,837 [INFO ] W-google__flan-t5-xxl-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: Could not load model /.sagemaker/mms/models/google__flan-t5-xxl with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>, <class 'transformers.models.t5.modeling_t5.T5ForConditionalGeneration'>). : 400\r\n```",
"Hello @valentinboyanov \r\n\r\nI can see in your script that: \r\n\r\n```python\r\nHuggingFaceModel(\r\n transformers_version=\"4.17.0\",\r\n pytorch_version=\"1.10.2\",\r\n py_version=\"py38\",\r\n model_data=\"s3://sagemaker-eu-north-1-***/model.tar.gz\",\r\n env=hub,\r\n role=role,\r\n)\r\n```\r\ncan you update `transformers_version` with the correct value? I suspect this is causing the issue",
"@younesbelkada if I change it, I'm unable to deploy at all:\r\n\r\n```\r\n raise ValueError(\r\nValueError: Unsupported huggingface version: 4.26.0. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.6.1, 4.10.2, 4.11.0, 4.12.3, 4.17.0, 4.6, 4.10, 4.11, 4.12, 4.17.\r\n```\r\n\r\nThis is why I've followed the instructions by [Heiko Hotz (marshmellow77) in this comment](https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/5) to provide a `requirements.txt` file that will let me specify dependencies I want to be installed in the container.",
"@valentinboyanov what is the content for your ` model_data=\"s3://sagemaker-eu-north-1-***/model.tar.gz\"`? Could you please share the folder structure. ",
"@philschmid yes, here it goes:\r\n```\r\n➜ model tree .\r\n.\r\n└── code\r\n └── requirements.txt\r\n\r\n1 directory, 1 file\r\n```\r\n\r\n```\r\n➜ model cat code/requirements.txt \r\ntransformers==4.26.0% \r\n```",
"When you provide a `model_data` key word you also have to include the `inference.py` and the model weights. ",
"@philschmid what should be the contents of the `inference.py` in case of the flan-t5-xl model? Can this be an empty file if I don't intend to change anything from the hub model? There doesn't seem to be such a file included within the [Hugging Face repository](https://huggingface.co/google/flan-t5-xl/tree/main).\r\n\r\n@valentinboyanov I confirm getting the same as well. From the CW logs it seems that `4.17.0` is un-installed and replaced with the latest version specified in the `requirements.txt` file.\r\n\r\n> @younesbelkada if I change it, I'm unable to deploy at all:\r\n> \r\n> ```\r\n> raise ValueError(\r\n> ValueError: Unsupported huggingface version: 4.26.0. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.6.1, 4.10.2, 4.11.0, 4.12.3, 4.17.0, 4.6, 4.10, 4.11, 4.12, 4.17.\r\n> ```\r\n> \r\n> This is why I've followed the instructions by [Heiko Hotz (marshmellow77) in this comment](https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/5) to provide a `requirements.txt` file that will let me specify dependencies I want to be installed in the container.\r\n\r\n",
"I'm having the same `Could not load model error with any of the following classes: AutoModelForSeq2SeqLM and T5ForConditionalGeneration` when using a docker for inference of a `flan-t5-xxl-sharded-fp16` model:\r\n\r\nCode works without Docker, but If I build and run `docker run --gpus all -p 7080:7080 flan-t5-xxl-sharded-fp16:latest`, error is the following:\r\n```\r\n[2023-02-05 21:33:53 +0000] [1] [INFO] Starting gunicorn 20.1.0\r\n[2023-02-05 21:33:53 +0000] [1] [INFO] Listening at: http://0.0.0.0:7080 (1)\r\n[2023-02-05 21:33:53 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker\r\n[2023-02-05 21:33:53 +0000] [7] [INFO] Booting worker with pid: 7\r\n[2023-02-05 21:34:01 +0000] [7] [INFO] Is CUDA available: True\r\n[2023-02-05 21:34:01 +0000] [7] [INFO] CUDA device: NVIDIA A100-SXM4-40GB\r\n[2023-02-05 21:34:01 +0000] [7] [INFO] Loading model\r\n[2023-02-05 21:34:02 +0000] [7] [ERROR] Exception in worker process\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/gunicorn/arbiter.py\", line 589, in spawn_worker\r\n worker.init_process()\r\n File \"/usr/local/lib/python3.9/site-packages/uvicorn/workers.py\", line 66, in init_process\r\n super(UvicornWorker, self).init_process()\r\n File \"/usr/local/lib/python3.9/site-packages/gunicorn/workers/base.py\", line 134, in init_process\r\n self.load_wsgi()\r\n File \"/usr/local/lib/python3.9/site-packages/gunicorn/workers/base.py\", line 146, in load_wsgi\r\n self.wsgi = self.app.wsgi()\r\n File \"/usr/local/lib/python3.9/site-packages/gunicorn/app/base.py\", line 67, in wsgi\r\n self.callable = self.load()\r\n File \"/usr/local/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py\", line 58, in load\r\n return self.load_wsgiapp()\r\n File \"/usr/local/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py\", line 48, in load_wsgiapp\r\n return util.import_app(self.app_uri)\r\n File \"/usr/local/lib/python3.9/site-packages/gunicorn/util.py\", line 359, in import_app\r\n mod = importlib.import_module(module)\r\n File \"/usr/local/lib/python3.9/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\n File \"/app/main.py\", line 29, in <module>\r\n pipe_flan = pipeline(\"text2text-generation\", model=\"../flan-t5-xxl-sharded-fp16\", model_kwargs={\"load_in_8bit\":True, \"device_map\": \"auto\"})\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/pipelines/__init__.py\", line 754, in pipeline\r\n framework, model = infer_framework_load_model(\r\n File \"/usr/local/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 266, in infer_framework_load_model\r\n raise ValueError(f\"Could not load model {model} with any of the following classes: {class_tuple}.\")\r\nValueError: Could not load model ../flan-t5-xxl-sharded-fp16 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>, <class 'transformers.models.t5.modeling_t5.T5ForConditionalGeneration'>).\r\n[2023-02-05 21:34:02 +0000] [7] [INFO] Worker exiting (pid: 
7)\r\n[2023-02-05 21:34:04 +0000] [1] [INFO] Shutting down: Master\r\n[2023-02-05 21:34:04 +0000] [1] [INFO] Reason: Worker failed to boot.\r\n\r\n```\r\nDockerfile is the following:\r\n```\r\nFROM tiangolo/uvicorn-gunicorn-fastapi:python3.9\r\n\r\n# install dependencies\r\nRUN python3 -m pip install --upgrade pip\r\nRUN pip3 install torch==1.13.0 transformers==4.26.0 sentencepiece torchvision torchaudio accelerate==0.15.0 bitsandbytes-cuda113\r\n\r\nCOPY ./app /app\r\nCOPY ./flan-t5-xxl-sharded-fp16/ /flan-t5-xxl-sharded-fp16\r\n\r\nEXPOSE 7080\r\n\r\n# Start the app\r\nCMD [\"gunicorn\", \"-b\", \"0.0.0.0:7080\", \"main:app\",\"--workers\",\"1\",\"--timeout\",\"180\",\"-k\",\"uvicorn.workers.UvicornWorker\"]\r\n```\r\n\r\nThe code of `app/main.py` is the following:\r\n```py\r\nfrom fastapi import FastAPI, Request\r\nfrom fastapi.logger import logger\r\n\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer, T5ForConditionalGeneration \r\n\r\nimport json\r\nimport logging\r\nimport numpy as np\r\nimport os\r\nimport torch\r\n\r\nfrom transformers import pipeline\r\n\r\napp = FastAPI()\r\n\r\ngunicorn_logger = logging.getLogger('gunicorn.error')\r\nlogger.handlers = gunicorn_logger.handlers\r\n\r\nif __name__ != \"main\":\r\n logger.setLevel(gunicorn_logger.level)\r\nelse:\r\n logger.setLevel(logging.INFO)\r\n\r\nlogger.info(f\"Is CUDA available: {torch.cuda.is_available()}\")\r\nlogger.info(f\"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}\")\r\n\r\nlogger.info(\"Loading model\")\r\n\r\n# error is in this line\r\npipe_flan = pipeline(\"text2text-generation\", model=\"../flan-t5-xxl-sharded-fp16\", model_kwargs={\"load_in_8bit\":True, \"device_map\": \"auto\"}) \r\n\r\n# extra code removed\r\n```",
"@philschmid @younesbelkada just wanted to follow up on this.\r\n\r\n> @philschmid what should be the contents of the `inference.py` in case of the flan-t5-xl model? There doesn't seem to be such a file included within the [Hugging Face repository](https://huggingface.co/google/flan-t5-xl/tree/main).\r\n> \r\n> @valentinboyanov I confirm getting the same as well. From the CW logs it seems that `4.17.0` is un-installed and replaced with the latest version specified in the `requirements.txt` file.\r\n> \r\n> > @younesbelkada if I change it, I'm unable to deploy at all:\r\n> > ```\r\n> > raise ValueError(\r\n> > ValueError: Unsupported huggingface version: 4.26.0. You may need to upgrade your SDK version (pip install -U sagemaker) for newer huggingface versions. Supported huggingface version(s): 4.6.1, 4.10.2, 4.11.0, 4.12.3, 4.17.0, 4.6, 4.10, 4.11, 4.12, 4.17.\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > This is why I've followed the instructions by [Heiko Hotz (marshmellow77) in this comment](https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/5) to provide a `requirements.txt` file that will let me specify dependencies I want to be installed in the container.\r\n\r\n",
"@RonLek i am planning to create an example. I ll post it here once it is ready. ",
"@RonLek done: https://www.philschmid.de/deploy-flan-t5-sagemaker",
"This works! Thanks a ton @philschmid for the prompt response :rocket: ",
"@philschmid just curious. Would there be a similar sharded model repo for flan-t5-xl?",
"If you check this blog post: https://www.philschmid.de/deploy-t5-11b There is a code snippet on how to do this, for `t5-11b` https://www.philschmid.de/deploy-t5-11b\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelWithLMHead\r\nfrom huggingface_hub import HfApi\r\n\r\n# load model as float16\r\nmodel = AutoModelWithLMHead.from_pretrained(\"t5-11b\", torch_dtype=torch.float16, low_cpu_mem_usage=True)\r\n# shard model an push to hub\r\nmodel.save_pretrained(\"sharded\", max_shard_size=\"2000MB\")\r\n```",
"Thanks! This worked :fire: ",
"@philschmid thanks for the guidance here. While deploying your solution on SageMaker i noticed that it works great on g5 instances but not on p3 instances( p3.8xlarge). Also, do we know when the the direct deploy from HF hub would work out of the box? \r\nError below - \r\n```\r\nModel fails to load, the reason being that the library bitsandbytes that is required \"The installed version of bitsandbytes was compiled without GPU support. \" on p3 instance and that leads to the below error when you invoke the model-\r\n2023-02-25T01:24:28,714 [INFO ] W-model-3-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: 'NoneType' object has no attribute 'cget_col_row_stats' : 400\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,681
| 1,681
|
NONE
| null |
### System Info
transformers_version==4.17.0
Platform = SageMaker Notebook
python==3.9.0
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Amazon SageMaker deployment script in AWS for flan-t5-xl
```python
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
'HF_MODEL_ID':'google/flan-t5-xl',
'HF_TASK':'text2text-generation'
}
# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
env=hub,
role=role,
)
# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
predictor.predict({
'inputs': "The answer to the universe is"
})
```
Results in
```bash
---------------------------------------------------------------------------
ModelError Traceback (most recent call last)
/tmp/ipykernel_20116/1338286066.py in <cell line: 26>()
24 )
25
---> 26 predictor.predict({
27 'inputs': "The answer to the universe is"
28 })
~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/predictor.py in predict(self, data, initial_args, target_model, target_variant, inference_id)
159 data, initial_args, target_model, target_variant, inference_id
160 )
--> 161 response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
162 return self._handle_response(response)
163
~/anaconda3/envs/python3/lib/python3.10/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
528 )
529 # The "self" in this scope is referring to the BaseClient.
--> 530 return self._make_api_call(operation_name, kwargs)
531
532 _api_call.__name__ = str(py_operation_name)
~/anaconda3/envs/python3/lib/python3.10/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
958 error_code = parsed_response.get("Error", {}).get("Code")
959 error_class = self.exceptions.from_code(error_code)
--> 960 raise error_class(parsed_response, operation_name)
961 else:
962 return parsed_response
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "Could not load model /.sagemaker/mms/models/google__flan-t5-xl with any of the following classes: (\u003cclass \u0027transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM\u0027\u003e, \u003cclass \u0027transformers.models.t5.modeling_t5.T5ForConditionalGeneration\u0027\u003e)."
}
"
```
From [an existing issue](https://github.com/huggingface/transformers/issues/20038), I suspected this might be due to the use of `transformers==4.17.0`; however, when I use the exact same script to deploy the flan-t5-large model, it works without any issues.
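For reference, the workaround discussed in the comment thread is to bypass the SDK version check by shipping your own `model.tar.gz` via `model_data`: the archive must contain the model weights plus a `code/` folder with an `inference.py` handler and a `requirements.txt` pinning the newer `transformers`. A minimal sketch of building such an archive, assuming the weights were already saved locally with `save_pretrained` (all paths illustrative):
```python
import tarfile

# Expected layout of the "model" folder (names illustrative):
#   model/
#   ├── config.json, pytorch_model.bin, ...  # weights from save_pretrained
#   └── code/
#       ├── inference.py                     # custom handler, required with model_data
#       └── requirements.txt                 # e.g. "transformers==4.26.0"
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model", arcname=".")  # put the folder contents at the tarball root
```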
### Expected behavior
The model should get deployed on AWS Sagemaker without any issues.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21402/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21402/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21401
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21401/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21401/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21401/events
|
https://github.com/huggingface/transformers/pull/21401
| 1,566,054,983
|
PR_kwDOCUB6oc5JAWO1
| 21,401
|
Fix some pipeline tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Let's stop adding tests of new models in the pipelines until the metaclass is removed and we have a mixin class that all model testers inherit from (I think that is the plan, right?) \r\n\r\nYes, that's the plan! For now, no new models will be added in pipeline testing (because we need to create and upload the tiny model on the Hub, and I guess no one knows how to do it except me, and I definitely will keep my life easier :-)\r\n\r\n> as we can't add a succession of tests in the pipeline common tests to change behavior for this or that model.\r\n\r\nSure!\r\n\r\n",
"Failing test is irrelevant to this PR.",
"For transparancy: I need to add back the following block in `src/transformers/pipelines/__init__.py`:\r\n```python\r\n # If `model` (instance of `PretrainedModel` instead of `str`) is passed (and/or same for config), while\r\n # `image_processor` or `feature_extractor` is `None`, the loading will fail. This happens particularly for some\r\n # vision tasks when calling `pipeline()` with `model` and only one of the `image_processor` and `feature_extractor`.\r\n # TODO: we need to make `NO_IMAGE_PROCESSOR_TASKS` and `NO_FEATURE_EXTRACTOR_TASKS` more robust to avoid such issue.\r\n # This block is only temporarily to make CI green.\r\n if load_image_processor and load_feature_extractor:\r\n load_feature_extractor = False\r\n```\r\n\r\n The issue comes from the fact of calling `pipeline()` with:\r\n\r\n - a model (not string)\r\n - one of `image_processor` or `feature_extractor` being specified, but another one is `None`\r\n - tasks involving vision models, so both `load_image_processor` and `load_feature_extractor` are `True`\r\n\r\nthen it will fail around https://github.com/huggingface/transformers/blob/c2f623cf53a9a8b2e192135b03ae211ba1ce3695/src/transformers/pipelines/__init__.py#L863\r\n\r\nWithout this change, the following 2 tests just fails\r\n- DocumentQuestionAnsweringPipelineTests::test_pt_LayoutLMv2Config_LayoutLMv2ForQuestionAnswering_LayoutLMv2TokenizerFast_LayoutLMv2ImageProcessor (not tested before)\r\n- tests/pipelines/test_pipelines_image_segmentation.py::ImageSegmentationPipelineTests::test_maskformer (already failed since the PR #20851 one week ago)\r\n\r\n**We need to improve this `pipeline.__init__.py` to make it more robust regarding the feature_extractor/image_processor while we want to keep backward compatibility**\r\n\r\nThis should go in a separate PR though, I will merge this PR as it is, unless you strongly against the changes in the last 2 commits."
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
- change `feature_extractor=...` to `image_processor=...` in `test_pipeline_XXX` files, if `XXX` is a vision task
  - this doesn't affect any backward compatibility - they are just test files
- change `self.feature_extractor(...)` to `self.image_processor(...)` in some vision pipeline files
  - the backward compatibility is ensured by the change in `base.py` (parent class `Pipeline.__init__`); see also the usage sketch after this list:
```python
if self.image_processor is None and self.feature_extractor is not None:
# Backward compatible change, if users called
# ImageSegmentationPipeline(.., feature_extractor=MyFeatureExtractor())
# then we should keep working
self.image_processor = self.feature_extractor
```
- A few other fixes, see review comments
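As a usage sketch of the backward-compatible path above (model and task picked only for illustration; any vision pipeline should behave the same):
```python
from transformers import AutoFeatureExtractor, pipeline

# Passing a feature extractor to a vision pipeline keeps working, because
# Pipeline.__init__ copies it over to image_processor when the latter is None.
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
detector = pipeline(
    "object-detection",
    model="facebook/detr-resnet-50",
    feature_extractor=feature_extractor,
)
```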
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21401/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21401",
"html_url": "https://github.com/huggingface/transformers/pull/21401",
"diff_url": "https://github.com/huggingface/transformers/pull/21401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21401.patch",
"merged_at": 1675361012000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21400
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21400/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21400/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21400/events
|
https://github.com/huggingface/transformers/pull/21400
| 1,565,934,288
|
PR_kwDOCUB6oc5I_727
| 21,400
|
Add Pix2Struct
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada @ArthurZucker 👋 how is this PR going? Do you need some help to get it over the finish line? Happy to collab if helpful.",
"Hi @ankrgyl \r\nThanks so much for proposing your help on this PR! \r\n\r\nI fixed now few tests related to batched generation and addressed most of @ArthurZucker 's comments. The architecture is completely ready to use if someone wants to perform conditional and unconditional image captionning!\r\nI wanted to work on a fine-tuning notebook similar as this one: https://colab.research.google.com/drive/1lbqiSiA0sDF7JDWPeS0tccrM85LloVha?usp=sharing as it boosts quite a lot the usage of the model ! \r\nIMO the things that are left are:\r\n1- Making a notebook for Pix2Struct using the base model (that is currently pushed here: https://huggingface.co/ybelkada/pix2struct-textcaps-base \r\n2- Address the last comments\r\n3- Push the correct conversion script \r\n4- Push the remaining weights (I can do that only after one approval)\r\nIf you want, you can help me on 1, if you have some doubts about your modification you can just run the integration tests: \r\n```bash\r\nRUN_SLOW=1 pytest tests/models/pix2struct/test_modeling_pix2struct.py::Pix2StructIntegrationTest\r\n```\r\nand make sure they pass!\r\n\r\nI am aiming to merge this at most by beginning of next week ! Let me know if you want to help on those, otherwise happy to continue the PR 💪 \r\n",
"It looks like you've got it under control so I'll bow out, but happy to test!",
"I think I have addressed most of the comments! \r\nI also updated the PR description and would love to have a round of review! \r\ncc @amyeroberts @ArthurZucker ",
"Thanks @amyeroberts for the extensive review! Should have addressed most of them and left some open questions\r\nRegarding the new naming `patches`, I am not 100% convinced about that, users needs to see these input as a new paradigm that is equivalent to text tokens (as there are also attention masks in this new input) but applied to images, and I am afraid `patches` will confuse users as the shape of this input would be hard to interpret `bs x seq_len x hidden_dim` (with `hidden_dim=num_channels x patch_width x patch_height`.)",
"As disccused offline, let's stick for `flattened_patches` ! I should have fixed your comments by now and added support for `vqa` models in `Pix2struct` as they require a specific format / way of inferring",
"Thanks a mile for the extensive review! 🚀 So from what I have got from your comment: https://github.com/huggingface/transformers/pull/21400#discussion_r1137690655 I removed the `data_format` argument\r\nWould love to have a last round of review 💪 "
] | 1,675
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20663
Paper: https://arxiv.org/pdf/2210.03347.pdf
Code: https://github.com/google-research/pix2struct
`Pix2Struct` is a series of image-text models that have been fine-tuned on various datasets and tasks.

This integration will offer users a variety of models and potential use cases.
`Pix2Struct` is a model that combines a vision encoder and a text decoder, similar to T5. The method heavily relies on its image processing procedure. The image pre-processing differs from classic Vision Transformers in that it can handle images of variable resolution and therefore keep the aspect ratio of the original image, which seems to be crucial for image understanding.

Therefore I decided to change the current paradigm for obtaining `pixel_values`. The pixel values should now be seen as tokens that are directly processed by the `ImageProcessor`. Hence, I decided to rename `pixel_values` to `pixel_embeds`, as they in fact correspond to the image embeddings. We now obtain the patch embeddings directly from the processor, which is also responsible for computing the pixel embeds attention mask.
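As an illustration of this paradigm, a hedged usage sketch: the checkpoint name (the temporary one pushed for this PR) and the exact keys returned by the processor may differ in the final API.
```python
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

processor = Pix2StructProcessor.from_pretrained("ybelkada/pix2struct-textcaps-base")
model = Pix2StructForConditionalGeneration.from_pretrained("ybelkada/pix2struct-textcaps-base")

url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The processor returns the flattened patches plus their attention mask.
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```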
I will update all the weights (18 in total) after I get one approval.
### TODO
- Fine-tuning notebook
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21400/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21400/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21400",
"html_url": "https://github.com/huggingface/transformers/pull/21400",
"diff_url": "https://github.com/huggingface/transformers/pull/21400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21400.patch",
"merged_at": 1679500433000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21399
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21399/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21399/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21399/events
|
https://github.com/huggingface/transformers/issues/21399
| 1,565,787,003
|
I_kwDOCUB6oc5dVAN7
| 21,399
|
How to resume training with a different learning rate or else TrainingArguments?
|
{
"login": "Bardbo",
"id": 44111034,
"node_id": "MDQ6VXNlcjQ0MTExMDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/44111034?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bardbo",
"html_url": "https://github.com/Bardbo",
"followers_url": "https://api.github.com/users/Bardbo/followers",
"following_url": "https://api.github.com/users/Bardbo/following{/other_user}",
"gists_url": "https://api.github.com/users/Bardbo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bardbo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bardbo/subscriptions",
"organizations_url": "https://api.github.com/users/Bardbo/orgs",
"repos_url": "https://api.github.com/users/Bardbo/repos",
"events_url": "https://api.github.com/users/Bardbo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bardbo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for surch questions as we keep issues for bugs and feature requests only.The resume training functionality is intended in case of instance crash and you should use the same hyperparameters.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,678
| 1,678
|
NONE
| null |
I tried the following settings, but they get overridden by the learning rate stored in the checkpoint:
```
training_args = TrainingArguments(
    learning_rate=5e-4,  # larger than the previous 5e-5
)
...
trainer.train(resume_from_checkpoint=True)
```
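For context, `resume_from_checkpoint=True` restores the optimizer and scheduler state, so the learning rate stored in the checkpoint wins. A hedged workaround sketch: reload only the model weights from the checkpoint and start a fresh run, reusing your existing datasets (checkpoint path illustrative):
```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Weights only; the fresh Trainer builds a new optimizer/scheduler that
# honors the new learning_rate (no optimizer state is resumed).
model = AutoModelForSequenceClassification.from_pretrained("output/checkpoint-500")
training_args = TrainingArguments(output_dir="output_v2", learning_rate=5e-4)
# train_dataset defined as in the original run
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```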
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21399/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21398
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21398/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21398/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21398/events
|
https://github.com/huggingface/transformers/pull/21398
| 1,565,688,959
|
PR_kwDOCUB6oc5I_Gk2
| 21,398
|
Fix the issue of using only inputs_embeds in convbert model
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
Fixes #21395
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21398/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21398",
"html_url": "https://github.com/huggingface/transformers/pull/21398",
"diff_url": "https://github.com/huggingface/transformers/pull/21398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21398.patch",
"merged_at": 1675262845000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21397
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21397/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21397/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21397/events
|
https://github.com/huggingface/transformers/pull/21397
| 1,565,245,260
|
PR_kwDOCUB6oc5I9nOg
| 21,397
|
Parallelize LongT5
|
{
"login": "JamesDeAntonis",
"id": 33379057,
"node_id": "MDQ6VXNlcjMzMzc5MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesDeAntonis",
"html_url": "https://github.com/JamesDeAntonis",
"followers_url": "https://api.github.com/users/JamesDeAntonis/followers",
"following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions",
"organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs",
"repos_url": "https://api.github.com/users/JamesDeAntonis/repos",
"events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, just pass along `device_map=\"auto\"` or `device_map=\"balanced\"` in your call to `from_pretrained` to have the model be parallelized. It will work for training and inference.",
"Oh sweet! Didn't know about this. I just tried training with the longt5 3b model using accelerate and it didn't work well; GPU0 got most of the workload and the training run crashed quickly. I tried both \"auto\" and \"balanced\". If I use my code it works. I realize that I could specify my own device map, but that's pretty tedious. Is there a better way to debug this? Thanks!"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds parallelization to LongT5
Fixes [#21396](https://github.com/huggingface/transformers/issues/21396)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21397/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21397",
"html_url": "https://github.com/huggingface/transformers/pull/21397",
"diff_url": "https://github.com/huggingface/transformers/pull/21397.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21397.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21396
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21396/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21396/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21396/events
|
https://github.com/huggingface/transformers/issues/21396
| 1,565,237,724
|
I_kwDOCUB6oc5dS6Hc
| 21,396
|
Parallelize LongT5
|
{
"login": "JamesDeAntonis",
"id": 33379057,
"node_id": "MDQ6VXNlcjMzMzc5MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesDeAntonis",
"html_url": "https://github.com/JamesDeAntonis",
"followers_url": "https://api.github.com/users/JamesDeAntonis/followers",
"following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions",
"organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs",
"repos_url": "https://api.github.com/users/JamesDeAntonis/repos",
"events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nWe've just deprecated the parallelize API as it can now be done using the `from_pretrained` method.\r\n\r\nSee https://github.com/huggingface/transformers/pull/21448"
] | 1,675
| 1,675
| 1,675
|
CONTRIBUTOR
| null |
### Feature request
Similar to regular T5, it'd be nice if LongT5 had parallelization support
### Motivation
This allows for training larger models / input sizes by using several GPUs.
### Your contribution
I have a pull request ready for this
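For reference, the approach the maintainers ended up recommending instead of a `parallelize()` API (see the comments and #21448) is to shard the model across GPUs at load time; a minimal sketch, assuming `accelerate` is installed:
```python
from transformers import AutoModelForSeq2SeqLM

# device_map="auto" (or "balanced") lets accelerate split the layers
# across all visible GPUs for both training and inference.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/long-t5-tglobal-xl", device_map="auto"
)
```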
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21396/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21395
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21395/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21395/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21395/events
|
https://github.com/huggingface/transformers/issues/21395
| 1,565,144,183
|
I_kwDOCUB6oc5dSjR3
| 21,395
|
UnboundLocalError: local variable 'seq_length' referenced before assignment
|
{
"login": "zhuzihan728",
"id": 55835587,
"node_id": "MDQ6VXNlcjU1ODM1NTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/55835587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuzihan728",
"html_url": "https://github.com/zhuzihan728",
"followers_url": "https://api.github.com/users/zhuzihan728/followers",
"following_url": "https://api.github.com/users/zhuzihan728/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuzihan728/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuzihan728/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuzihan728/subscriptions",
"organizations_url": "https://api.github.com/users/zhuzihan728/orgs",
"repos_url": "https://api.github.com/users/zhuzihan728/repos",
"events_url": "https://api.github.com/users/zhuzihan728/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuzihan728/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think it might be a bug, request you to post a sample script of your usage to reproduce the error.",
"> I think it might be a bug, request you to post a sample script of your usage to reproduce the error.\r\n\r\n```\r\nimport torch\r\nfrom transformers import ConvBertConfig, ConvBertForTokenClassification\r\nembeddings = torch.tensor([1])\r\nmask = torch.tensor([1])\r\nconvbert_model_config = ConvBertConfig()\r\nconvbert_model = ConvBertForTokenClassification(convbert_model_config)\r\noutputs = convbert_model(inputs_embeds=embeddings, attention_mask=mask)\r\n```",
"Thanks for providing a reproducing script and thanks @raghavanone for jumping on this. Your fix looks good! "
] | 1,675
| 1,675
| 1,675
|
NONE
| null |
### System Info
Hi, I am using the `ConvBertForTokenClassification` model in models.convbert and encountered the bug when passing only `inputs_embeds` to `forward()`.
The traceback points to line 833 in modeling_convbert.py:
```
if token_type_ids is None:
if hasattr(self.embeddings, "token_type_ids"):
buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
```
`seq_length` is unassigned there.
I noticed that just above this, in the following piece of code:
```
elif input_ids is not None:
input_shape = input_ids.size()
batch_size, seq_length = input_shape
elif inputs_embeds is not None:
input_shape = inputs_embeds.size()[:-1]
```
`seq_length` is not assigned if the program enters the `elif inputs_embeds is not None` branch.
I am not sure whether `batch_size, seq_length = input_shape` is simply missing for the `inputs_embeds` case, or whether I am not using the model correctly.
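The fix merged in #21398 presumably amounts to unpacking `seq_length` in that branch as well; a minimal sketch of the corrected excerpt (same fragment as quoted above, not standalone code):
```python
elif inputs_embeds is not None:
    input_shape = inputs_embeds.size()[:-1]
    batch_size, seq_length = input_shape  # now defined on the inputs_embeds path too
```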
### Who can help?
text models: @ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
passing only `inputs_embeds` and `attention_mask` to `ConvBertForTokenClassification` model.
### Expected behavior
There should be no error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21395/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21394
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21394/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21394/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21394/events
|
https://github.com/huggingface/transformers/issues/21394
| 1,565,102,411
|
I_kwDOCUB6oc5dSZFL
| 21,394
|
[WHISPER] - ValueError: Malformed soundfile
|
{
"login": "altryne",
"id": 463317,
"node_id": "MDQ6VXNlcjQ2MzMxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/463317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/altryne",
"html_url": "https://github.com/altryne",
"followers_url": "https://api.github.com/users/altryne/followers",
"following_url": "https://api.github.com/users/altryne/following{/other_user}",
"gists_url": "https://api.github.com/users/altryne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/altryne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/altryne/subscriptions",
"organizations_url": "https://api.github.com/users/altryne/orgs",
"repos_url": "https://api.github.com/users/altryne/repos",
"events_url": "https://api.github.com/users/altryne/events{/privacy}",
"received_events_url": "https://api.github.com/users/altryne/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @altryne! Indeed, it should be possible to load up any audio file using transformers `pipeline` alone. Could you share the full traceback of the error? This will help in pinpointing its exact nature! It would be great if you are able to share the `.mp3` file! My email is `sanchit@huggingface.co`. Thanks!",
"Thanks Sanchit! \r\n\r\nShared! (it's a video file, but whisper doesn't mind) ",
"any news on this case ?",
"> (it's a video file, but whisper doesn't mind)\r\n\r\nIt was an mp4 video file that needed to be converted to an mp3 or wav audio file first. HF's audio pipeline works with any audio file format, but not video.",
"Thanks for the clarification",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
### System Info
I had a few files with this error.
In @openai/whisper the same files worked, so I figured this was a bug.
Digging in a little, this has to do with some ffmpeg parameters inside `ffmpeg_read`:
`File "/usr/local/lib/python3.9/site-packages/transformers/pipelines/audio_utils.py", line 41, in ffmpeg_read`
Instead of passing the file path directly:
```
file_name = "audio.mp3"
output = pipe(file_name,
generate_kwargs=props,
return_timestamps=True,
chunk_length_s=30,
stride_length_s=[6, 0],
batch_size=32,
ignore_warning=True)
```
The workaround I used is to load the audio with `whisper.load_audio` and pass that into the pipeline (until this is fixed):
```
audio = whisper.load_audio(source_file)
output = pipe(audio,
generate_kwargs=props,
return_timestamps=True,
chunk_length_s=30,
stride_length_s=[6, 0],
batch_size=32,
ignore_warning=True)
```
This does add a whisper dependency, but it doesn't load whisper's model; it just uses a more up-to-date `load_audio` than the one in transformers.
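An alternative that avoids the whisper dependency entirely is to convert the file to plain audio with ffmpeg first, since the comments above note the failing input was actually an mp4 video (file names illustrative, reusing `pipe` from above):
```python
import subprocess

# Convert the container to 16 kHz mono WAV, then hand the path to the pipeline.
subprocess.run(
    ["ffmpeg", "-y", "-i", "video.mp4", "-ar", "16000", "-ac", "1", "audio.wav"],
    check=True,
)
output = pipe("audio.wav", return_timestamps=True, chunk_length_s=30)
```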
### Who can help?
@ArthurZucker @sanchit-gandhi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The only file I can reproduce this on is a video of my kid, so I can share via DM, don't want to paste the link here.
### Expected behavior
File is valid, and whisper runs properly
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21394/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21394/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21393
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21393/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21393/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21393/events
|
https://github.com/huggingface/transformers/pull/21393
| 1,564,828,717
|
PR_kwDOCUB6oc5I8Ob4
| 21,393
|
Moved LiLT under multimodal models in TOC
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,676
| 1,675
|
CONTRIBUTOR
| null |
LiLT was listed under text models; however, it should be listed under multimodal models. This PR fixes that.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21393/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21393",
"html_url": "https://github.com/huggingface/transformers/pull/21393",
"diff_url": "https://github.com/huggingface/transformers/pull/21393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21393.patch",
"merged_at": 1675256581000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21392
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21392/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21392/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21392/events
|
https://github.com/huggingface/transformers/pull/21392
| 1,564,805,206
|
PR_kwDOCUB6oc5I8Jdi
| 21,392
|
Remove more unused attributes in config classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Remove another set of unused attributes in config classes.
There are still 20-30 things to check; I will probably open the PR with the new test first and merge it (skipping some failing cases), then continue cleaning these up later.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21392/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21392",
"html_url": "https://github.com/huggingface/transformers/pull/21392",
"diff_url": "https://github.com/huggingface/transformers/pull/21392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21392.patch",
"merged_at": 1675428076000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21391
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21391/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21391/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21391/events
|
https://github.com/huggingface/transformers/issues/21391
| 1,564,751,179
|
I_kwDOCUB6oc5dRDVL
| 21,391
|
T5/Flan-T5 text generation with `load_in_8bit=True` gives error `expected scalar type Float but found Half`
|
{
"login": "steve-marmalade",
"id": 85196623,
"node_id": "MDQ6VXNlcjg1MTk2NjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/85196623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steve-marmalade",
"html_url": "https://github.com/steve-marmalade",
"followers_url": "https://api.github.com/users/steve-marmalade/followers",
"following_url": "https://api.github.com/users/steve-marmalade/following{/other_user}",
"gists_url": "https://api.github.com/users/steve-marmalade/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steve-marmalade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steve-marmalade/subscriptions",
"organizations_url": "https://api.github.com/users/steve-marmalade/orgs",
"repos_url": "https://api.github.com/users/steve-marmalade/repos",
"events_url": "https://api.github.com/users/steve-marmalade/events{/privacy}",
"received_events_url": "https://api.github.com/users/steve-marmalade/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @steve-marmalade \r\nThanks for the issue and your interest in 8bit models\r\nThis issue has been flagged in https://github.com/huggingface/transformers/pull/21281 and fixed :-) \r\nPlease use the `main` branch of `transformers` - I ran your script on the `main` branch and it worked fine\r\n`pip install git+https://github.com/huggingface/transformers.git`\r\nMaybe worth it to make a patch release @sgugger as this issue has been also flagged internally? ",
"Will include the fix in the next patch release (probably tomorrow).",
"Sounds good thank you! ",
"Thanks very much for the quick response @younesbelkada !\r\n\r\nI just tested again to make sure, and am still seeing the issue even on the `main` branch of `transformers` (I see the fix referenced in that issue in the `modeling_t5.py` file in my environment). I will double check my environment to ensure I haven't made a mistake somewhere, but wanted to note that I also see `apex` and `accelerate` in the `Traceback` -- could there be any interaction there?",
"You are right, the issue is be related to `apex`\r\nI just installed `apex` from source and encountered the issue you are describing \r\nHowever I get the same issue even without 8-bit:\r\n```python\r\nimport torch\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"google/flan-t5-base\")\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"google/flan-t5-base\", device_map=\"auto\", torch_dtype=torch.float16)\r\n\r\ninput_text = \"translate English to German: How old are you?\"\r\ninput_ids = tokenizer(input_text, return_tensors=\"pt\").input_ids.to(\"cuda\")\r\n\r\noutputs = model.generate(input_ids)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\nThis is because the LayerNorm is replace by apex's LayerNorm in case you have `apex` installed. Is having `apex` crucial in your case? I can investigate this a bit more meanwhile! \r\n\r\nAlternatively, can you try the snippet below : \r\n```python\r\n\r\nimport torch\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\n\r\nT5ForConditionalGeneration._keep_in_fp32_modules = None\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"google/flan-t5-base\")\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"google/flan-t5-base\", device_map=\"auto\", torch_dtype=torch.float16)\r\n\r\ninput_text = \"translate English to German: How old are you?\"\r\ninput_ids = tokenizer(input_text, return_tensors=\"pt\").input_ids.to(\"cuda\")\r\n\r\noutputs = model.generate(input_ids)\r\nprint(tokenizer.decode(outputs[0]))\r\n```",
"Ok, I was able to reproduce the error once again by running the NVIDIA container `nvcr.io/nvidia/pytorch:22.12-py3` (which includes apex) and then the following:\r\n\r\n```bash\r\npip install sentencepiece accelerate bitsandbytes\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\nAnd then the above python snippet.\r\n\r\nUninstalling apex resolves the crash. Trying to build it from source now to see whether that helps.",
"To answer your questions:\r\n\r\n1. I can confirm that `model = T5ForConditionalGeneration.from_pretrained(\"google/flan-t5-base\", device_map=\"auto\", torch_dtype=torch.float16)` fails with the same error. I had previously tried `model = T5ForConditionalGeneration.from_pretrained(\"google/flan-t5-base\", device_map=\"auto\")`, which works _and_ translates the input more or less correctly (but I assume uses more GPU memory than the other approaches).\r\n2. Running the second snippet you shared with `T5ForConditionalGeneration._keep_in_fp32_modules = None` does not crash, but the input is not translated (it just repeats back \"How old are you?\").\r\n\r\n> Is having apex crucial in your case?\r\n\r\nNo, not crucial. I am not an expert here, but thought that running the NVIDIA images (with apex) would improve inference efficiency on the A100, which is definitely nice to have if true.\r\n\r\n> I can investigate this a bit more meanwhile!\r\n\r\nThank you!\r\n\r\n",
"Actually, @younesbelkada I'd be curious to get your opinion on `apex` -- is it your impression that it speeds up training and/or inference significantly? From a quick scan of the [README](https://github.com/NVIDIA/apex) it looks like many of the features (aside from the fused layers that are causing the problem in this Issue) are already integrated into PyTorch so maybe it's not worth the hassle to get it working?",
"@steve-marmalade\r\nI did a bit of testing with flan-t5-xl with Apex and without (with float32) and observed a small approx ~5% inference speed improvement with Apex.",
"I'm working in the Docker image `nvcr.io/nvidia/pytorch:22.10-py3` and encountered this error. As suggested by @steve-marmalade, the error disappeared after `pip uninstall apex`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,675
| 1,679
| 1,679
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.14.0a0+410ce96 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Start a container with the latest [NVIDIA PyTorch Docker Image](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-12.html#rel-22-12) and an A100 GPU
2. Install the latest `transformers` from this github repo
3. Run the snippet from [the official example](https://huggingface.co/google/flan-t5-base)
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
Throws
```
RuntimeError Traceback (most recent call last)
Cell In[23], line 9
6 input_text = "translate English to German: How old are you?"
7 input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
----> 9 outputs = model.generate(input_ids)
10 print(tokenizer.decode(outputs[0]))
File /usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs)
24 @functools.wraps(func)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
File /usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py:1255, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)
1247 logger.warning(
1248 "A decoder-only architecture is being used, but right-padding was detected! For correct "
1249 "generation results, please set `padding_side='left'` when initializing the tokenizer."
1250 )
1252 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs:
1253 # if model is encoder decoder encoder_outputs are created
1254 # and added to `model_kwargs`
-> 1255 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
1256 inputs_tensor, model_kwargs, model_input_name
1257 )
1259 # 5. Prepare `input_ids` which will be used for auto-regressive generation
1260 if self.config.is_encoder_decoder:
File /usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py:617, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)
615 encoder_kwargs["return_dict"] = True
616 encoder_kwargs[model_input_name] = inputs_tensor
--> 617 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
619 return model_kwargs
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1423, in Module._call_impl(self, *input, **kwargs)
1418 # If we don't have any hooks, we want to skip the rest of the logic in
1419 # this function, and just call forward.
1420 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1421 or _global_backward_pre_hooks or _global_backward_hooks
1422 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1423 return forward_call(*input, **kwargs)
1424 # Do not call functions when jit is used
1425 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:158, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
156 output = old_forward(*args, **kwargs)
157 else:
--> 158 output = old_forward(*args, **kwargs)
159 return module._hf_hook.post_forward(module, output)
File /usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py:1055, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
1042 layer_outputs = checkpoint(
1043 create_custom_forward(layer_module),
1044 hidden_states,
(...)
1052 None, # past_key_value is always None with gradient checkpointing
1053 )
1054 else:
-> 1055 layer_outputs = layer_module(
1056 hidden_states,
1057 attention_mask=extended_attention_mask,
1058 position_bias=position_bias,
1059 encoder_hidden_states=encoder_hidden_states,
1060 encoder_attention_mask=encoder_extended_attention_mask,
1061 encoder_decoder_position_bias=encoder_decoder_position_bias,
1062 layer_head_mask=layer_head_mask,
1063 cross_attn_layer_head_mask=cross_attn_layer_head_mask,
1064 past_key_value=past_key_value,
1065 use_cache=use_cache,
1066 output_attentions=output_attentions,
1067 )
1069 # layer_outputs is a tuple with:
1070 # hidden-states, key-value-states, (self-attention position bias), (self-attention weights), (cross-attention position bias), (cross-attention weights)
1071 if use_cache is False:
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1423, in Module._call_impl(self, *input, **kwargs)
1418 # If we don't have any hooks, we want to skip the rest of the logic in
1419 # this function, and just call forward.
1420 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1421 or _global_backward_pre_hooks or _global_backward_hooks
1422 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1423 return forward_call(*input, **kwargs)
1424 # Do not call functions when jit is used
1425 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:158, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
156 output = old_forward(*args, **kwargs)
157 else:
--> 158 output = old_forward(*args, **kwargs)
159 return module._hf_hook.post_forward(module, output)
File /usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py:687, in T5Block.forward(self, hidden_states, attention_mask, position_bias, encoder_hidden_states, encoder_attention_mask, encoder_decoder_position_bias, layer_head_mask, cross_attn_layer_head_mask, past_key_value, use_cache, output_attentions, return_dict)
684 else:
685 self_attn_past_key_value, cross_attn_past_key_value = None, None
--> 687 self_attention_outputs = self.layer[0](
688 hidden_states,
689 attention_mask=attention_mask,
690 position_bias=position_bias,
691 layer_head_mask=layer_head_mask,
692 past_key_value=self_attn_past_key_value,
693 use_cache=use_cache,
694 output_attentions=output_attentions,
695 )
696 hidden_states, present_key_value_state = self_attention_outputs[:2]
697 attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs and relative position weights
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1423, in Module._call_impl(self, *input, **kwargs)
1418 # If we don't have any hooks, we want to skip the rest of the logic in
1419 # this function, and just call forward.
1420 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1421 or _global_backward_pre_hooks or _global_backward_hooks
1422 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1423 return forward_call(*input, **kwargs)
1424 # Do not call functions when jit is used
1425 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:158, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
156 output = old_forward(*args, **kwargs)
157 else:
--> 158 output = old_forward(*args, **kwargs)
159 return module._hf_hook.post_forward(module, output)
File /usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py:592, in T5LayerSelfAttention.forward(self, hidden_states, attention_mask, position_bias, layer_head_mask, past_key_value, use_cache, output_attentions)
582 def forward(
583 self,
584 hidden_states,
(...)
590 output_attentions=False,
591 ):
--> 592 normed_hidden_states = self.layer_norm(hidden_states)
593 attention_output = self.SelfAttention(
594 normed_hidden_states,
595 mask=attention_mask,
(...)
600 output_attentions=output_attentions,
601 )
602 hidden_states = hidden_states + self.dropout(attention_output[0])
File /usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1423, in Module._call_impl(self, *input, **kwargs)
1418 # If we don't have any hooks, we want to skip the rest of the logic in
1419 # this function, and just call forward.
1420 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1421 or _global_backward_pre_hooks or _global_backward_hooks
1422 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1423 return forward_call(*input, **kwargs)
1424 # Do not call functions when jit is used
1425 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.8/dist-packages/accelerate/hooks.py:158, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)
156 output = old_forward(*args, **kwargs)
157 else:
--> 158 output = old_forward(*args, **kwargs)
159 return module._hf_hook.post_forward(module, output)
File /usr/local/lib/python3.8/dist-packages/apex/normalization/fused_layer_norm.py:386, in FusedRMSNorm.forward(self, input)
383 return manual_rms_norm(input, self.normalized_shape, self.weight, self.eps)
385 if self.elementwise_affine:
--> 386 return fused_rms_norm_affine(input, self.weight, self.normalized_shape, self.eps)
387 else:
388 return fused_rms_norm(input, self.normalized_shape, self.eps)
File /usr/local/lib/python3.8/dist-packages/apex/normalization/fused_layer_norm.py:189, in fused_rms_norm_affine(input, weight, normalized_shape, eps)
187 args = _cast_if_autocast_enabled(input, weight, normalized_shape, eps)
188 with torch.cuda.amp.autocast(enabled=False):
--> 189 return FusedRMSNormAffineFunction.apply(*args)
File /usr/local/lib/python3.8/dist-packages/apex/normalization/fused_layer_norm.py:69, in FusedRMSNormAffineFunction.forward(ctx, input, weight, normalized_shape, eps)
67 input_ = input.contiguous()
68 weight_ = weight.contiguous()
---> 69 output, invvar = fused_layer_norm_cuda.rms_forward_affine(
70 input_, ctx.normalized_shape, weight_, ctx.eps)
71 ctx.save_for_backward(input_, weight_, invvar)
72 return output
RuntimeError: expected scalar type Float but found Half
```
### Expected behavior
The model to generate a translation of the input
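For reference on why uninstalling apex helps (as noted in the comments above): a rough paraphrase of the plain `T5LayerNorm` that `transformers` falls back to when apex is *not* installed (sketched from `modeling_t5.py`; details may differ between versions). It accumulates the variance in fp32 and only casts back when the weights are half precision, so fp16 activations with fp32 weights work, whereas apex's fused `fused_rms_norm_affine` kernel requires matching dtypes and raises the error above:

```python
import torch
from torch import nn

class T5LayerNorm(nn.Module):
    """Rough sketch of transformers' RMS-style T5 layer norm (no mean subtraction, no bias)."""

    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        # Accumulate the variance in fp32 regardless of the input dtype.
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        # Cast back only when the weights are half precision.
        if self.weight.dtype in (torch.float16, torch.bfloat16):
            hidden_states = hidden_states.to(self.weight.dtype)
        return self.weight * hidden_states

# fp16 activations with fp32 weights are fine here (the output is upcast to fp32):
print(T5LayerNorm(8)(torch.randn(2, 8, dtype=torch.float16)).dtype)
```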
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21391/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21390
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21390/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21390/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21390/events
|
https://github.com/huggingface/transformers/pull/21390
| 1,564,644,824
|
PR_kwDOCUB6oc5I7nQg
| 21,390
|
Skip batches fast with accelerate
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for implementing this, Sylvain,\r\n\r\nIs this PR going to sit for a bit until the new Accelerate comes out? \r\n\r\nCurrently I have to work on some other urgent things and SLURM makes it hard to quickly test things, but would be happy to test once I get the opportunity. ",
"The new release is just out actually, so this should be merged as soon as positively reviewed :-)",
"ok, great! congrats!\r\n\r\nlet me then just do a quick test on my desktop.",
"Hmm, on a small test it worked fine, but I see it fails on the real training. It appears to be restarting on resume rather than fast-forwarding. Perhaps when I did the small test I only checked that it printed the fast-forwarding message and thought it was doing the right thing.\r\n\r\nHere is the TB:\r\n\r\n\r\n\r\nThe straight line is a similar training before this change, the restarting one is after. So as you can see it restarts the iteration counter rather than continuing (it's hard to tell what was done to the data). Both trainings were run in 2 parts each with resume due to the slurm environment.\r\n\r\nI was training with `--log_level warning` so didn't get the info log.\r\n\r\nAny thoughts to what might have gone wrong?\r\n\r\nThe example `run_clm.py` script that was running with this config: https://github.com/huggingface/m4/pull/922/files#diff-27452e2e8d112cbfd59f7900ef3b39dc35a4a0faf2d967fd86e399fe7ccb1ba2R192-R217\r\n\r\nDuring the problematic run I used `accelerate==0.16.0` and `transformers@main` (from Feb 1)\r\nThe good one was `accelerate==0.15.0` and some recent `transformers`",
"On the other hand it finished training at the same iteration as before, so the dataloader issues `StopIteration` at the same iteration as before.\r\n\r\nwhich means some accounting was off - that is epoch isn't calculated correctly.",
"The training should finish at the same loss (this is tested with and without randomness in the model for small training), so normally you're good with the data. I think the problem lies with [this line](https://github.com/huggingface/transformers/blob/182afb7dc6f40aea5f5bb41710cb5207d187b022/src/transformers/trainer.py#L1909) which uses the `step` variable which now goes from 0 to len(data_loader) - num_batches_skipped (instead of num_batch_skipped to len(data_loader) before my PR). Will push a fix today or tomorrow!",
"super! thank you, Sylvain.",
"Should be fixed by the PR mentioned above."
] | 1,675
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
This PR uses the latest release of `Accelerate` to quickly skip batches when resuming training (it's only going to be quicker for a regular dataset; iterable datasets will still require a manual pass through the first batches).
Note that the RNG seeds can't be reloaded before we have started iterating the data loader, because the random sampler used by the data loader needs to use the seed at this stage, so there is a small code path to load them after the iteration has started.
cc @stas00 if you want to experiment (needs to use Accelerate main until v0.16 is out).
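For anyone experimenting, a minimal standalone sketch of the Accelerate utility this PR builds on (the toy dataset and batch count are made up for illustration; this is not the exact Trainer integration):

```python
from accelerate import skip_first_batches
from torch.utils.data import DataLoader

# Toy data loader standing in for a real training loader.
dataloader = DataLoader(list(range(100)), batch_size=10)

# Resume as if the first 4 batches of this epoch were already consumed;
# iteration starts directly at the 5th batch instead of replaying 1-4.
for batch in skip_first_batches(dataloader, num_batches=4):
    print(batch)
```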
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21390/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21390/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21390",
"html_url": "https://github.com/huggingface/transformers/pull/21390",
"diff_url": "https://github.com/huggingface/transformers/pull/21390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21390.patch",
"merged_at": 1675264925000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21389
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21389/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21389/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21389/events
|
https://github.com/huggingface/transformers/pull/21389
| 1,564,513,531
|
PR_kwDOCUB6oc5I7K-0
| 21,389
|
Generate: fix TF XLA tests on models with `max_position_embeddings` or `max_target_positions`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,675
| 1,675
| 1,675
|
MEMBER
| null |
# What does this PR do?
Extracted from #20901 -- the lines added in this PR were incorrectly removed [here](https://github.com/huggingface/transformers/commit/0f78529f982eceb79c5855d0466c287ec8a18df1), causing some XLA tests to fail.
These changes fix 8 slow TF tests on `test_xla_generate_slow`. They were also approved in the PR linked above, which was redone as part of the discussion (and closed).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21389/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21389",
"html_url": "https://github.com/huggingface/transformers/pull/21389",
"diff_url": "https://github.com/huggingface/transformers/pull/21389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21389.patch",
"merged_at": 1675180174000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21388
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21388/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21388/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21388/events
|
https://github.com/huggingface/transformers/pull/21388
| 1,564,431,115
|
PR_kwDOCUB6oc5I65HA
| 21,388
|
Added: links from model docs to respective model checkpoints on Hub
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> Note that we're not changing the copyright years of the files: they are copyrighted from the year they were created.\r\n\r\nI can revert them back, but I'd love to know why?",
"Because there are many many many files in the library ;-) I also think it is bad practice to remove the year of the creation of the file, so if update there was, it would need to be something like 2020-20203 (for a file created in 2020), but I don't think the update is necessary at all.\r\n\r\n@CarlosMFerr might have more insight on this!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21388). All of your documentation changes will be reflected on that endpoint.",
"Since model summary rework, this PR is no longer relevant. "
] | 1,675
| 1,677
| 1,676
|
CONTRIBUTOR
| null |
To aid navigation and discoverability, this PR adds a link from each model's docs to the corresponding model checkpoints on the Hub (one link per model, in the format `https://huggingface.co/models?sort=downloads&search=YOSO`).
A couple of maintenance changes are also included (copyright update and removal of obsolete disclaimers).
UPD: reverted copyright to the original date
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21388/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21388",
"html_url": "https://github.com/huggingface/transformers/pull/21388",
"diff_url": "https://github.com/huggingface/transformers/pull/21388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21388.patch",
"merged_at": null
}
|