Dataset schema (one record per GitHub issue/PR; ⌀ marks nullable columns):

| column | dtype | notes |
|---|---|---|
| url | string | length 62-66 |
| repository_url | string | 1 distinct value |
| labels_url | string | length 76-80 |
| comments_url | string | length 71-75 |
| events_url | string | length 69-73 |
| html_url | string | length 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | length 18-32 |
| number | int64 | 1-29.2k |
| title | string | length 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 distinct values |
| locked | bool | 2 distinct values |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k, nullable (⌀) |
| author_association | string | 4 distinct values |
| active_lock_reason | string | 2 distinct values |
| body | string | length 0-234k, nullable (⌀) |
| reactions | dict | |
| timeline_url | string | length 71-75 |
| state_reason | string | 3 distinct values |
| draft | bool | 2 distinct values |
| pull_request | dict | |

The records below list these fields in this order, delimited by `|`.
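As a minimal reading sketch, assuming this dump was exported from a Hub dataset repository (the repo id below is a placeholder, not taken from this dump), records with this schema can be inspected with the `datasets` library:

```python
from datasets import load_dataset

# Placeholder repo id; substitute the dataset this dump was exported from.
ds = load_dataset("your-namespace/transformers-github-issues", split="train")
print(ds.features)     # column names and dtypes, matching the schema above
print(ds[0]["title"])  # title of the first record
```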
https://api.github.com/repos/huggingface/transformers/issues/18174
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18174/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18174/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18174/events
|
https://github.com/huggingface/transformers/issues/18174
| 1,307,626,599
|
I_kwDOCUB6oc5N8Mxn
| 18,174
|
NLLB model file for the 600M model
|
{
"login": "Olubayode",
"id": 76165310,
"node_id": "MDQ6VXNlcjc2MTY1MzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/76165310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Olubayode",
"html_url": "https://github.com/Olubayode",
"followers_url": "https://api.github.com/users/Olubayode/followers",
"following_url": "https://api.github.com/users/Olubayode/following{/other_user}",
"gists_url": "https://api.github.com/users/Olubayode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Olubayode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Olubayode/subscriptions",
"organizations_url": "https://api.github.com/users/Olubayode/orgs",
"repos_url": "https://api.github.com/users/Olubayode/repos",
"events_url": "https://api.github.com/users/Olubayode/events{/privacy}",
"received_events_url": "https://api.github.com/users/Olubayode/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nThe 600 million parameter model can be found here: https://huggingface.co/facebook/nllb-200-distilled-600M.\r\n\r\nThe weights of the model can be found in the \"files and versions\" tab (check the \"pytorch_model.bin\" file): https://huggingface.co/facebook/nllb-200-distilled-600M/tree/main.",
"Closing this issue as I feel like this has been answered, feel free to reopen."
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
1. Where can I locate the MODEL_FILE, i.e. the path to the Python file containing the model architecture? I believe the model architecture file will contain only one class definition extended from torch.nn.modules.
2. Where can I locate the handler file that can be used for TorchServe inference logic?
Please help me out with the location of the model file and the handler file.
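For reference, a minimal sketch of loading the 600M checkpoint pointed to in the answer above (assuming the standard `transformers` auto classes; the architecture class itself ships with `transformers` rather than with the checkpoint repository):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoint named in the answer above; weights are fetched from the Hub.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
```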
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18174/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18173
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18173/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18173/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18173/events
|
https://github.com/huggingface/transformers/issues/18173
| 1,307,559,837
|
I_kwDOCUB6oc5N78ed
| 18,173
|
Can't Run UL2
|
{
"login": "cliangyu",
"id": 45140242,
"node_id": "MDQ6VXNlcjQ1MTQwMjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/45140242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cliangyu",
"html_url": "https://github.com/cliangyu",
"followers_url": "https://api.github.com/users/cliangyu/followers",
"following_url": "https://api.github.com/users/cliangyu/following{/other_user}",
"gists_url": "https://api.github.com/users/cliangyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cliangyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cliangyu/subscriptions",
"organizations_url": "https://api.github.com/users/cliangyu/orgs",
"repos_url": "https://api.github.com/users/cliangyu/repos",
"events_url": "https://api.github.com/users/cliangyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cliangyu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Looks like lm_head.weight is causing the problem. But I thought this item was in _keys_to_ignore_on_load_missing of T5ForConditionalGeneration. How could this happen?",
"google/ul2 · Splitting the model of multiple GPU's\r\nhttps://huggingface.co/google/ul2/discussions/4\r\n"
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: I guess I tried :)
### Who can help?
@patrickvonplaten, who led the porting of UL2.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/ul2", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16, device_map="auto").to("cuda")
```
Error message
```python
Exception has occurred: ValueError
weight is on the meta device, we need a `value` to put in on 1.
File "/media/ntu/volume1/home/s121md302_06/workspace/code/yalb/ul2_test.py", line 25, in <module>
model = T5ForConditionalGeneration.from_pretrained("google/ul2", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16, device_map='auto').to('cuda')
```

### Expected behavior
I expect the model to be successfully loaded with sharded parameters.
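A minimal sketch of a loading pattern that avoids this error, assuming `accelerate` is installed: with `device_map="auto"` the weights are already dispatched across devices (and possibly offloaded) at load time, so the trailing `.to('cuda')` call is dropped:

```python
import torch
from transformers import T5ForConditionalGeneration

# device_map="auto" places (and, with offload_folder, offloads) the weights,
# so no explicit .to("cuda") call follows.
model = T5ForConditionalGeneration.from_pretrained(
    "google/ul2",
    low_cpu_mem_usage=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    offload_folder="./offload_folder",
)
```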
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18173/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18172
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18172/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18172/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18172/events
|
https://github.com/huggingface/transformers/pull/18172
| 1,307,373,912
|
PR_kwDOCUB6oc47iZjJ
| 18,172
|
FIX: set save state in EarlyStoppingCallback
|
{
"login": "Richar-Du",
"id": 55051961,
"node_id": "MDQ6VXNlcjU1MDUxOTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/55051961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Richar-Du",
"html_url": "https://github.com/Richar-Du",
"followers_url": "https://api.github.com/users/Richar-Du/followers",
"following_url": "https://api.github.com/users/Richar-Du/following{/other_user}",
"gists_url": "https://api.github.com/users/Richar-Du/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Richar-Du/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Richar-Du/subscriptions",
"organizations_url": "https://api.github.com/users/Richar-Du/orgs",
"repos_url": "https://api.github.com/users/Richar-Du/repos",
"events_url": "https://api.github.com/users/Richar-Du/events{/privacy}",
"received_events_url": "https://api.github.com/users/Richar-Du/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18172). All of your documentation changes will be reflected on that endpoint.",
"As mentioned in the issue you link to, this is not the right fix. This callback is not responsible for saving, only for interrupting training.",
"control.should_save = True not work for me",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16620
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
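A minimal sketch of the pattern the review above points to (the callback name is hypothetical): `EarlyStoppingCallback` only interrupts training, so a checkpoint is requested by setting the control flag from a separate callback:

```python
from transformers import TrainerCallback

class SaveOnEvaluateCallback(TrainerCallback):
    """Hypothetical helper: asks the Trainer to checkpoint after each evaluation."""

    def on_evaluate(self, args, state, control, **kwargs):
        control.should_save = True
        return control
```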
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18172/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18172",
"html_url": "https://github.com/huggingface/transformers/pull/18172",
"diff_url": "https://github.com/huggingface/transformers/pull/18172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18172.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18171
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18171/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18171/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18171/events
|
https://github.com/huggingface/transformers/pull/18171
| 1,307,344,725
|
PR_kwDOCUB6oc47iTJy
| 18,171
|
add ONNX support for swin transformer
|
{
"login": "bibhabasumohapatra",
"id": 68384968,
"node_id": "MDQ6VXNlcjY4Mzg0OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/68384968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bibhabasumohapatra",
"html_url": "https://github.com/bibhabasumohapatra",
"followers_url": "https://api.github.com/users/bibhabasumohapatra/followers",
"following_url": "https://api.github.com/users/bibhabasumohapatra/following{/other_user}",
"gists_url": "https://api.github.com/users/bibhabasumohapatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bibhabasumohapatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bibhabasumohapatra/subscriptions",
"organizations_url": "https://api.github.com/users/bibhabasumohapatra/orgs",
"repos_url": "https://api.github.com/users/bibhabasumohapatra/repos",
"events_url": "https://api.github.com/users/bibhabasumohapatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/bibhabasumohapatra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18171). All of your documentation changes will be reflected on that endpoint.",
"Hey, @bibhabasumohapatra, Thanks for contributing to ONNX Config support. The PR looks almost good.\r\n\r\nCould you run `make fix-copies` to fix the CI, as stated in the CI error statement?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"To avoid your work from falling into limbo, I will ping @lewtun and @sgugger.",
"@lewtun could have a look at this PR?",
"Hi @bibhabasumohapatra, sorry just getting back to this PR - would you mind rebasing on `main` to resolve the merge conflicts and then pushing again to check the CI is green?\r\n\r\nAfter that, I think this will be good to go!",
"Sorry for the closed PR, actually while rebasing by mistake I clicked \"update branch\" on my repo, which deleted the commits and automatically closed the PR, I will do it again quickly with other PR @lewtun "
] | 1,658
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
Addresses #16308
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18171/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18171/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18171",
"html_url": "https://github.com/huggingface/transformers/pull/18171",
"diff_url": "https://github.com/huggingface/transformers/pull/18171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18171.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18170
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18170/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18170/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18170/events
|
https://github.com/huggingface/transformers/pull/18170
| 1,307,336,441
|
PR_kwDOCUB6oc47iRVj
| 18,170
|
Allow loading pretrained shared Pytorch checkpoints into flax models
|
{
"login": "Sea-Snell",
"id": 6655321,
"node_id": "MDQ6VXNlcjY2NTUzMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6655321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sea-Snell",
"html_url": "https://github.com/Sea-Snell",
"followers_url": "https://api.github.com/users/Sea-Snell/followers",
"following_url": "https://api.github.com/users/Sea-Snell/following{/other_user}",
"gists_url": "https://api.github.com/users/Sea-Snell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sea-Snell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sea-Snell/subscriptions",
"organizations_url": "https://api.github.com/users/Sea-Snell/orgs",
"repos_url": "https://api.github.com/users/Sea-Snell/repos",
"events_url": "https://api.github.com/users/Sea-Snell/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sea-Snell/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18170). All of your documentation changes will be reflected on that endpoint.",
"Oops! Thanks, just added that import.",
"Now you'll need to run `make style` to fix the formatting issues :-)",
"done!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
Motivation: sharded PyTorch checkpoints cannot currently be loaded into Flax models; supporting this may be desirable in some cases (e.g. "google/ul2").
Changes: I added a few lines to `modeling_flax_utils.py` to support this behavior. The behavior of the added code exactly matches how sharded checkpoints are loaded in `modeling_utils.py` for PyTorch models.
@patrickvonplaten, @patil-suraj
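A usage sketch of what this change enables, assuming the PR is merged: loading a sharded PyTorch checkpoint such as "google/ul2" directly into a Flax model:

```python
from transformers import FlaxT5ForConditionalGeneration

# from_pt=True converts the (sharded) PyTorch checkpoint to Flax on the fly.
model = FlaxT5ForConditionalGeneration.from_pretrained("google/ul2", from_pt=True)
```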
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18170/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18170",
"html_url": "https://github.com/huggingface/transformers/pull/18170",
"diff_url": "https://github.com/huggingface/transformers/pull/18170.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18170.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18169
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18169/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18169/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18169/events
|
https://github.com/huggingface/transformers/pull/18169
| 1,307,334,790
|
PR_kwDOCUB6oc47iRAH
| 18,169
|
Update translation.mdx
|
{
"login": "gorkemozkaya",
"id": 6454229,
"node_id": "MDQ6VXNlcjY0NTQyMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6454229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gorkemozkaya",
"html_url": "https://github.com/gorkemozkaya",
"followers_url": "https://api.github.com/users/gorkemozkaya/followers",
"following_url": "https://api.github.com/users/gorkemozkaya/following{/other_user}",
"gists_url": "https://api.github.com/users/gorkemozkaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gorkemozkaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gorkemozkaya/subscriptions",
"organizations_url": "https://api.github.com/users/gorkemozkaya/orgs",
"repos_url": "https://api.github.com/users/gorkemozkaya/repos",
"events_url": "https://api.github.com/users/gorkemozkaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/gorkemozkaya/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/18166
- [n/a] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [n/a] Did you write any new necessary tests?
## Who can review?
t5: @patrickvonplaten, @patil-suraj
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18169/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18169",
"html_url": "https://github.com/huggingface/transformers/pull/18169",
"diff_url": "https://github.com/huggingface/transformers/pull/18169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18169.patch",
"merged_at": 1658836600000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18168
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18168/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18168/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18168/events
|
https://github.com/huggingface/transformers/pull/18168
| 1,307,314,360
|
PR_kwDOCUB6oc47iMny
| 18,168
|
[DRAFT] Update group_texts in run_clm.py
|
{
"login": "spanglies",
"id": 6833217,
"node_id": "MDQ6VXNlcjY4MzMyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6833217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spanglies",
"html_url": "https://github.com/spanglies",
"followers_url": "https://api.github.com/users/spanglies/followers",
"following_url": "https://api.github.com/users/spanglies/following{/other_user}",
"gists_url": "https://api.github.com/users/spanglies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spanglies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spanglies/subscriptions",
"organizations_url": "https://api.github.com/users/spanglies/orgs",
"repos_url": "https://api.github.com/users/spanglies/repos",
"events_url": "https://api.github.com/users/spanglies/events{/privacy}",
"received_events_url": "https://api.github.com/users/spanglies/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
Had to make this change to prevent errors when fine-tuning on SageMaker. Without this change, there would be text groups that were too short.
# What does this PR do?
I made a small change to run_clm.py that fixed an error I received on Amazon SageMaker during fine-tuning.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18168/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18168",
"html_url": "https://github.com/huggingface/transformers/pull/18168",
"diff_url": "https://github.com/huggingface/transformers/pull/18168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18168.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18167
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18167/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18167/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18167/events
|
https://github.com/huggingface/transformers/issues/18167
| 1,307,306,699
|
I_kwDOCUB6oc5N6-rL
| 18,167
|
Group_texts in run_clm.py will add groups shorter than block_size on intermediately sized training sets.
|
{
"login": "spanglies",
"id": 6833217,
"node_id": "MDQ6VXNlcjY4MzMyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6833217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spanglies",
"html_url": "https://github.com/spanglies",
"followers_url": "https://api.github.com/users/spanglies/followers",
"following_url": "https://api.github.com/users/spanglies/following{/other_user}",
"gists_url": "https://api.github.com/users/spanglies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spanglies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spanglies/subscriptions",
"organizations_url": "https://api.github.com/users/spanglies/orgs",
"repos_url": "https://api.github.com/users/spanglies/repos",
"events_url": "https://api.github.com/users/spanglies/events{/privacy}",
"received_events_url": "https://api.github.com/users/spanglies/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"If you remove that line, you will get no training at all, since there is going to be 0 batches left. It is there to ensure a small dataset yields exactly one batch.",
"It looks like I must have hit some sort of an edge case. I think what happened was that it got to the end of my file and no longer had 1000 examples to provide the function `group_texts`. So in the case where the dataset lines up wrong, it won't work.\r\n\r\nPerhaps the `drop_last_batch` flag should be set by default during the group_texts phase and a parse_flag for \"small datasets\" should be introduced?",
"Using run_clm.py from the commit by @spanglies solved this issue (I am forever grateful, this has been frustrating). You have to remove references to telemetrics and to check_min_version() to make it work on Sagemaker with estimator/fit.\r\n\r\nHowever the same error shows up in evaluation, which I am disabling for now. Happy to finally have made it to a saved model, as that was my goal for this summer...",
"Hey glad that helped you @nittonfemton It wouldn't be hard to apply the same filter to the evaluation data. I hadn't because it was also my goal to have a trained model and I was less concerned with validating it. The way I had gotten it to work on sage maker was to apply the commit to v4.17.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,662
| 1,662
|
NONE
| null |
### System Info
SageMaker using transformers 4.17 and attempting to fine-tune GPT2 and GPT-Neo.
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Use a dataset of about 10 MB.
2. Run run_clm.py on SageMaker using the latest supported version (4.17 at the time of writing).
3. Partway through training, receive an error stating: [ValueError: expected sequence of length 1024 at dim 1 (got 507)](https://discuss.huggingface.co/t/valueerror-expected-sequence-of-length-1024-at-dim-1-got-507/20390)
### Expected behavior
group_texts in run_clm should drop all sequences that are not the correct block size to prevent such an error.
Additional context: this bug appears to have been introduced by commit 6f1adc43344a4ebe6fb1ecc018df9d6c092370cf.
Removing the check for total_length < block_size resolves the problem and training completes without error.
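For context, a paraphrase of the `group_texts` helper in run_clm.py around that commit (a sketch; `block_size` is derived elsewhere in the script, and the pinned version may differ in wording). The `total_length >= block_size` guard is the check whose removal is described above:

```python
from itertools import chain

block_size = 1024  # run_clm.py derives this from the model/tokenizer config

def group_texts(examples):
    # Concatenate all tokenized texts in the batch.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # The guard at issue: for datasets shorter than block_size it keeps one
    # short group instead of truncating total_length down to zero.
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split into chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
```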
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18167/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18166
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18166/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18166/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18166/events
|
https://github.com/huggingface/transformers/issues/18166
| 1,307,217,783
|
I_kwDOCUB6oc5N6o93
| 18,166
|
Getting error following the official docs for T5 translation fine-tuning: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'new_zeros'
|
{
"login": "gorkemozkaya",
"id": 6454229,
"node_id": "MDQ6VXNlcjY0NTQyMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6454229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gorkemozkaya",
"html_url": "https://github.com/gorkemozkaya",
"followers_url": "https://api.github.com/users/gorkemozkaya/followers",
"following_url": "https://api.github.com/users/gorkemozkaya/following{/other_user}",
"gists_url": "https://api.github.com/users/gorkemozkaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gorkemozkaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gorkemozkaya/subscriptions",
"organizations_url": "https://api.github.com/users/gorkemozkaya/orgs",
"repos_url": "https://api.github.com/users/gorkemozkaya/repos",
"events_url": "https://api.github.com/users/gorkemozkaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/gorkemozkaya/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I noticed the issue is due to using the PT model instead of the TF, but the documentation still needs to be fixed, because it is using the `model` for creating the `data_collator` before the model is actually loaded, for both TF and PT. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"https://github.com/huggingface/transformers/pull/18169#issuecomment-1186713812 solves the issue"
] | 1,658
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I follow the TF steps on [huggingface.co/docs/transformers/tasks/translation](https://huggingface.co/docs/transformers/tasks/translation), I get the error `'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'new_zeros'`. I created a Colab notebook that reproduces the issue
https://github.com/gorkemozkaya/Data-Science-Notes/blob/master/reproducing_bugs/Error_with_the_translation_fine_tuning_example.ipynb
### Expected behavior
Getting the data pipeline in the TF-dataset form without getting an error
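A minimal sketch of the ordering fix discussed in the comments above, assuming the `t5-small` checkpoint used in that guide: load the TF model before building the data collator, and request TF tensors from the collator:

```python
from transformers import AutoTokenizer, DataCollatorForSeq2Seq, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
# Load the TF model first, so the collator below can use it.
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, return_tensors="tf")
```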
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18166/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18165
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18165/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18165/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18165/events
|
https://github.com/huggingface/transformers/pull/18165
| 1,307,161,684
|
PR_kwDOCUB6oc47htXn
| 18,165
|
Fix beam search computing wrong `next_indices`
|
{
"login": "m43",
"id": 17498813,
"node_id": "MDQ6VXNlcjE3NDk4ODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/17498813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m43",
"html_url": "https://github.com/m43",
"followers_url": "https://api.github.com/users/m43/followers",
"following_url": "https://api.github.com/users/m43/following{/other_user}",
"gists_url": "https://api.github.com/users/m43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m43/subscriptions",
"organizations_url": "https://api.github.com/users/m43/orgs",
"repos_url": "https://api.github.com/users/m43/repos",
"events_url": "https://api.github.com/users/m43/events{/privacy}",
"received_events_url": "https://api.github.com/users/m43/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18165). All of your documentation changes will be reflected on that endpoint."
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
# What does this PR do?
Beam search used `next_indices = (next_tokens / vocab_size).long()` to compute the indices of the best beams. This, however, uses [torch.true_divide](https://pytorch.org/docs/stable/generated/torch.true_divide.html#torch-true-divide) which would lead to numerical errors and make the following snippet fail:
```py
import torch
next_tokens = torch.tensor([[0, 50257]], dtype=torch.int64, device='cuda:0')
vocab_size = 50257
expected_next_indices = torch.tensor([[0,1]], dtype=torch.int64, device='cuda:0')
next_indices = (next_tokens / vocab_size).long()
print(next_indices)
assert torch.all(next_indices == expected_next_indices) # Fails
```
The simple fix in this PR uses floor division to avoid the aforementioned problem:
```py
# ...
next_indices = torch.div(next_tokens, vocab_size, rounding_mode='floor').long()
print(next_indices)
assert torch.all(next_indices == expected_next_indices) # Passes
```
I dug out this bug while recomputing the beam search scores by hand for gpt2. If needed, I can add an end-to-end, high-level reproducibility test with gpt2 and the tricky input_ids, possibly as a unit test (see the sketch below).
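A minimal sketch of such a unit test, built from the snippets above (the test name is hypothetical):

```python
import torch

def test_next_indices_uses_floor_division():
    next_tokens = torch.tensor([[0, 50257]], dtype=torch.int64)
    vocab_size = 50257
    expected = torch.tensor([[0, 1]], dtype=torch.int64)
    # Floor division keeps the beam index exact; true division can land just
    # below the exact integer in floating point, so .long() truncates wrongly.
    next_indices = torch.div(next_tokens, vocab_size, rounding_mode="floor")
    assert torch.all(next_indices == expected)
```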
## Who can review?
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18165/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18165",
"html_url": "https://github.com/huggingface/transformers/pull/18165",
"diff_url": "https://github.com/huggingface/transformers/pull/18165.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18165.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18164
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18164/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18164/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18164/events
|
https://github.com/huggingface/transformers/issues/18164
| 1,307,102,980
|
I_kwDOCUB6oc5N6M8E
| 18,164
|
Cannot save TFSwinForImageClassification as SavedModel
|
{
"login": "ahmedlone127",
"id": 66001253,
"node_id": "MDQ6VXNlcjY2MDAxMjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/66001253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedlone127",
"html_url": "https://github.com/ahmedlone127",
"followers_url": "https://api.github.com/users/ahmedlone127/followers",
"following_url": "https://api.github.com/users/ahmedlone127/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedlone127/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedlone127/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedlone127/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedlone127/orgs",
"repos_url": "https://api.github.com/users/ahmedlone127/repos",
"events_url": "https://api.github.com/users/ahmedlone127/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedlone127/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @gante and @amyeroberts ",
"@amyeroberts is this related to what you've been investigating? If not, I can have a go at it :)",
"@gante Yep - I believe so. I've opened a PR here: https://github.com/huggingface/transformers/pull/18153",
"Hey @amyeroberts I updated my transformer code locally with the changes you made in the PR for swin but I still get the same result\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nOperatorNotAllowedInGraphError Traceback (most recent call last)\r\n[<ipython-input-4-637c488e6341>](https://localhost:8080/#) in <module>()\r\n----> 1 model.save_pretrained(\"test\",saved_model=True)\r\n\r\n2 frames\r\n[/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py](https://localhost:8080/#) in autograph_handler(*args, **kwargs)\r\n 1145 except Exception as e: # pylint:disable=broad-except\r\n 1146 if hasattr(e, \"ag_error_metadata\"):\r\n-> 1147 raise e.ag_error_metadata.to_exception(e)\r\n 1148 else:\r\n 1149 raise\r\n\r\nOperatorNotAllowedInGraphError: in user code:\r\n\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py\", line 979, in serving *\r\n output = self.call(inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py\", line 1457, in run_call_with_unpacked_inputs *\r\n return func(self, **unpacked_inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py\", line 1470, in call *\r\n outputs = self.swin(\r\n File \"/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py\", line 67, in error_handler **\r\n raise e.with_traceback(filtered_tb) from None\r\n\r\n OperatorNotAllowedInGraphError: Exception encountered when calling layer \"swin\" (type TFSwinMainLayer).\r\n \r\n in user code:\r\n \r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py\", line 1457, in run_call_with_unpacked_inputs *\r\n return func(self, **unpacked_inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py\", line 1160, in call *\r\n encoder_outputs = self.encoder(\r\n File \"/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py\", line 67, in error_handler **\r\n raise e.with_traceback(filtered_tb) from None\r\n \r\n OperatorNotAllowedInGraphError: Exception encountered when calling layer \"encoder\" (type TFSwinEncoder).\r\n \r\n in user code:\r\n \r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py\", line 905, in call *\r\n layer_outputs = layer_module(\r\n File \"/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py\", line 67, in error_handler **\r\n raise e.with_traceback(filtered_tb) from None\r\n \r\n OperatorNotAllowedInGraphError: Exception encountered when calling layer \"layers.0\" (type TFSwinStage).\r\n \r\n in user code:\r\n \r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py\", line 837, in call *\r\n layer_outputs = layer_module(\r\n File \"/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py\", line 67, in error_handler **\r\n raise e.with_traceback(filtered_tb) from None\r\n \r\n OperatorNotAllowedInGraphError: Exception encountered when calling layer \"blocks.0\" (type TFSwinLayer).\r\n \r\n in user code:\r\n \r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py\", line 732, in call *\r\n self.set_shift_and_window_size(input_dimensions)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py\", line 671, in set_shift_and_window_size *\r\n if min(input_resolution) <= self.window_size:\r\n \r\n 
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.\r\n \r\n \r\n Call arguments received:\r\n • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32)\r\n • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)')\r\n • head_mask=None\r\n • output_attentions=False\r\n • training=False\r\n \r\n \r\n Call arguments received:\r\n • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32)\r\n • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)')\r\n • head_mask=None\r\n • output_attentions=False\r\n • training=False\r\n \r\n \r\n Call arguments received:\r\n • hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32)\r\n • input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)')\r\n • head_mask=['None', 'None', 'None', 'None']\r\n • output_attentions=False\r\n • output_hidden_states=False\r\n • return_dict=True\r\n • training=False\r\n \r\n \r\n Call arguments received:\r\n • self=tf.Tensor(shape=(None, None, None, None), dtype=float32)\r\n • pixel_values=None\r\n • bool_masked_pos=None\r\n • head_mask=None\r\n • output_attentions=False\r\n • output_hidden_states=False\r\n • return_dict=True\r\n • training=False\r\n```\r\n\r\n\r\n",
"OK @ahmedlone127. Thanks for letting me know. I'll dig into this some more.",
"Thanks ",
"Following merging of #18153 the reproduction snippet runs on main without error. "
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Rocketknight1 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import requests
import tensorflow as tf
from PIL import Image
from transformers import AutoFeatureExtractor, TFSwinForImageClassification

swinModel = "microsoft/swin-tiny-patch4-window7-224"  # checkpoint named in the log below
swin_EXPORT_PATH = "./swin_savedmodel"  # placeholder export path

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained(swinModel)
model = TFSwinForImageClassification.from_pretrained(swinModel)
inputs = feature_extractor(images=image, return_tensors="tf")
outputs = model(inputs.pixel_values)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = tf.math.argmax(logits, -1).numpy()[0]
print("Predicted class:", model.config.id2label[predicted_class_idx])

class MySwin(TFSwinForImageClassification):
    @tf.function(
        input_signature=[
            {
                "pixel_values": tf.TensorSpec((None, None, None, None), tf.float32, name="serving1_pixel_values"),
            }
        ]
    )
    def serving1(self, inputs):
        outputs = self.call(pixel_values=inputs["pixel_values"])
        return self.serving_output(outputs)

myswin = MySwin.from_pretrained(swinModel)
tf.saved_model.save(
    myswin,
    swin_EXPORT_PATH,
    signatures={
        "serving1": myswin.serving1,
        # "serving2": mygpt2.serving2
    },
)
```
```
All model checkpoint layers were used when initializing MySwin.
All the layers of MySwin were initialized from the model checkpoint at microsoft/swin-tiny-patch4-window7-224.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MySwin for predictions without further training.
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError Traceback (most recent call last)
[<ipython-input-13-b219bb00369a>](https://localhost:8080/#) in <module>()
1 myswin = MySwin.from_pretrained(swinModel)
2 tf.saved_model.save(myswin, swin_EXPORT_PATH, signatures={
----> 3 "serving1": myswin.serving1,
4 # "serving2": mygpt2.serving2
5 })
14 frames
[/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py](https://localhost:8080/#) in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
OperatorNotAllowedInGraphError: in user code:
File "<ipython-input-11-84a42b1aca69>", line 10, in serving1 *
outputs = self.call(pixel_values=inputs["pixel_values"])
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1426, in run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 1439, in call *
outputs = self.swin(
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler **
raise e.with_traceback(filtered_tb) from None
OperatorNotAllowedInGraphError: Exception encountered when calling layer "swin" (type TFSwinMainLayer).
in user code:
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1426, in run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 1142, in call *
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler **
raise e.with_traceback(filtered_tb) from None
OperatorNotAllowedInGraphError: Exception encountered when calling layer "encoder" (type TFSwinEncoder).
in user code:
File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 906, in call *
layer_outputs = layer_module(
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler **
raise e.with_traceback(filtered_tb) from None
OperatorNotAllowedInGraphError: Exception encountered when calling layer "layers.0" (type TFSwinStage).
in user code:
File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 838, in call *
layer_outputs = layer_module(
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler **
raise e.with_traceback(filtered_tb) from None
OperatorNotAllowedInGraphError: Exception encountered when calling layer "blocks.0" (type TFSwinLayer).
in user code:
File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 733, in call *
self.set_shift_and_window_size(input_dimensions)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/swin/modeling_tf_swin.py", line 672, in set_shift_and_window_size *
if min(input_resolution) <= self.window_size:
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
Call arguments received:
• hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32)
• input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)')
• head_mask=None
• output_attentions=False
• training=False
Call arguments received:
• hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32)
• input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)')
• head_mask=None
• output_attentions=False
• training=False
Call arguments received:
• hidden_states=tf.Tensor(shape=(None, None, 96), dtype=float32)
• input_dimensions=('tf.Tensor(shape=(), dtype=int32)', 'tf.Tensor(shape=(), dtype=int32)')
• head_mask=['None', 'None', 'None', 'None']
• output_attentions=False
• output_hidden_states=False
• return_dict=True
• training=False
Call arguments received:
• self=tf.Tensor(shape=(None, None, None, None), dtype=float32)
• pixel_values=None
• bool_masked_pos=None
• head_mask=None
• output_attentions=False
• output_hidden_states=False
• return_dict=True
• training=False
```
### Expected behavior
It is supposed to produce a SavedModel, but instead I get the error above. The SavedModel is needed for TensorFlow Serving.
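For anyone hitting the same error: one workaround that may help (a sketch, not verified across versions) is to pin the spatial dimensions in the serving signature, so that Swin's window-size logic sees static Python ints instead of scalar tensors during tracing. The `(None, 3, 224, 224)` layout and the 224x224 size are assumptions for this checkpoint:
```python
class MySwinFixed(TFSwinForImageClassification):
    @tf.function(
        input_signature=[
            {
                # static height/width let shape inference return Python ints, so
                # `min(input_resolution) <= self.window_size` stays a Python-level
                # comparison instead of a tensor-to-bool conversion
                "pixel_values": tf.TensorSpec((None, 3, 224, 224), tf.float32, name="serving1_pixel_values"),
            }
        ]
    )
    def serving1(self, inputs):
        outputs = self.call(pixel_values=inputs["pixel_values"])
        return self.serving_output(outputs)
```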
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18164/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18163
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18163/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18163/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18163/events
|
https://github.com/huggingface/transformers/issues/18163
| 1,307,034,447
|
I_kwDOCUB6oc5N58NP
| 18,163
|
Fine tune TrOCR for persian language
|
{
"login": "PersianSpock",
"id": 16386426,
"node_id": "MDQ6VXNlcjE2Mzg2NDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/16386426?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PersianSpock",
"html_url": "https://github.com/PersianSpock",
"followers_url": "https://api.github.com/users/PersianSpock/followers",
"following_url": "https://api.github.com/users/PersianSpock/following{/other_user}",
"gists_url": "https://api.github.com/users/PersianSpock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PersianSpock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PersianSpock/subscriptions",
"organizations_url": "https://api.github.com/users/PersianSpock/orgs",
"repos_url": "https://api.github.com/users/PersianSpock/repos",
"events_url": "https://api.github.com/users/PersianSpock/events{/privacy}",
"received_events_url": "https://api.github.com/users/PersianSpock/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nI explain how to train TrOCR on a different language here: https://github.com/huggingface/transformers/issues/14195#issuecomment-1039204836",
"Hi Niels! Thank you for your response. the thing is that I use:\r\n ```\r\nfrom transformers import VisionEncoderDecoderModel\r\n\r\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(\"google/vit-base-patch16-224-in21k\", \"xlm-roberta-base\")\r\n model.to(device)\r\n ```\r\n\r\nAnd I use your Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_native_PyTorch code and at the end of the code my last result is this error:\r\n\r\nValueError: Input image size (384*384) doesn't match model (224*224).\r\n\r\nWhat's wrong?\r\n\r\n",
"It seems that the images you provide are of size 384x384, but the model (the ViT encoder) expects them to be of size 224x224.",
"I changed the images size but it still says that\r\n",
"```\r\nimport os, sys\r\n\r\npath = '/content/drive/MyDrive/data_test/image/'\r\nnew_path = '/content/drive/MyDrive/data_test/newimage/'\r\ndirs = os.listdir( path )\r\n\r\ndef resize():\r\n for item in dirs:\r\n source = path + item\r\n newsource = new_path + item\r\n im = Image.open(source)\r\n f, e = os.path.splitext(source)\r\n imResize = im.resize((224,224), Image.ANTIALIAS)\r\n imResize.save(newsource)\r\n```",
"this part still gives 384, 384:\r\n```\r\nencoding = train_dataset[0]\r\nfor k,v in encoding.items():\r\n print(k, v.shape)\r\nencoding = eval_dataset[0]\r\nfor k,v in encoding.items():\r\n print(k, v.shape)`\r\n```",
"the problem in error seems to be because of the ViT and in your own code the training set is 384*384 as the last piece of code I commented shows \r\nwhat's wrong?\r\n",
"The model I'm fine-tuning in my [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_native_PyTorch.ipynb) expects images to be of size 384, as seen [here](https://huggingface.co/microsoft/trocr-base-printed/blob/main/config.json#L104).",
"I used \"google/vit-base-patch16-224-in21k\" and \"xlm-roberta-base\". the first one you suggested in https://github.com/huggingface/transformers/issues/14195#issuecomment-1039204836\r\nwhat is the issue that says the model has the picture of size 224*224?",
"Yes, `google/vit-base-patch16-224-in21k` expects images to be of size 224, but you're resizing the images to 384.",
"Thank you it got solved! How much should be my validation CER at the end? what range is good enough?",
"I'm fine tuning trocr for Farsi language and I did it once using your code and it was ok and now with another larger dataset I get different label sizes and it's a problem. \r\nafter this part:\r\n`encoding = train_dataset[0]\r\nfor k,v in encoding.items():\r\n print(k, v.shape)\r\nencoding = eval_dataset[0]\r\nfor k,v in encoding.items():\r\n print(k, v.shape)`\r\n\r\nI get:\r\n\r\n> pixel_values torch.Size([3, 224, 224])\r\n> labels torch.Size([261])\r\n> pixel_values torch.Size([3, 224, 224])\r\n> labels torch.Size([272])\r\n\r\nlabel torch sizes are not the same although I'm using https://github.com/NielsRogge/Transformers-Tutorials/tree/master/TrOCR\r\nand it the code it says that the max_length for labels should be 128. how can I change the code so it'll be the same size for all of the data?",
"> How much should be my validation CER at the end?\r\n\r\nCER (character error rate) is a number between 0 and 1, the closer to 0 the better.\r\n\r\nRegarding the labels, you need to make sure each target sequence gets padded/truncated to the same length, to make batching possible.\r\n",
"I'm using your own code. it has:\r\n`labels = self.processor.tokenizer(text, \r\n padding=\"max_length\", \r\n max_length=self.max_target_length).input_ids`\r\n\r\nand \r\n`self.max_target_length = 128`\r\n\r\nhow am I getting different numbers?",
"Yes it doesn't have `truncation=True`, which you need to add.",
"Note that the sequence length of 128 was just a choice, you can set it to whatever you think is needed for the language you're training on. If you're training on very long sentences, you might need to increase it.",
"Thank you so much it worked out.",
"@PersianSpock which processor do you use for training on an other language ? \r\n\r\ndo you use a processor which is build up of the same encoders and decoders, or do you use the handwritten stage 1 processor, which is pre-trained already ? \r\n\r\nit would really help, If you could post your Model and processor initialization. And maybe also your config. \r\nThank you! ",
"@jonas-da it says here: https://huggingface.co/docs/transformers/main/model_doc/trocr#transformers.TrOCRProcessor\r\n\r\nsince I am using xlm-roberta-large I do it like this:\r\n\r\n```\r\nfeature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')\r\ntokenizer = AutoTokenizer.from_pretrained(\"xlm-roberta-base\")\r\n\r\nprocessor = TrOCRProcessor(feature_extractor = feature_extractor, tokenizer = tokenizer\r\n```\r\n\r\n",
"Ah thank you! @PersianSpock \r\n\r\nand as above mentioned you use \r\n\r\n`model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(\"google/vit-base-patch16-224-in21k\", \"xlm-roberta-base\")`\r\n\r\nas model or ? \r\n\r\nOne more question. How much Training Data do you use and what CER did you achieved ? \r\n\r\nThank you very much!",
"I used base and large both and for the data I used 7000 data and it still wasn't enough and I think I should use more.",
"Closing this issue as it seems resolved.",
"@PersianSpock how did you prepare dataset for train trocr on other language ? ",
"> Thank you it got solved! How much should be my validation CER at the end? what range is good enough?\r\nhi, may I ask how did you solve it? I have the same problem but I got stuck and don't know what to do"
] | 1,658
| 1,706
| 1,666
|
NONE
| null |
### Model description
Hello!
I'm a newbie and I am trying to use TrOCR to recognize Persian digital text (like PDFs) from images. I don't know what the requirements are for fine-tuning a pre-trained TrOCR model with a multilingual cased decoder. I've followed this post https://github.com/huggingface/transformers/issues/15823 but the information there doesn't work out for Persian.
Please guide me on how I should proceed. I've seen that there are some models in https://huggingface.co/models?language=fa&sort=downloads but I can't figure out how to use them.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_
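For reference, the recipe that eventually worked in the discussion above warm-starts a `VisionEncoderDecoderModel` from a ViT encoder and a multilingual decoder. A minimal sketch; the generation-related config fields at the end follow the usual VisionEncoderDecoder recipe and are assumptions here:
```python
from transformers import (
    AutoTokenizer,
    TrOCRProcessor,
    VisionEncoderDecoderModel,
    ViTFeatureExtractor,
)

# warm-start the encoder and decoder from pretrained checkpoints
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "xlm-roberta-base"
)

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
processor = TrOCRProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# generation-related config (assumed; set so that generate() works out of the box)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
```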
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18163/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18162
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18162/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18162/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18162/events
|
https://github.com/huggingface/transformers/issues/18162
| 1,306,910,673
|
I_kwDOCUB6oc5N5d_R
| 18,162
|
longt5 error in step 13 when torch.distributed.launch
|
{
"login": "whaleloops",
"id": 31370581,
"node_id": "MDQ6VXNlcjMxMzcwNTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/31370581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whaleloops",
"html_url": "https://github.com/whaleloops",
"followers_url": "https://api.github.com/users/whaleloops/followers",
"following_url": "https://api.github.com/users/whaleloops/following{/other_user}",
"gists_url": "https://api.github.com/users/whaleloops/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whaleloops/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whaleloops/subscriptions",
"organizations_url": "https://api.github.com/users/whaleloops/orgs",
"repos_url": "https://api.github.com/users/whaleloops/repos",
"events_url": "https://api.github.com/users/whaleloops/events{/privacy}",
"received_events_url": "https://api.github.com/users/whaleloops/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @whaleloops and sorry for the late response. There exists a hotfix if you filter out sequences shorter than 16 tokens (to avoid examples with empty tglobal attn).\n\nLemme know if it helps. :)\n\nUnfortunately, I don't have enough spare time to look at more proper fix and send PR:/",
"That's a tricky one it seems @whaleloops ! An alternative to filtering out short sequences could also be to always pad until max length\r\n\r\n",
"Thanks @stancld , I confirmed that issue resolved after hotfix.\r\nThough this filter excludes a few examples\r\nsplit before after\r\ntrain 119924 117108\r\nvalid 6633 6631\r\ntest 6658 6658"
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### System Info
Hardware:
6 Quadro RTX 6000 and 2 A100-40GB gpus, but I only used 2 A100-40GB gpus for this task.
Env:
transformers==4.20.1
torch==1.11.0+cu113
### Who can help?
@patrickvonplaten @stancld
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
[@Stancld](https://huggingface.co/Stancld)
When using DDP, I encountered the following error in the middle of an epoch while trying to run your code. The task is pubmed-summarization and I used [run_summarization.py](https://github.com/huggingface/transformers/blob/v4.20.1/examples/pytorch/summarization/run_summarization.py). Is there anything particular about step 13? Any suggestions for resolving this error?
```
1%|█ | 13/1872 [16:51<43:45:35, 84.74s/it]Traceback (most recent call last):
File "run_summarization.py", line 737, in
main()
File "run_summarization.py", line 656, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train
return inner_training_loop(
File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/transformers/trainer.py", line 1649, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/transformers/trainer.py", line 2345, in training_step
loss = self.compute_loss(model, inputs)
File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/transformers/trainer.py", line 2377, in compute_loss
outputs = model(**inputs)
File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/miniconda3/envs/t5long/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 947, in forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by
making sure all forward function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 6
```
To Reproduce: almost same as [here](https://huggingface.co/Stancld/longt5-tglobal-large-16384-pubmed-3k_steps) and your [wb](https://wandb.ai/stancld/LongT5/runs/1lwncl8a/overview?workspace=user-stancld).
```
CUDA_VISIBLE_DEVICES=4,5 python -m torch.distributed.launch --nproc_per_node 2 --master_port 56666 run_summarization.py \
    --model_name_or_path Stancld/longt5-tglobal-large-16384-pubmed-3k_steps \
    --do_train --do_eval --do_predict \
    --dataset_name ccdv/pubmed-summarization \
    --max_source_length 16384 --max_target_length 512 \
    --per_device_train_batch_size 1 --gradient_accumulation_steps 64 \
    --optim adafactor --learning_rate 0.001 --lr_scheduler_type constant --num_train_epochs 1 --gradient_checkpointing \
    --bf16=True --per_device_eval_batch_size 2 --predict_with_generate --generation_num_beams 1 --generation_max_length 512 \
    --output_dir ./tmp/longt5_pubmed --run_name LongT5-pubmed-16k-512-bs_128 --report_to all \
    --logging_steps 100 --eval_steps 2000 --evaluation_strategy steps --ddp_find_unused_parameters=False --no_cuda=False
```
Here is the failed [wandb](https://wandb.ai/whaleloops/pubmed_sum/runs/19h5mp66/overview?workspace=).
I tried to run with 1 GPU, and it works for 50+ steps without the error above. I also verified that ddp works for LED.
### Expected behavior
DDP training should behave like single-GPU training, without the error above.
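For reference, the hotfix suggested in the comments above (dropping sequences shorter than 16 tokens, so no example ends up with empty transient-global attention) can be applied before training. A sketch; the `article` column name is taken from the ccdv/pubmed-summarization dataset:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-large")
raw = load_dataset("ccdv/pubmed-summarization")

# drop articles shorter than 16 tokens before tokenizing for training
raw = raw.filter(lambda ex: len(tokenizer(ex["article"]).input_ids) >= 16)
```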
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18162/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18161
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18161/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18161/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18161/events
|
https://github.com/huggingface/transformers/pull/18161
| 1,306,896,228
|
PR_kwDOCUB6oc47g86Q
| 18,161
|
Fix incorrect type hint for lang in run_summarization.py
|
{
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes an incorrect type hint for the `lang` argument. It was `str` but it should be `Optional[str]` as its default value is `None`. This was reported as an error by `mypy`:
```bash
examples/pytorch/summarization/run_summarization.py:127: error: Incompatible types in assignment (expression has type "None", variable has type "str")
```
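For illustration, the corrected annotation looks like this (field layout assumed from the example script):
```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataTrainingArguments:
    # `Optional[str]` instead of `str`, since the default is None
    lang: Optional[str] = field(default=None, metadata={"help": "Language id for summarization."})
```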
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18161/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18161",
"html_url": "https://github.com/huggingface/transformers/pull/18161",
"diff_url": "https://github.com/huggingface/transformers/pull/18161.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18161.patch",
"merged_at": 1658130798000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18160
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18160/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18160/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18160/events
|
https://github.com/huggingface/transformers/issues/18160
| 1,306,822,835
|
I_kwDOCUB6oc5N5Iiz
| 18,160
|
Stride in BERT fast tokenizer doesn't work as I expected
|
{
"login": "mjeensung",
"id": 44629366,
"node_id": "MDQ6VXNlcjQ0NjI5MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/44629366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mjeensung",
"html_url": "https://github.com/mjeensung",
"followers_url": "https://api.github.com/users/mjeensung/followers",
"following_url": "https://api.github.com/users/mjeensung/following{/other_user}",
"gists_url": "https://api.github.com/users/mjeensung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mjeensung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mjeensung/subscriptions",
"organizations_url": "https://api.github.com/users/mjeensung/orgs",
"repos_url": "https://api.github.com/users/mjeensung/repos",
"events_url": "https://api.github.com/users/mjeensung/events{/privacy}",
"received_events_url": "https://api.github.com/users/mjeensung/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @mjeensung,\r\n\r\nFrom my point of view, the result returned by the tokenizer is the one expected given the arguments you specified to the tokenizer, and the stride is of length 128.\r\n\r\n In your example, the tokenized example is composed of a text pair:\r\n- text: 'Big Little Lies (TV series)'\r\n- text pair: ' Despite originally being billed as a miniseries, HBO renewed the series for a second season. Production on the second season began in March 2018 and is set to premiere in 2019. All seven episodes are being written by Kelley and directed by Andrea Arnold. On August 6, 2014, it was announced Nicole Kidman and Reese Witherspoon had optioned the screen rights to Liane Moriarty\\'s novel \"Big Little Lies\". The actresses were expected to develop the project as a film in which they would both star. Bruna Papandrea and Per Saari were set to executive produce alongside Kidman and Witherspoon. Moriarty was also expected to produce as well. On **[MASK]** 25, 2014, it was announced that Kidman and Witherspoon had decided to develop the project into a limited television series instead of the originally planned film. Additionally, it was announced that television series would be written by David E. Kelley. On May 8, 2015, it was announced that HBO had given the production a series order and that in addition to writing, Kelley would also executive produce. On October 23, 2015, it was reported that Jean-Marc Vallée was in talks to direct the first episode of the series with the potential to direct more. On December 17, 2015, it was announced that Vallée would direct all seven episodes of the series. On November 28, 2016, it was announced that the series would premiere on February 19, 2017.'\r\n\r\nYou ask for only the second sentence to be truncated: the stride will have to be taken from the text pair only. Schematically, the first 2 inputs returned will be:\r\n\r\n\r\n\r\nSo, to know the stride (based on the mask tokens of your input), I think the formula is : \r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased', do_lower_case=False, use_fast=True)\r\nstride = 128\r\n\r\ntokenized_examples = tokenizer(\r\n ['Big Little Lies (TV series)'],\r\n [' Despite originally being billed as a miniseries, HBO renewed the series for a second season. Production on the second season began in March 2018 and is set to premiere in 2019. All seven episodes are being written by Kelley and directed by Andrea Arnold. On August 6, 2014, it was announced Nicole Kidman and Reese Witherspoon had optioned the screen rights to Liane Moriarty\\'s novel \"Big Little Lies\". The actresses were expected to develop the project as a film in which they would both star. Bruna Papandrea and Per Saari were set to executive produce alongside Kidman and Witherspoon. Moriarty was also expected to produce as well. On [MASK] 25, 2014, it was announced that Kidman and Witherspoon had decided to develop the project into a limited television series instead of the originally planned film. Additionally, it was announced that television series would be written by David E. Kelley. On May 8, 2015, it was announced that HBO had given the production a series order and that in addition to writing, Kelley would also executive produce. On October 23, 2015, it was reported that Jean-Marc Vallée was in talks to direct the first episode of the series with the potential to direct more. 
On December 17, 2015, it was announced that Vallée would direct all seven episodes of the series. On November 28, 2016, it was announced that the series would premiere on February 19, 2017.'],\r\n truncation=\"only_second\" ,\r\n max_length=192,\r\n stride=stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n)\r\n\r\ntokens_ex_0 = tokenizer.convert_ids_to_tokens(tokenized_examples.input_ids[0])\r\ntokens_ex_1 = tokenizer.convert_ids_to_tokens(tokenized_examples.input_ids[1])\r\n\r\nsep_position = tokens_ex_0.index(\"[SEP]\")\r\nlen_sent_0 = sep_position + 1\r\npos_mask_window0 = tokens_ex_0.index('[MASK]')\r\npos_mask_window1 = tokens_ex_1.index('[MASK]')\r\n\r\nactual_stride = len(tokens_ex_0) - 1 - pos_mask_window0 + pos_mask_window1 - len_sent_0\r\n```\r\nand the result is 128.\r\n\r\nDon't hesitate to tell me if this doesn't answer your question!",
"Thanks @SaulLu !\r\n\r\nI was confused because the tokenized results was different from the ones processed by [squad.py](https://github.com/huggingface/transformers/blob/main/src/transformers/data/processors/squad.py#L187). But I found that [squad.py](https://github.com/huggingface/transformers/blob/main/src/transformers/data/processors/squad.py#L187) re-defines the stride based on max_seq_length and len(truncated_query). \r\n<img width=\"803\" alt=\"Screen Shot 2022-07-18 at 11 15 04 PM\" src=\"https://user-images.githubusercontent.com/44629366/179656783-892c2d88-e99c-42d1-a5c1-e69be90f8c61.png\">\r\n\r\nThanks for correcting my misunderstanding. Now it's clear to me.\r\n\r\n"
] | 1,657
| 1,660
| 1,660
|
NONE
| null |
### System Info
Hi,
I'm using 'bert-base-cased' with the fast tokenizer.
I set the stride to 128, but the stride I observe in the tokenized results isn't 128. In the following reproduction script, the stride between two windows is only 54. Is this a bug or intentional?
### Who can help?
@LysandreJik @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased', do_lower_case=False, use_fast=True)
stride = 128
tokenized_examples = tokenizer(
['Big Little Lies (TV series)'],
[' Despite originally being billed as a miniseries, HBO renewed the series for a second season. Production on the second season began in March 2018 and is set to premiere in 2019. All seven episodes are being written by Kelley and directed by Andrea Arnold. On August 6, 2014, it was announced Nicole Kidman and Reese Witherspoon had optioned the screen rights to Liane Moriarty\'s novel "Big Little Lies". The actresses were expected to develop the project as a film in which they would both star. Bruna Papandrea and Per Saari were set to executive produce alongside Kidman and Witherspoon. Moriarty was also expected to produce as well. On [MASK] 25, 2014, it was announced that Kidman and Witherspoon had decided to develop the project into a limited television series instead of the originally planned film. Additionally, it was announced that television series would be written by David E. Kelley. On May 8, 2015, it was announced that HBO had given the production a series order and that in addition to writing, Kelley would also executive produce. On October 23, 2015, it was reported that Jean-Marc Vallée was in talks to direct the first episode of the series with the potential to direct more. On December 17, 2015, it was announced that Vallée would direct all seven episodes of the series. On November 28, 2016, it was announced that the series would premiere on February 19, 2017.'],
truncation="only_second" ,
max_length=192,
stride=stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
pos_mask_window0 = tokenizer.convert_ids_to_tokens(tokenized_examples['input_ids'][0]).index('[MASK]')
pos_mask_window1 = tokenizer.convert_ids_to_tokens(tokenized_examples['input_ids'][1]).index('[MASK]')
print('expected stride: ', stride)
print('actual stride: ', pos_mask_window0 - pos_mask_window1)
>> expected stride: 128
>> actual stride: 54
```
The library versions I'm using are as follows:
transformers 4.13.0
tokenizers 0.10.1
### Expected behavior
From the reproduction script above, I expect the observed stride is the same as the defined stride (i.e., 128)
```
>> expected stride: 128
>> actual stride: 128
```
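For reference, here is a small sketch (toy inputs, padding omitted so indices stay simple) that measures the overlap directly on the second sequence, which is where `stride` applies under `truncation="only_second"` per the discussion above; the two-token question and 16-token stride are assumptions of the toy setup:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)
enc = tokenizer(
    "short question",
    "a much longer context " * 50,
    truncation="only_second",
    max_length=64,
    stride=16,
    return_overflowing_tokens=True,
)
w0, w1 = enc["input_ids"][0], enc["input_ids"][1]
sep = w0.index(tokenizer.sep_token_id)
# the last 16 second-sequence tokens of window 0 reappear at the start of window 1
assert w0[-17:-1] == w1[sep + 1 : sep + 17]
```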
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18160/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18159
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18159/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18159/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18159/events
|
https://github.com/huggingface/transformers/issues/18159
| 1,306,799,230
|
I_kwDOCUB6oc5N5Cx-
| 18,159
|
pretrain longT5
|
{
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"https://arxiv.org/pdf/2112.07916.pdf\r\n\r\nAll the requested information is on the paper. I would recommend sending questions over to https://github.com/google-research/longt5 since the authors will be far more able to answer specific questions.",
"@reelmath thanks alot! I already know that but the steps are in c++ script and no clear documentation to generate the corpus and using it to calculate loss. Since in Pegasus these are two losses. what is not clear do they calculate just one loss or two losses in longt5 during pretraining?\r\nbeside I want to pretrain the model from huggingface.\r\nhttps://github.com/google-research/longt5/issues/7#issue-1319140379",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
Could you please provide the steps for pretraining LongT5 on both the MLM and PSG objectives?
What denoising rates and other details are used?
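For context, the MLM half of the question refers to T5-style span corruption. A rough sketch follows; the 15% noise density and mean span length of 3 are the T5 defaults, not confirmed LongT5 settings, and the final target sentinel is omitted for brevity:
```python
import random

def span_corrupt(tokens, sentinels, noise_density=0.15, mean_span_len=3):
    """Replace random spans with sentinels; targets hold the dropped spans."""
    n_noise = max(1, round(len(tokens) * noise_density))
    inputs, targets = [], []
    i, s = 0, 0
    while i < len(tokens):
        if n_noise > 0 and s < len(sentinels) and random.random() < noise_density:
            # span length capped by the remaining noise budget and sequence end
            span = min(random.randint(1, 2 * mean_span_len - 1), n_noise, len(tokens) - i)
            inputs.append(sentinels[s])
            targets.append(sentinels[s])
            targets.extend(tokens[i : i + span])
            i += span
            n_noise -= span
            s += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

inp, tgt = span_corrupt("the quick brown fox jumps over the lazy dog".split(),
                        ["<extra_id_0>", "<extra_id_1>"])
```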
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18159/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18158
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18158/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18158/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18158/events
|
https://github.com/huggingface/transformers/issues/18158
| 1,306,772,068
|
I_kwDOCUB6oc5N48Jk
| 18,158
|
LongT5 Summarization Example Not Working
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"update transformer to 4.20.0+ should solve this issue.\r\n\r\nAlso I don't think you need --source_prefix \"summarize: \" according to the paper.",
"@whaleloops It works, thanks."
] | 1,657
| 1,658
| 1,658
|
NONE
| null |
### System Info
- OS: Ubuntu 20.04.4 LTS focal
- Conda: 4.12.0
- Python: 3.7.14
- Pip: 22.1.2
- Torch: 1.12.0
- Transfromers: 4.16.2
- NVIDIA-SMI: 510.73.05
- Nvcc -V: cuda V11.3.109
### Who can help?
@patrickvonplaten, @ydshieh, @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`git clone --branch v4.16.2-release https://github.com/huggingface/transformers`
Example from [Transformers/examples/pytorch/summarization](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization), the only change is the `--model_name_or_path`
```
python transformers/examples/pytorch/summarization/run_summarization.py \
--model_name_or_path google/long-t5-tglobal-base \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
**Error:**
```
[INFO|configuration_utils.py:644] 2022-07-16 13:23:14,077 >> loading configuration file https://huggingface.co/google/long-t5-tglobal-base/resolve/main/config.json from cache at /home/good/.cache/huggingface/transformers/1b9067139467923bb0ea7749ceb5694acb0950b479ad1ebe47d9014180af8c31.69c5bfb92a1a084ead5ef0d9c9c9f09bac4f07cfd875433aa8fab59199208a7f
Traceback (most recent call last):
File "transformers/examples/pytorch/summarization/run_summarization.py", line 698, in <module>
main()
File "transformers/examples/pytorch/summarization/run_summarization.py", line 371, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/home/good/anaconda3/envs/gpt1/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 632, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/home/good/anaconda3/envs/gpt1/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 347, in __getitem__
raise KeyError(key)
KeyError: 'longt5'
```
### Expected behavior
The LongT5 model from [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) should start training like a normal T5 model ([T5-base](https://huggingface.co/t5-base)).
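For reference, the `longt5` model type was registered in Transformers v4.20, per the fix noted in the comments above; a quick environment check (a sketch) before running the script:
```python
from packaging import version
import transformers

# LongT5 support landed in v4.20.0; older versions raise KeyError: 'longt5'
assert version.parse(transformers.__version__) >= version.parse("4.20.0"), transformers.__version__
from transformers import LongT5ForConditionalGeneration  # import succeeds on >= 4.20
```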
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18158/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18157
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18157/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18157/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18157/events
|
https://github.com/huggingface/transformers/issues/18157
| 1,306,743,974
|
I_kwDOCUB6oc5N41Sm
| 18,157
|
MaskFormer documentation - `is_thing_map`
|
{
"login": "morrisalp",
"id": 8263996,
"node_id": "MDQ6VXNlcjgyNjM5OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morrisalp",
"html_url": "https://github.com/morrisalp",
"followers_url": "https://api.github.com/users/morrisalp/followers",
"following_url": "https://api.github.com/users/morrisalp/following{/other_user}",
"gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions",
"organizations_url": "https://api.github.com/users/morrisalp/orgs",
"repos_url": "https://api.github.com/users/morrisalp/repos",
"events_url": "https://api.github.com/users/morrisalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/morrisalp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @NielsRogge ",
"@LysandreJik @NielsRogge I updated the MaskFormer docs to reflect the current code and added a [PR](https://github.com/huggingface/transformers/pull/18423)\r\n\r\n",
"I merged the PR, closing this issue!"
] | 1,657
| 1,659
| 1,659
|
NONE
| null |
The MaskFormer documentation states: "Both tasks can be solved using [MaskFormerForInstanceSegmentation](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/maskformer#transformers.MaskFormerForInstanceSegmentation) output, the latter needs an additional is_thing_map to know which instances must be merged together.."
However, `is_thing_map` does not appear in the source code, and it looks like this was replaced with `label_ids_to_fuse`.
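For illustration, a sketch of the API the docs should describe; the method name and return keys are taken from the current source, while the demo image and the fused label ids are assumptions:
```python
import requests
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")

outputs = model(**feature_extractor(images=image, return_tensors="pt"))

# `label_ids_to_fuse` replaces the old `is_thing_map`: instances of the listed
# ("stuff") label ids are merged into a single segment
result = feature_extractor.post_process_panoptic_segmentation(outputs, label_ids_to_fuse={0})[0]
panoptic_map, segments_info = result["segmentation"], result["segments_info"]
```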
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18157/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18156
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18156/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18156/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18156/events
|
https://github.com/huggingface/transformers/pull/18156
| 1,306,673,741
|
PR_kwDOCUB6oc47gTN6
| 18,156
|
FIX: Typo
|
{
"login": "ayansengupta17",
"id": 14333284,
"node_id": "MDQ6VXNlcjE0MzMzMjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14333284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayansengupta17",
"html_url": "https://github.com/ayansengupta17",
"followers_url": "https://api.github.com/users/ayansengupta17/followers",
"following_url": "https://api.github.com/users/ayansengupta17/following{/other_user}",
"gists_url": "https://api.github.com/users/ayansengupta17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayansengupta17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayansengupta17/subscriptions",
"organizations_url": "https://api.github.com/users/ayansengupta17/orgs",
"repos_url": "https://api.github.com/users/ayansengupta17/repos",
"events_url": "https://api.github.com/users/ayansengupta17/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayansengupta17/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18156/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18156",
"html_url": "https://github.com/huggingface/transformers/pull/18156",
"diff_url": "https://github.com/huggingface/transformers/pull/18156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18156.patch",
"merged_at": 1658151968000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18155
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18155/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18155/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18155/events
|
https://github.com/huggingface/transformers/pull/18155
| 1,306,658,812
|
PR_kwDOCUB6oc47gQU9
| 18,155
|
Fix check for falsey inputs in run_summarization
|
{
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In the PyTorch version of `run_summarization.py`, there is a check that excludes examples where the source document and target summary are both `None`.
https://github.com/huggingface/transformers/blob/ccc089780415445768bcfd3ac4418cec20353484/examples/pytorch/summarization/run_summarization.py#L516-L520
I think this should be relaxed to check for _falsey_ inputs instead, because some datasets, like MultiNews, contain examples with empty strings:
```python
from datasets import load_dataset
multi_news = load_dataset("multi_news", split="validation")
assert not multi_news[4850]["document"]
```
and these are not caught by the `is not None` check.
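Concretely, the relaxed check looks like this (a sketch of the loop linked above):
```python
inputs, targets = [], []
for i in range(len(examples[text_column])):
    # truthiness filters out empty strings as well as None
    if examples[text_column][i] and examples[summary_column][i]:
        inputs.append(examples[text_column][i])
        targets.append(examples[summary_column][i])
```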
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18155/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18155",
"html_url": "https://github.com/huggingface/transformers/pull/18155",
"diff_url": "https://github.com/huggingface/transformers/pull/18155.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18155.patch",
"merged_at": 1658130632000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18154
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18154/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18154/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18154/events
|
https://github.com/huggingface/transformers/pull/18154
| 1,306,405,721
|
PR_kwDOCUB6oc47fY8I
| 18,154
|
add ONNX support for LeViT
|
{
"login": "gcheron",
"id": 12097018,
"node_id": "MDQ6VXNlcjEyMDk3MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/12097018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gcheron",
"html_url": "https://github.com/gcheron",
"followers_url": "https://api.github.com/users/gcheron/followers",
"following_url": "https://api.github.com/users/gcheron/following{/other_user}",
"gists_url": "https://api.github.com/users/gcheron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gcheron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcheron/subscriptions",
"organizations_url": "https://api.github.com/users/gcheron/orgs",
"repos_url": "https://api.github.com/users/gcheron/repos",
"events_url": "https://api.github.com/users/gcheron/events{/privacy}",
"received_events_url": "https://api.github.com/users/gcheron/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Pinging @lewtun and @sgugger for approval.\r\nOne CI test failed but it doesn't seem to be related to this PR.",
"You are welcome, thank you for your work!\r\nYes, following the doc I have already run this command and it passed all (slow) tests ;)"
] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds ONNX support for LeViT. Linked to [#16308](https://github.com/huggingface/transformers/issues/16308).
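For reference, once merged the export should work through the standard `transformers.onnx` CLI (a sketch; the checkpoint name and feature flag are assumptions on my part, not taken from this PR):
```bash
# Export a LeViT checkpoint to ONNX (sketch; assumes the image-classification feature is registered)
python -m transformers.onnx --model=facebook/levit-128S --feature=image-classification onnx/
```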
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18154/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18154/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18154",
"html_url": "https://github.com/huggingface/transformers/pull/18154",
"diff_url": "https://github.com/huggingface/transformers/pull/18154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18154.patch",
"merged_at": 1658150228000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18153
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18153/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18153/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18153/events
|
https://github.com/huggingface/transformers/pull/18153
| 1,306,377,202
|
PR_kwDOCUB6oc47fS6Y
| 18,153
|
Update serving code to enable `saved_model=True`
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@amyeroberts thank you for digging into this one. An important one albeit. \r\n\r\n> We can't hard code the channel dimensions as e.g. 3 as we want to support both RGB and greyscale images (although testing this locally does work).\r\n\r\nCould you shed more details on the locally testing you're referring to here?",
"> Could you shed more details on the locally testing you're referring to here?\r\n\r\n@sayakpaul Sure :) In the test `test_prepare_serving_output`, if serving outputs are got by calling `serving` directly i.e. \r\n```\r\n inputs = self._prepare_for_class(inputs_dict, model_class)\r\n serving_outputs = model.serving(inputs)\r\n```\r\nThis will fail for vision models with the channel dim error I posted above. If I changed the input signature such that channel dimension is hard coded then it runs e.g. \r\n\r\n```\r\n @tf.function(\r\n input_signature=[\r\n {\r\n \"pixel_values\": tf.TensorSpec((None, 3, None, None), tf.float32, name=\"pixel_values\"),\r\n }\r\n ]\r\n )\r\n```\r\n\r\nI run the test with \r\n```pytest tests/models/{model_name}/test_modeling_tf_{model_name}.py::TF{ModelName}ModelTest::test_prepare_serving_output```",
"Thanks. A follow-up question. So if you run with (`pytest tests/models/{model_name}/test_modeling_tf_{model_name}.py::TF{ModelName}ModelTest::test_prepare_serving_output`) how does it get the hardcoded value for channels? Or do you first hard-code it and then run it? ",
"> Thanks. A follow-up question. So if you run with (`pytest tests/models/{model_name}/test_modeling_tf_{model_name}.py::TF{ModelName}ModelTest::test_prepare_serving_output`) how does it get the hardcoded value for channels? Or do you first hard-code it and then run it?\r\n\r\n@sayakpaul Yes, I hardcode then run the tests. In this case, they pass. ",
"Before merge, could you measure the timing for the tests `test_saved_model_creation` on **CPU**? You can run like\r\n\r\n```python\r\npython -m pytest -v tests -k \"test_saved_model_creation\" --durations=0 --make-reports=tests_timing\r\n```\r\nand copy-paste the results from `reports/tests_timing/durations.txt `",
"@ydshieh We don't run the slow tests on CPU, only on GPU/multi-GPU.",
"> @ydshieh We don't run the slow tests on CPU, only on GPU/multi-GPU.\r\n\r\nI should double check the latest version. My memory was in a previoius commit `Add in tests (505cb774b1b7eb5c9a6c8e2bc63f12061824b8bd)` while I asked the question 😢 ",
"@amyeroberts The `attention_mask` and `token_type_ids` in TFHubert / TFWav2Vec2 should be `int32` I believe. I think we don't put this clearly in their docstrings in TF models, but we have this information in their PyTorch model files.",
"@ydshieh I know from @sgugger's comment, we don't run on CPU, but I ran the tests for reference (Macbook Pro 2021 M1 Max)\r\nBased on this I disabled the test for Swin. The slowest tests - `test_saved_model_creation_extended` - are independent of this PR. \r\n\r\n```\r\nslowest durations\r\n242.92s call tests/models/convbert/test_modeling_tf_convbert.py::TFConvBertModelTest::test_saved_model_creation_extended\r\n202.38s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_saved_model_creation_extended\r\n177.82s call tests/models/gptj/test_modeling_tf_gptj.py::TFGPTJModelTest::test_saved_model_creation_extended\r\n150.56s call tests/models/bart/test_modeling_tf_bart.py::TFBartModelTest::test_saved_model_creation_extended\r\n82.65s call tests/models/gpt2/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_saved_model_creation_extended\r\n61.12s call tests/models/lxmert/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_saved_model_creation_extended\r\n26.88s call tests/models/swin/test_modeling_tf_swin.py::TFSwinModelTest::test_saved_model_creation\r\n20.14s call tests/models/clip/test_modeling_tf_clip.py::TFCLIPTextModelTest::test_saved_model_creation_extended\r\n19.51s call tests/models/convbert/test_modeling_tf_convbert.py::TFConvBertModelTest::test_saved_model_creation\r\n19.40s call tests/models/gptj/test_modeling_tf_gptj.py::TFGPTJModelTest::test_saved_model_creation\r\n18.54s call tests/models/deberta_v2/test_modeling_tf_deberta_v2.py::TFDebertaModelTest::test_saved_model_creation\r\n16.77s call tests/models/speech_to_text/test_modeling_tf_speech_to_text.py::TFSpeech2TextModelTest::test_saved_model_creation\r\n16.72s call tests/models/clip/test_modeling_tf_clip.py::TFCLIPVisionModelTest::test_saved_model_creation_extended\r\n14.51s call tests/models/roformer/test_modeling_tf_roformer.py::TFRoFormerModelTest::test_saved_model_creation\r\n14.37s call tests/models/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_saved_model_creation\r\n13.15s call tests/models/wav2vec2/test_modeling_tf_wav2vec2.py::TFWav2Vec2ModelTest::test_saved_model_creation\r\n13.12s call tests/models/hubert/test_modeling_tf_hubert.py::TFHubertModelTest::test_saved_model_creation\r\n12.79s call tests/models/mpnet/test_modeling_tf_mpnet.py::TFMPNetModelTest::test_saved_model_creation\r\n12.47s call tests/models/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_saved_model_creation\r\n12.16s call tests/models/layoutlm/test_modeling_tf_layoutlm.py::TFLayoutLMModelTest::test_saved_model_creation\r\n12.16s call tests/models/electra/test_modeling_tf_electra.py::TFElectraModelTest::test_saved_model_creation\r\n12.12s call tests/models/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_saved_model_creation\r\n11.92s call tests/models/flaubert/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_saved_model_creation\r\n11.54s call tests/models/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_saved_model_creation\r\n11.42s call tests/models/bert/test_modeling_tf_bert.py::TFBertModelTest::test_saved_model_creation\r\n11.36s call tests/models/rembert/test_modeling_tf_rembert.py::TFRemBertModelTest::test_saved_model_creation\r\n11.12s call tests/models/distilbert/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_saved_model_creation\r\n10.85s call tests/models/ctrl/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_saved_model_creation\r\n10.77s call tests/models/roberta/test_modeling_tf_roberta.py::TFRobertaModelTest::test_saved_model_creation\r\n10.51s call 
tests/models/wav2vec2/test_modeling_tf_wav2vec2.py::TFWav2Vec2RobustModelTest::test_saved_model_creation\r\n10.44s call tests/models/gpt2/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_saved_model_creation\r\n10.30s call tests/models/hubert/test_modeling_tf_hubert.py::TFHubertRobustModelTest::test_saved_model_creation\r\n10.23s call tests/models/deit/test_modeling_tf_deit.py::TFDeiTModelTest::test_saved_model_creation\r\n10.20s call tests/models/t5/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation\r\n10.14s call tests/models/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_creation\r\n10.14s call tests/models/clip/test_modeling_tf_clip.py::TFCLIPTextModelTest::test_saved_model_creation\r\n9.53s call tests/models/openai/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_saved_model_creation\r\n9.11s call tests/models/vit_mae/test_modeling_tf_vit_mae.py::TFViTMAEModelTest::test_saved_model_creation\r\n8.71s call tests/models/vit/test_modeling_tf_vit.py::TFViTModelTest::test_saved_model_creation\r\n8.68s call tests/models/clip/test_modeling_tf_clip.py::TFCLIPVisionModelTest::test_saved_model_creation\r\n8.44s call tests/models/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_saved_model_creation\r\n8.14s call tests/models/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_saved_model_creation\r\n7.54s call tests/models/data2vec/test_modeling_tf_data2vec_vision.py::TFData2VecVisionModelTest::test_saved_model_creation\r\n6.70s call tests/models/convnext/test_modeling_tf_convnext.py::TFConvNextModelTest::test_saved_model_creation\r\n6.21s call tests/models/regnet/test_modeling_tf_regnet.py::TFRegNetModelTest::test_saved_model_creation\r\n4.62s call tests/models/resnet/test_modeling_tf_resnet.py::ResNetModelTest::test_saved_model_creation\r\n```",
"I noticed a cool thing -- assuming the models with `@tooslow` also pass the tests (which I'm assuming they do, from [this](https://github.com/huggingface/transformers/pull/18153#issuecomment-1191706099) comment), this PR fixes:\r\n1. https://github.com/huggingface/transformers/issues/17233, as the problematic line was rewritten with TF code in this PR \r\n2. https://github.com/huggingface/transformers/issues/17285, as we know we can create a `SavedModel` (which contains a graph) for all models. \r\n\r\n@amyeroberts can you add these issues to your `Fixes` list above? :D",
"@gante Nice spot :D Yep - I just double checked, and all the models with the @tooslow decorator can be saved with `saved_model=True` - and so their graph can be built. Added the fixes. \r\n\r\nThe only model we can't save at the moment is CLIP, due to the nested dict of outputs. \r\n\r\n",
"@ydshieh @Rocketknight1 Am I OK to merge? "
] | 1,657
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
Fixes and adds any missing `serving` and `serving_output` code to our TF models to enable
`model.save_pretrained(path, saved_model=True)`.
I've added comments throughout the code to explain any areas where the TF refactor might not be obvious.
I'm aware the diff of this PR is quite big, but most of it is repetitive changes made to enable tests to pass, so I hope that's acceptable.
## Specifically:
**1. Adds missing serving logic to: ResNet, Swin, TAPAS**
**2. Adds missing `serving_output` logic to models**
Some vision models didn't have `serving_output` implemented - `serving` returned the model outputs directly. Implementing it was necessary to enable testing (see 4.) and to stay consistent with the rest of the library.
**3. Updates or adds the `input_signature` decorator for models**
**4. Adds a test to check `serving_output` is implemented and return types are as expected**
We can't test `model.serving` directly i.e. this is not possible:
```
model = model_class(config)
inputs = self._prepare_for_class(inputs_dict, model_class)
serving_outputs = model.serving(inputs)
```
Running this on vision models raises the following:
```
E ValueError: The channel dimension of the inputs should be defined. The input_shape received is (None, None, None, None), where axis -1 (0-based) is the channel dimension, which found to be `None`.
```
This is because the input signature defined in the `tf.function` decorator for the `serving` method has all of the input dimensions defined as `None`:
```
@tf.function(input_signature=[{
"pixel_values": tf.TensorSpec((None, None, None, None), tf.float32, name="pixel_values"),
}]
)
def serving(self, inputs):
....
```
We can't hard code the channel dimensions as e.g. `3` as we want to support both RGB and greyscale images (although testing this locally does work).
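For context, this is the end state the PR enables for a vision model (a minimal sketch; the checkpoint name is just an example):
```python
from transformers import TFResNetForImageClassification

model = TFResNetForImageClassification.from_pretrained("microsoft/resnet-50")
# With a working serving signature in place, the SavedModel export below no longer fails:
model.save_pretrained("saved_resnet", saved_model=True)
```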
**5. Moves `test_saved_model_creation` back into `test_modeling_tf_common` and adds explicit skips**
There were quite a few models that couldn't be saved with `model.save_pretrained(path, saved_model=True)` and quite a few whose input signature or return types from `serving_output` were broken or inconsistent with the model.
I think this is in part because the relevant test was moved to only be applied to certain core models, and those models didn't explicitly skip. See [#14415](https://github.com/huggingface/transformers/pull/14415), [#9478](https://github.com/huggingface/transformers/pull/9478)
I've:
* added it back to common so that it's applied to models by default. CI is currently running and passing.
* added the `unittest.skip` decorator so the test is counted as skipped rather than passed on all models that were previously skipping it.
**6. Updates logic in models such that their graph can be created and saved.**
Adds serving logic to enable saving of models and ensures their outputs are transformed in line with the rest of the library.
## Fixes
https://github.com/huggingface/transformers/issues/18179
https://github.com/huggingface/transformers/issues/18164
https://github.com/huggingface/transformers/issues/17233
https://github.com/huggingface/transformers/issues/17285
https://discuss.huggingface.co/t/tfresnetforimageclassification-fails-with-save-pretrained-when-saved-model-is-true/20404
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18153/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18153",
"html_url": "https://github.com/huggingface/transformers/pull/18153",
"diff_url": "https://github.com/huggingface/transformers/pull/18153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18153.patch",
"merged_at": 1658509538000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18152
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18152/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18152/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18152/events
|
https://github.com/huggingface/transformers/pull/18152
| 1,306,347,261
|
PR_kwDOCUB6oc47fMdd
| 18,152
|
dalle mega
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"@patil-suraj - I can take over the PR if you want :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,664
| null |
MEMBER
| null |
# What does this PR do?
This PR adds the DalleMega model from [dalle-mini](https://github.com/borisdayma/dalle-mini) for text-to-image generation.
The VQGAN model required for converting the tokens to an image is in PR #18150.
- [ ] override the `sample` method for classifier-free guidance (a conceptual sketch follows the task list below)
- [ ] port and upload weights on the hub
- [ ] add tests
- [ ] add docs
- [ ] boom!
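Conceptual sketch of the classifier-free guidance step mentioned above (not the PR's actual code; the names are illustrative):
```python
import torch

def cfg_logits(cond_logits: torch.Tensor, uncond_logits: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    # Mix conditional and unconditional logits: a scale > 1 pushes generation
    # toward the text-conditional distribution and away from the unconditional one.
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)
```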
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18152/reactions",
"total_count": 9,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 4,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18152/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18152",
"html_url": "https://github.com/huggingface/transformers/pull/18152",
"diff_url": "https://github.com/huggingface/transformers/pull/18152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18152.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18151
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18151/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18151/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18151/events
|
https://github.com/huggingface/transformers/issues/18151
| 1,306,156,425
|
I_kwDOCUB6oc5N2l2J
| 18,151
|
Confusing documentation for argument class_labels in MaskFormerForInstanceSegmentation.forward()
|
{
"login": "morrisalp",
"id": 8263996,
"node_id": "MDQ6VXNlcjgyNjM5OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morrisalp",
"html_url": "https://github.com/morrisalp",
"followers_url": "https://api.github.com/users/morrisalp/followers",
"following_url": "https://api.github.com/users/morrisalp/following{/other_user}",
"gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions",
"organizations_url": "https://api.github.com/users/morrisalp/orgs",
"repos_url": "https://api.github.com/users/morrisalp/repos",
"events_url": "https://api.github.com/users/morrisalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/morrisalp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
I'm trying to fine-tune MaskFormer for an instance segmentation problem. The documentation for MaskFormerForInstanceSegmentation.forward() lists the following optional parameters:
* mask_labels (List[torch.Tensor], optional) — List of mask labels of shape (num_labels, height, width) to be fed to a model
* class_labels (List[torch.LongTensor], optional) — list of target class labels of shape (num_labels, height, width) to be fed to a model. They identify the labels of mask_labels, e.g. the label of mask_labels[i][j] if class_labels[i][j].
The wording is confusing, especially at the end -- "the label of mask_labels[i][j] if class_labels[i][j]" is missing a verb.
Additionally, other MaskFormer classes in the API accept `class_labels` of shape `(num_labels,)` - one class label for each mask - e.g. the forward() method of MaskFormerLoss. It's not clear why the documented shape is different in this case.
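For concreteness, here is how I understand the intended call (a sketch; the checkpoint name, image size, and label ids are placeholders):
```python
import torch
from transformers import MaskFormerForInstanceSegmentation

model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
pixel_values = torch.randn(1, 3, 384, 384)
# one binary mask per object instance: shape (num_labels, height, width)
mask_labels = [torch.randint(0, 2, (2, 384, 384)).float()]
# one class id per mask: shape (num_labels,), not (num_labels, height, width)
class_labels = [torch.tensor([3, 17])]
outputs = model(pixel_values=pixel_values, mask_labels=mask_labels, class_labels=class_labels)
print(outputs.loss)
```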
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18151/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18150
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18150/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18150/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18150/events
|
https://github.com/huggingface/transformers/pull/18150
| 1,306,121,684
|
PR_kwDOCUB6oc47ebet
| 18,150
|
Add VQGAN
|
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"@patil-suraj note that you can use the current main version of diffusers as a reference of how the code should look like and you can use the conversion script to covert the official weights",
"Taking over this PR "
] | 1,657
| 1,664
| null |
MEMBER
| null |
# What does this PR do?
Adds the VQGAN model, the first step towards adding the DalleMega model to Transformers.
- This model is different from most of the models available in `Transformers`: it's a U-Net-like encoder-decoder architecture with a vector-quantizer bottleneck.
- This is only the generator part of the GAN, intended only for inference.
- It does not have the common transformer-style embeddings, blocks, and other attributes.
- Currently it does not support `output_hidden_states` and `output_attentions`, since this is a complex architecture and it's not clear which `hidden_states` to return. Would love to hear your thoughts on whether we should support this. A conceptual sketch of the quantizer bottleneck follows below.
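For reviewers unfamiliar with the bottleneck, a minimal sketch of vector quantization (conceptual only, not this PR's implementation):
```python
import torch

def vector_quantize(z: torch.Tensor, codebook: torch.Tensor):
    # z: (batch, height, width, dim) continuous encoder output
    # codebook: (num_codes, dim) learned embedding table
    flat = z.reshape(-1, z.shape[-1])                  # (N, dim)
    indices = torch.cdist(flat, codebook).argmin(-1)   # nearest code per vector
    quantized = codebook[indices].reshape(z.shape)     # snap to codebook entries
    return quantized, indices.reshape(z.shape[:-1])    # discrete token ids
```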
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18150/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18150/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18150",
"html_url": "https://github.com/huggingface/transformers/pull/18150",
"diff_url": "https://github.com/huggingface/transformers/pull/18150.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18150.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18149
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18149/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18149/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18149/events
|
https://github.com/huggingface/transformers/issues/18149
| 1,305,999,181
|
I_kwDOCUB6oc5N1_dN
| 18,149
|
Inference for TFMarianMTModel (en to Romance language translation) is slow and inaccurate
|
{
"login": "danielenricocahall",
"id": 33044223,
"node_id": "MDQ6VXNlcjMzMDQ0MjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/33044223?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danielenricocahall",
"html_url": "https://github.com/danielenricocahall",
"followers_url": "https://api.github.com/users/danielenricocahall/followers",
"following_url": "https://api.github.com/users/danielenricocahall/following{/other_user}",
"gists_url": "https://api.github.com/users/danielenricocahall/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danielenricocahall/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danielenricocahall/subscriptions",
"organizations_url": "https://api.github.com/users/danielenricocahall/orgs",
"repos_url": "https://api.github.com/users/danielenricocahall/repos",
"events_url": "https://api.github.com/users/danielenricocahall/events{/privacy}",
"received_events_url": "https://api.github.com/users/danielenricocahall/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for missing this! Could you take a look at this, @gante, @Rocketknight1, @ydshieh?",
"Let me take a look on the quality issue. And possibly @gante or @Rocketknight1 for the speed issue, let's discuss it :-)",
"Actually, the performance issue comes from the quality issue. The TF version didn't stop the generation until 512 tokens.\r\n\r\n```bash\r\n[[65000 25 2092 7 179 15 276 185 7 227 32 9\r\n 2 2538 15 5716 2 2538 15 5716 2 2538 15 15\r\n 15 15 15 15 15 15 15 15 15 15 15 15 ............\r\n 0]], shape=(1, 512), dtype=int32)\r\n```",
"I believe the current PT / TF checkpoints for \"Helsinki-NLP/opus-mt-en-ROMANCE\" doesn't contain the same weight.\r\nAs if I change from\r\n\r\n```\r\nmodel = TFMarianMTModel.from_pretrained(model_name)\r\n```\r\nto\r\n```\r\nmodel = TFMarianMTModel.from_pretrained(model_name, from_pt=True)\r\n```\r\nI could get \r\n```\r\n[[65000 21959 3 0 65000 65000 65 ....] (still `512` tokens)\r\n```\r\nwhile the PyTorch version gives\r\n```\r\ntensor([[65000, 21959, 3, 0]])\r\n```\r\n\r\nSo:\r\n - we probably need to check which checkpoint is the correct one, and uploaded the new checkpoint\r\n - investigate why `TFMarianMTModel` doesn't stop earlier.",
"After a double check (see code below, where I use `from_pt=True`), I believe the current PT checkpoint is the correct one, but not the TF checkpoint.\r\n\r\n@gante Would you like to have a look too, upload a new TF checkpoint, and see why `TFMarianMTModel` doesn't stop the generation earlier as `MarianMTModel` does?\r\n\r\n```\r\nfrom transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel\r\n\r\nmodel_name = \"Helsinki-NLP/opus-mt-en-ROMANCE\"\r\ntokenizer = MarianTokenizer.from_pretrained(model_name)\r\n\r\n# text_in = ['>>fr<< hello']\r\ntext_in = ['>>fr<< Hello, I am a student.']\r\n\r\nmodel = MarianMTModel.from_pretrained(model_name)\r\n\r\nbatch = tokenizer(text_in, return_tensors='pt', padding=True)\r\ntranslated = model.generate(**batch)\r\no = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nprint(translated)\r\nprint(o)\r\n\r\nmodel = TFMarianMTModel.from_pretrained(model_name, from_pt=True)\r\n\r\nbatch = tokenizer(text_in, return_tensors='tf', padding=True)\r\ntranslated = model.generate(**batch)\r\no = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nprint(translated)\r\nprint(o)\r\n\r\n\r\ntext_in = ['>>it<< I love dogs and cats.']\r\n\r\n\r\nmodel = MarianMTModel.from_pretrained(model_name)\r\n\r\nbatch = tokenizer(text_in, return_tensors='pt', padding=True)\r\ntranslated = model.generate(**batch)\r\no = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nprint(translated)\r\nprint(o)\r\n\r\nmodel = TFMarianMTModel.from_pretrained(model_name, from_pt=True)\r\n\r\nbatch = tokenizer(text_in, return_tensors='tf', padding=True)\r\ntranslated = model.generate(**batch)\r\no = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nprint(translated)\r\nprint(o)\r\n```",
"Hi there @ydshieh @danielenricocahall 👋 \r\n\r\nNone of the Marian models can be successfully converted to TF -- they all fail when validating the hidden layers and outputs of the models. This is a shame since there are a ton of Marian models for translation :(\r\n\r\nThis means there is something wrong with either the model architecture or with weight cross-loading. I haven't looked into it, other than noticing the issue when attempting to convert the weights from `Helsinki-NLP`",
"Thank you for looking into it @ydshieh and @gante !!! This is great information.",
"@danielenricocahall a fix was merged and new weights were pushed -- if you run from `main`, the translations should be much better now 🙌 ",
"cc @gante \r\n\r\nWe still have the generation issue \r\n\r\n```python\r\nfrom transformers import MarianMTModel, MarianTokenizer, TFMarianMTModel\r\n\r\nmodel_name = \"Helsinki-NLP/opus-mt-en-ROMANCE\"\r\ntokenizer = MarianTokenizer.from_pretrained(model_name)\r\ntext_in = ['>>fr<< hello']\r\n\r\n# PT generates a few tokens then stops early -> very fast \r\nmodel = MarianMTModel.from_pretrained(model_name)\r\nbatch = tokenizer(text_in, return_tensors='pt', padding=True)\r\ntranslated = model.generate(**batch)\r\no = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nprint(translated)\r\nprint(o)\r\n\r\n# TF generates 512 tokens, although the decoded version gives the same result as PT -> very slow\r\nmodel = TFMarianMTModel.from_pretrained(model_name, from_pt=False)\r\nbatch = tokenizer(text_in, return_tensors='tf', padding=True)\r\ntranslated = model.generate(**batch)\r\no = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nprint(translated)\r\nprint(o)\r\n\r\n```",
"@ydshieh Hi, I am experiencing the same issue. Expected the TF version would be faster than the PT version."
] | 1,657
| 1,675
| 1,675
|
NONE
| null |
### System Info
**System**
macOS Monterey 12.2.1
```
transformers==4.20.1
tensorflow==2.9.1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import TFMarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)
text_in = ['>>fr<< hello']
batch = tokenizer(text_in, return_tensors='tf', padding=True)
translated = model.generate(**batch)
```
Output:
```
- Qu'est-ce qu'il y a, là-bas, là-bas, là---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
```
### Expected behavior
I would expect similar performance to the PyTorch model.
Inference requires about 120s on my machine and outputs an incorrect translation. In contrast, the PyTorch model (replacing `TFMarianMTModel` with `MarianMTModel` and changing `return_tensors` to `'pt'` in the code snippet) returns the correct translation ("Bonjour") and inference requires about 6s on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18149/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18148
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18148/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18148/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18148/events
|
https://github.com/huggingface/transformers/issues/18148
| 1,305,802,143
|
I_kwDOCUB6oc5N1PWf
| 18,148
|
core dumped
|
{
"login": "TomasAndersonFang",
"id": 38727343,
"node_id": "MDQ6VXNlcjM4NzI3MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TomasAndersonFang",
"html_url": "https://github.com/TomasAndersonFang",
"followers_url": "https://api.github.com/users/TomasAndersonFang/followers",
"following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}",
"gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions",
"organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs",
"repos_url": "https://api.github.com/users/TomasAndersonFang/repos",
"events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}",
"received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I find that my problem is produced by cuda version. It seems that GTX 3090 is not compatible with cuda 10 and torch installed by pip has the default cuda version (10). So I re-install pytorch with cuda 11 and solve the problem."
] | 1,657
| 1,657
| 1,657
|
NONE
| null |
### System Info
```shell
centos7
transformers==4.17.0
datasets==1.17.0
pytorch==1.6.0
GPUs==4*GTX 3090
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I modified the example code for text classification (run_glue.py) and trained it on my own dataset. When I run this code on the CPU, it runs fine (setting no_cuda to true). But when I run it on GPUs, it quits after finishing loading the data and prints the following information:
run_burebert_ft.sh: line 16: 37055 Aborted (core dumped) CUDA_LAUNCH_BLOCKING=1 python run_br_pred.py --model_name_or_path ../plm/BureBERT --train_file ../dataset/priority_pred_data/priority_train.csv --validation_file ../dataset/priority_pred_data/priority_valid.csv --test_file ../dataset/priority_pred_data/priority_test.csv --cache_dir ./cache_dir --do_train --do_eval --do_predict --no_cuda false --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 5e-6 --num_train_epochs 10 --save_steps 10000 --output_dir ./results_priority
I am very confused because it gives only limited information. I also ran the same code on another server with a single Tesla V100 GPU, where it runs fine. So I wonder whether extra settings are required when training on multiple GPUs.
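A quick compatibility check that may narrow this down (a sketch; the GTX 3090 is an sm_86 card and needs a CUDA 11 build of PyTorch, which the default pip wheel of torch 1.6 is not):
```python
import torch

print(torch.__version__)                    # e.g. 1.6.0
print(torch.version.cuda)                   # toolkit the wheel was built with; 10.x lacks sm_86 support
print(torch.cuda.get_device_capability(0))  # (8, 6) for a GTX 3090
```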
### Expected behavior
```shell
When I run the code, it should start to fine-tune BureBERT on my dataset for ten epochs.
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18148/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18147
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18147/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18147/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18147/events
|
https://github.com/huggingface/transformers/pull/18147
| 1,305,787,315
|
PR_kwDOCUB6oc47dTs2
| 18,147
|
[HPO] update to sigopt new experiment api
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@kding1 @yao-matrix @sgugger please have a review",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sywangyi, thanks for your contribution 🤗.\r\n\r\nThe current changes introduce a significant breaking change and we tend to limit them as much as possible.\r\nStill, we though about this and we would like to propose 3 options in order to keep backward compatibility with previous sigopt versions:\r\n\r\n1. Introduce a `if sigopt_version < x.y.z` -> previous behaviour `else` -> new behaviour in this PR\r\n2. Split the current implement and the one introduced in this PR in two distinct functions and dispatch to the right one based on the sigopt version\r\n3. Keep only the behaviour introduced in this PR but guard with a version check, raising an error if the version is too old to inform the user to upgrade its dependency.\r\n\r\nPlease pick one of the suggested options above (_or comment with potential other alternatives 🤓_) and we will be on track for merging 🤗.\r\n\r\nThanks, \r\nMorgan",
"@mfuntowicz hi, thanks for the suggestion, and choose option 1 and patch is uploaded",
"Thanks a lot @sywangyi!\r\n\r\nIt looks good to me, the failure in the CI seems related but I will let @sgugger have the final word 😃 "
] | 1,657
| 1,666
| 1,658
|
CONTRIBUTOR
| null |
* follow https://docs.sigopt.com/experiments
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/18145
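A sketch of the version dispatch implemented here (option 1 from the review discussion; the two helpers are placeholders for the legacy and new code paths, and the `8.0.0` cut-off is my assumption about where the new Experiments API landed):
```python
import importlib.metadata

from packaging import version


def run_hp_search_sigopt(*args, **kwargs):
    # Dispatch on the installed sigopt version to keep backward compatibility.
    if version.parse(importlib.metadata.version("sigopt")) >= version.parse("8.0.0"):
        return _run_with_new_experiments_api(*args, **kwargs)  # placeholder helper
    return _run_with_legacy_connection(*args, **kwargs)  # placeholder helper
```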
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18147/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18147",
"html_url": "https://github.com/huggingface/transformers/pull/18147",
"diff_url": "https://github.com/huggingface/transformers/pull/18147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18147.patch",
"merged_at": 1658150380000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18146
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18146/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18146/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18146/events
|
https://github.com/huggingface/transformers/issues/18146
| 1,305,786,327
|
I_kwDOCUB6oc5N1LfX
| 18,146
|
MLflow fails to log to a tracking server
|
{
"login": "juliensimon",
"id": 3436143,
"node_id": "MDQ6VXNlcjM0MzYxNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3436143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juliensimon",
"html_url": "https://github.com/juliensimon",
"followers_url": "https://api.github.com/users/juliensimon/followers",
"following_url": "https://api.github.com/users/juliensimon/following{/other_user}",
"gists_url": "https://api.github.com/users/juliensimon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juliensimon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliensimon/subscriptions",
"organizations_url": "https://api.github.com/users/juliensimon/orgs",
"repos_url": "https://api.github.com/users/juliensimon/repos",
"events_url": "https://api.github.com/users/juliensimon/events{/privacy}",
"received_events_url": "https://api.github.com/users/juliensimon/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"I'm not the one who wrote or supports the ML Flow callback :-)",
"@noise-field wrote the integration two years ago, do you have an idea of why it doesn't seem to work anymore @noise-field?",
"@juliensimon, I had an error message similar (I think). I found that the issue was related to values with empty string values (https://github.com/mlflow/mlflow/issues/6253), and it looks like there is a patch in the upcoming MLFLOW version 1.28 (not yet released)\r\n\r\nIn my case, I had to set `mp_parameters` to `None` instead of leaving it as an empty string (the default value), and I see your error message has `{'key': 'mp_parameters', 'value': ''}`.\r\n\r\nWhile later MLflow version fix will address this issue, I think setting the `mp_parameters` to `None` instead of an empty string is cleaner. However, I'm not sure about the extent of this change.\r\n\r\n",
"OK, I'll give it a try and I'll let you know.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,662
| 1,662
|
NONE
| null |
### System Info
Python 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:56:21)
print(transformers.__version__)
4.20.1
print(mlflow.__version__)
1.27.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install mlflow
2. Configure a vanilla training job to use a tracking server (os.environ["MLFLOW_TRACKING_URI"]="...")
3. Run the job
You should see an error similar to:
```
Traceback (most recent call last):
File "/home/ubuntu/train.py", line 45, in <module>
trainer.train()
File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/trainer.py", line 1409, in train
return inner_training_loop(
File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/trainer.py", line 1580, in _inner_training_loop
self.control = self.callback_handler.on_train_begin(args, self.state, self.control)
File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/trainer_callback.py", line 347, in on_train_begin
return self.call_event("on_train_begin", args, state, control)
File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/trainer_callback.py", line 388, in call_event
result = getattr(callback, event)(
File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/integrations.py", line 856, in on_train_begin
self.setup(args, state, model)
File "/home/ubuntu/.local/lib/python3.9/site-packages/transformers/integrations.py", line 847, in setup
self._ml_flow.log_params(dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH]))
File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/tracking/fluent.py", line 675, in log_params
MlflowClient().log_batch(run_id=run_id, metrics=[], params=params_arr, tags=[])
File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/tracking/client.py", line 918, in log_batch
self._tracking_client.log_batch(run_id, metrics, params, tags)
File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/tracking/_tracking_service/client.py", line 315, in log_batch
self.store.log_batch(
File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/store/tracking/rest_store.py", line 309, in log_batch
self._call_endpoint(LogBatch, req_body)
File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/store/tracking/rest_store.py", line 56, in _call_endpoint
return call_endpoint(self.get_host_creds(), endpoint, method, json_body, response_proto)
File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/utils/rest_utils.py", line 256, in call_endpoint
response = verify_rest_response(response, endpoint)
File "/home/ubuntu/.local/lib/python3.9/site-packages/mlflow/utils/rest_utils.py", line 185, in verify_rest_response
raise RestException(json.loads(response.text))
mlflow.exceptions.RestException: INVALID_PARAMETER_VALUE: Invalid value [{'key': 'logging_nan_inf_filter', 'value': 'True'}, {'key': 'save_strategy', 'value': 'epoch'}, {'key': 'save_steps', 'value': '500'}, {'key': 'save_total_limit', 'value': 'None'}, {'key': 'save_on_each_node', 'value': 'False'}, {'key': 'no_cuda', 'value': 'False'}, {'key': 'seed', 'value': '42'}, {'key': 'data_seed', 'value': 'None'}, {'key': 'jit_mode_eval', 'value': 'False'}, {'key': 'use_ipex', 'value': 'False'}, {'key': 'bf16', 'value': 'False'}, {'key': 'fp16', 'value': 'False'}, {'key': 'fp16_opt_level', 'value': 'O1'}, {'key': 'half_precision_backend', 'value': 'auto'}, {'key': 'bf16_full_eval', 'value': 'False'}, {'key': 'fp16_full_eval', 'value': 'False'}, {'key': 'tf32', 'value': 'None'}, {'key': 'local_rank', 'value': '-1'}, {'key': 'xpu_backend', 'value': 'None'}, {'key': 'tpu_num_cores', 'value': 'None'}, {'key': 'tpu_metrics_debug', 'value': 'False'}, {'key': 'debug', 'value': '[]'}, {'key': 'dataloader_drop_last', 'value': 'False'}, {'key': 'eval_steps', 'value': 'None'}, {'key': 'dataloader_num_workers', 'value': '0'}, {'key': 'past_index', 'value': '-1'}, {'key': 'run_name', 'value': './output'}, {'key': 'disable_tqdm', 'value': 'False'}, {'key': 'remove_unused_columns', 'value': 'True'}, {'key': 'label_names', 'value': 'None'}, {'key': 'load_best_model_at_end', 'value': 'False'}, {'key': 'metric_for_best_model', 'value': 'None'}, {'key': 'greater_is_better', 'value': 'None'}, {'key': 'ignore_data_skip', 'value': 'False'}, {'key': 'sharded_ddp', 'value': '[]'}, {'key': 'fsdp', 'value': '[]'}, {'key': 'fsdp_min_num_params', 'value': '0'}, {'key': 'deepspeed', 'value': 'None'}, {'key': 'label_smoothing_factor', 'value': '0.0'}, {'key': 'optim', 'value': 'adamw_hf'}, {'key': 'adafactor', 'value': 'False'}, {'key': 'group_by_length', 'value': 'False'}, {'key': 'length_column_name', 'value': 'length'}, {'key': 'report_to', 'value': "['mlflow']"}, {'key': 'ddp_find_unused_parameters', 'value': 'None'}, {'key': 'ddp_bucket_cap_mb', 'value': 'None'}, {'key': 'dataloader_pin_memory', 'value': 'True'}, {'key': 'skip_memory_metrics', 'value': 'True'}, {'key': 'use_legacy_prediction_loop', 'value': 'False'}, {'key': 'push_to_hub', 'value': 'False'}, {'key': 'resume_from_checkpoint', 'value': 'None'}, {'key': 'hub_model_id', 'value': 'None'}, {'key': 'hub_strategy', 'value': 'every_save'}, {'key': 'hub_token', 'value': '<HUB_TOKEN>'}, {'key': 'hub_private_repo', 'value': 'False'}, {'key': 'gradient_checkpointing', 'value': 'False'}, {'key': 'include_inputs_for_metrics', 'value': 'False'}, {'key': 'fp16_backend', 'value': 'auto'}, {'key': 'push_to_hub_model_id', 'value': 'None'}, {'key': 'push_to_hub_organization', 'value': 'None'}, {'key': 'push_to_hub_token', 'value': '<PUSH_TO_HUB_TOKEN>'}, {'key': '_n_gpu', 'value': '1'}, {'key': 'mp_parameters', 'value': ''}, {'key': 'auto_find_batch_size', 'value': 'False'}, {'key': 'full_determinism', 'value': 'False'}, {'key': 'torchdynamo', 'value': 'None'}, {'key': 'ray_scope', 'value': 'last'}] for parameter 'params' supplied. Hint: Value was of type 'list'. See the API docs for more information about request parameters.
```
Training script:
```python
import os
import numpy as np
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, Trainer, TrainingArguments, AutoModelForSequenceClassification
train_dataset, test_dataset = load_dataset("imdb", split=['train', 'test'])
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
train_dataset = train_dataset.map(tokenize_function, batched=True)
test_dataset = test_dataset.map(tokenize_function, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-cased", num_labels=2)
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
os.environ["HF_MLFLOW_LOG_ARTIFACTS"]="1"
os.environ["MLFLOW_EXPERIMENT_NAME"]="trainer-mlflow-demo"
os.environ["MLFLOW_FLATTEN_PARAMS"]="1"
#os.environ["MLFLOW_TRACKING_URI"]=<MY_SERVER IP>
training_args = TrainingArguments(
num_train_epochs=1,
output_dir="./output",
logging_steps=500,
save_strategy="epoch",
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
compute_metrics=compute_metrics
)
trainer.train()
```
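A possible workaround sketch while this is investigated (assumption: skipping the MLflow callback is acceptable): setting `report_to=[]` disables the integration entirely.
```python
# Hypothetical workaround: report_to=[] skips the MLflowCallback, so no
# parameters are sent to the tracking server at all.
from transformers import TrainingArguments

training_args = TrainingArguments(
    num_train_epochs=1,
    output_dir="./output",
    logging_steps=500,
    save_strategy="epoch",
    report_to=[],
)
```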
### Expected behavior
I would expect logging to work :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18146/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18145
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18145/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18145/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18145/events
|
https://github.com/huggingface/transformers/issues/18145
| 1,305,771,904
|
I_kwDOCUB6oc5N1H-A
| 18,145
|
The SigOpt API used in transformers trainer.py is outdated; the old API no longer works
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.21.0.dev0
- Platform: Linux-5.8.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Enable SigOpt HPO in the example and run it.
2. The log shows a warning like: "UserWarning: You're currently using the old SigOpt Experience. Try out the new and improved SigOpt experience by getting started with the docs today. You have until July 2022 to migrate over without experiencing breaking changes."
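For context, a minimal sketch of how the SigOpt backend is typically invoked (setup names here are illustrative, not from the report):
```python
# Sketch (assumed setup): HPO through the Trainer API with the SigOpt backend,
# which is where the deprecated SigOpt client calls are triggered.
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init():
    return AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-cased", num_labels=2
    )

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="./hpo_output"),
    train_dataset=train_dataset,  # assumed: a tokenized dataset defined elsewhere
    eval_dataset=eval_dataset,    # assumed: a tokenized dataset defined elsewhere
)
best_run = trainer.hyperparameter_search(backend="sigopt", n_trials=10)
```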
### Expected behavior
HPO with the SigOpt backend should work correctly, without this warning.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18145/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18144
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18144/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18144/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18144/events
|
https://github.com/huggingface/transformers/pull/18144
| 1,305,575,877
|
PR_kwDOCUB6oc47cmwD
| 18,144
|
Fix typo in pipelines/base.py
|
{
"login": "SeonbeomKim",
"id": 12165303,
"node_id": "MDQ6VXNlcjEyMTY1MzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12165303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeonbeomKim",
"html_url": "https://github.com/SeonbeomKim",
"followers_url": "https://api.github.com/users/SeonbeomKim/followers",
"following_url": "https://api.github.com/users/SeonbeomKim/following{/other_user}",
"gists_url": "https://api.github.com/users/SeonbeomKim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeonbeomKim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeonbeomKim/subscriptions",
"organizations_url": "https://api.github.com/users/SeonbeomKim/orgs",
"repos_url": "https://api.github.com/users/SeonbeomKim/repos",
"events_url": "https://api.github.com/users/SeonbeomKim/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeonbeomKim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
NONE
| null |
dictionnary -> dictionary
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18144/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18144",
"html_url": "https://github.com/huggingface/transformers/pull/18144",
"diff_url": "https://github.com/huggingface/transformers/pull/18144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18144.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18143
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18143/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18143/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18143/events
|
https://github.com/huggingface/transformers/issues/18143
| 1,305,570,917
|
I_kwDOCUB6oc5N0W5l
| 18,143
|
DeBERTa for MaskedLM appears to be producing random results
|
{
"login": "alexdauenhauer",
"id": 11903445,
"node_id": "MDQ6VXNlcjExOTAzNDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/11903445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexdauenhauer",
"html_url": "https://github.com/alexdauenhauer",
"followers_url": "https://api.github.com/users/alexdauenhauer/followers",
"following_url": "https://api.github.com/users/alexdauenhauer/following{/other_user}",
"gists_url": "https://api.github.com/users/alexdauenhauer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexdauenhauer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexdauenhauer/subscriptions",
"organizations_url": "https://api.github.com/users/alexdauenhauer/orgs",
"repos_url": "https://api.github.com/users/alexdauenhauer/repos",
"events_url": "https://api.github.com/users/alexdauenhauer/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexdauenhauer/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @alexdauenhauer ,\r\n\r\nI think there's also at least one issue in the upstream repo:\r\n\r\nhttps://github.com/microsoft/DeBERTa/issues/74\r\n\r\nHowever - this is not a rant - but the DeBERTa guys are not really responsive and e.g. pretraining code of v3 is still not available after months (but that's another story).",
"@stefan-it ah... ok thanks for the info, sounds like I'll just have to use a different model.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
@LysandreJik when I run the sample code in the docs for `DebertaV2ForMaskedLM`, it appears to return random predictions
```python
from transformers import DebertaV2Tokenizer, DebertaV2ForMaskedLM
import torch
tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v3-base")
model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v3-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
```
If I run this code block multiple times in a row, I get a different response every time, and none of them is the correct answer. If I run the same code with `BERT` or `RoBERTa`, I get the correct answer every time. I also tried other DeBERTa models, such as `"microsoft/deberta-v2-xlarge"`, and got the same thing: random responses.
transformers version = 4.15
python version = 3.8.12
os = macOS 12.4
running on cpu
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
Run the code block in the description over and over again; the result will be different every time.
```python
from transformers import DebertaV2Tokenizer, DebertaV2ForMaskedLM
import torch
tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v3-base")
model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v3-base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# retrieve index of [MASK]
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
tokenizer.decode(predicted_token_id)
```
example output
```python
'Independence'
```
### Expected behavior
I expect the result to at least be the same every time, but I also expect it to be correct.
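One hedged diagnostic (not from the original report): if the MLM head weights are absent from the checkpoint, they are re-initialized randomly on every load, which would explain the run-to-run variation. `output_loading_info=True` reveals the missing keys.
```python
# Diagnostic sketch; the missing-head hypothesis is an assumption.
from transformers import DebertaV2ForMaskedLM

model, info = DebertaV2ForMaskedLM.from_pretrained(
    "microsoft/deberta-v3-base", output_loading_info=True
)
print(info["missing_keys"])  # non-empty -> those weights were freshly initialized
```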
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18143/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18142
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18142/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18142/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18142/events
|
https://github.com/huggingface/transformers/issues/18142
| 1,305,543,148
|
I_kwDOCUB6oc5N0QHs
| 18,142
|
Question about the resize implementation in the image-classification examples.
|
{
"login": "DataLama",
"id": 38907104,
"node_id": "MDQ6VXNlcjM4OTA3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/38907104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DataLama",
"html_url": "https://github.com/DataLama",
"followers_url": "https://api.github.com/users/DataLama/followers",
"following_url": "https://api.github.com/users/DataLama/following{/other_user}",
"gists_url": "https://api.github.com/users/DataLama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DataLama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DataLama/subscriptions",
"organizations_url": "https://api.github.com/users/DataLama/orgs",
"repos_url": "https://api.github.com/users/DataLama/repos",
"events_url": "https://api.github.com/users/DataLama/events{/privacy}",
"received_events_url": "https://api.github.com/users/DataLama/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @NielsRogge and @nateraw ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @amyeroberts as well, as you've been working with similar objects lately :)",
"Hi @DataLama, thanks for raising the issue. \r\n\r\nIn this script, the reason for the validation transformations being defined like this and in this order - resize then centre crop - is that we end up with an image of size `(feature_extractor.size, feature_extractor.size)`, but what's shown in the image has the same aspect ratio as the original i.e. the image isn't \"squashed\".\r\n\r\nIn your suggestion:\r\n```\r\n...\r\n_val_transforms = Compose(\r\n [\r\n Resize((feature_extractor.size, feature_extractor.size)),\r\n CenterCrop(feature_extractor.size),\r\n ToTensor(),\r\n normalize,\r\n ]\r\n)\r\n...\r\n```\r\n\r\nthe image would be resized to `(feature_extractor.size, feature_extractor.size)` first, changing the aspect ratio, and `CenterCrop(feature_extractor.size)` would then not have an effect. ",
"Hi @amyeroberts, thanks for explanation. \r\n\r\nNow I understand what you intended.\r\n\r\nI'm closing this issue. the issue has been resolved."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
- `transformers` version: 4.21.0.dev0
- Platform: Linux-4.15.0-175-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Examples:
maintained examples (not research project or legacy): @sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
* minimal code for reproduction.
```python
## partial import from image classification scripts
from typing import Optional
from dataclasses import dataclass, field
from torchvision.transforms import (
CenterCrop,
Compose,
Resize,
)
from transformers import (
MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING,
AutoConfig,
AutoFeatureExtractor,
)
MODEL_CONFIG_CLASSES = list(MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
default="google/vit-base-patch16-224-in21k",
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"},
)
model_type: Optional[str] = field(
default=None,
metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
model_revision: str = field(
default="main",
metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
)
feature_extractor_name: str = field(default=None, metadata={"help": "Name or path of preprocessor config."})
use_auth_token: bool = field(
default=False,
metadata={
"help": (
"Will use the token generated when running `transformers-cli login` (necessary to use this script "
"with private models)."
)
},
)
ignore_mismatched_sizes: bool = field(
default=False,
metadata={"help": "Will enable to load a pretrained model whose head dimensions are different."},
)
# use default model_args
model_args = ModelArguments()
feature_extractor = AutoFeatureExtractor.from_pretrained(
model_args.feature_extractor_name or model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
# comment the ToTensor and normalize to check the PIL image.
_val_transforms = Compose(
[
Resize(feature_extractor.size),
CenterCrop(feature_extractor.size)
# ToTensor(),
# normalize,
]
)
```
* get sample image
```python
from datasets import load_dataset
ds = load_dataset('imagenet-1k',use_auth_token=True, streaming=True)
im = list(ds['train'].take(1))[0]['image']
```
* original transform
```python
original_transform = _val_transforms(im)
original_transform
```
* new transform
```python
_val_transforms_new = Compose(
[
Resize((feature_extractor.size, feature_extractor.size)),
CenterCrop(feature_extractor.size)
# ToTensor(),
# normalize,
]
)
new_transform = _val_transforms_new(im)
new_transform
```
### Expected behavior
I'm careful to say this because I'm a newbie in the field of vision, but the resize transformation in the `_val_transforms` function seems to be wrong in the image-classification example scripts ([here](https://github.com/huggingface/transformers/blob/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L320) and [here](https://github.com/huggingface/transformers/blob/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e/examples/pytorch/image-classification/run_image_classification.py#L301)).
This transform may crop out part of the object in the validation step.
```python
...
_val_transforms = Compose(
[
Resize(feature_extractor.size),
CenterCrop(feature_extractor.size),
ToTensor(),
normalize,
]
)
...
```
In order to keep the whole object visible and only change the size of the image, I think the following code is right for the `_val_transforms` function.
```python
...
_val_transforms = Compose(
[
Resize((feature_extractor.size, feature_extractor.size)),
CenterCrop(feature_extractor.size),
ToTensor(),
normalize,
]
)
...
```
If I've misunderstood, please feel free to tell me about it.
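For reference, a small illustrative check (not from the issue) of the two `Resize` call signatures:
```python
# Resize(int) scales the *shorter* side and keeps the aspect ratio;
# Resize((h, w)) forces the exact output size and may squash the image.
from PIL import Image
from torchvision.transforms import Resize

im = Image.new("RGB", (400, 200))
print(Resize(224)(im).size)         # (448, 224): aspect ratio preserved
print(Resize((224, 224))(im).size)  # (224, 224): aspect ratio changed
```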
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18142/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18141
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18141/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18141/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18141/events
|
https://github.com/huggingface/transformers/pull/18141
| 1,305,413,758
|
PR_kwDOCUB6oc47cFQ8
| 18,141
|
[Bloom] Remove unused position_ids, improve modeling code (lm head, alibi, attention multiplication)
|
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18141). All of your documentation changes will be reflected on that endpoint.",
"First of all, thanks for the quick review!\r\n\r\nConcerning the change in default behaviour, I'd advocate that this fixes a bug where we see poor generation specifically due to this issue. The argument is that at some level 255k+ vocab is just too big compared to the precision and we observe some sort of collapse when the top 2 values (2 being arbitrary) that are distinct in fp32 in fact collapse where you run them in fp16 or bf16, and so argmax takes the first value given that matches the max given the [documentation](https://pytorch.org/docs/stable/generated/torch.argmax.html). This issue with collapse actually breaks greedy generation really fast (we've seen cases where the first token generated don't match, and then it just gets a lot worse).\r\n\r\nAlso maybe we can rename `force_word_embeddings_in_fp32` to `force_lm_head_in_fp32`? The realy reason why I cast the word embeddings in fp32 is because `word_embeddings` and `lm_head` are tied, and I need `lm_head` to be fp32.\n\n**Edit**: Actually after thinking about it, this might be a more generic issue where whenever you have too high a lm_head size you need to upcast before running the final linear layer. Typically models like mt5 might struggle due to this?",
"This PR tries to group way too many things together which is very bad practice as when we realize after merging it that everything is broken, we won't find the cause easily. Please break it down in at lest three parts:\r\n- clean up code without any change\r\n- removing/deprecating position Ids\r\n- the float32 upcasting (which is where all the heated discussion is, so **really** should be its own PR)\r\n\r\nI am veto-ing any merge of everything altogether as we've already had quite enough of \"Oh the last BLOOM PR broke Xxx.\" 🙃 ",
"Here are some comments so far:\r\n\r\n- >Move back to baddbmm instead of bmm. It was unclear why the change was necessary.\r\n - Here: https://github.com/huggingface/transformers/pull/17866#discussion_r921825354\r\n - But if this PR solves the issue for FP16, OK for me to use `baddbmm`.\r\n - (**just wondering what are the difference, and if there is any reason you prefer to use `baddbmm`?**)\r\n- There are 4 `force_lm_head_in_fp32` in the test file. Other than the one in `test_force_lm_head_in_fp32_is_close_to_fp16`, I don't know why we set it to `False`.\r\n - Is it to keep the same behavior as before (the whole model in FP16)?\r\n - But `prepare_config_and_inputs` has default `force_lm_head_in_fp32=True`, so most tests now use `True`. It is a bit confusing to me we keep them `False` in a few places.\r\n- I agree with @sgugger that the default value for `force_lm_head_in_fp32` should be `False`.\r\n - Although `True` here is good for generation, this is kind special (casting parts of model weights to different dtype)\r\n - Also it's good to keep the previous behavior by default -> do not introduce surprise things to users\r\n",
"Also, I really appreciate your finding on \"max-collapse\" (especially being able to demonstrate it!), and glad that it improves the generation here.\r\n\r\nBut I personally would not expect FP16 generations will **always** match FP32 generations (even with `force_lm_head_in_fp32=True`), and we don't need to have tests that compare results across FP16/FP32. (I don't remember if we have a common test doing so though).",
"> Here: https://github.com/huggingface/transformers/pull/17866#discussion_r921825354\r\nBut if this PR solves the issue for FP16, OK for me to use baddbmm.\r\n(just wondering what are the difference, and if there is any reason you prefer to use baddbmm?)\r\n\r\nI think @younesbelkada and @NouamaneTazi changed the original behaviour, it was unclear what it actually fixed. The reason why I want to use `baddbmm` is because the training codebase used `baddbmm` and so there's no reason to use `bmm`.\r\n\r\n> There are 4 force_lm_head_in_fp32 in the test file. Other than the one in test_force_lm_head_in_fp32_is_close_to_fp16, I don't know why we set it to False.\r\nIs it to keep the same behavior as before (the whole model in FP16)?\r\nBut prepare_config_and_inputs has default force_lm_head_in_fp32=True, so most tests now use True. It is a bit confusing to me we keep them False in a few places.\r\n\r\nYeah so I initially thought that upcasting would have much better inference (at least in greedy style). turns out that's not true at least for 176b (it was true on the small models in test), so as @sgugger and @patrickvonplaten I'll try to figure out more if that feature is actually necessary at all.",
"Woops forgot to answer some question:\r\nI agree that default should be `False` now :D\r\n\r\n> But I personally would not expect FP16 generations will always match FP32 generations (even with force_lm_head_in_fp32=True), and we don't need to have tests that compare results across FP16/FP32. (I don't remember if we have a common test doing so though).\r\n\r\nWell technically given checkpoints are in float16 of bfloat16, there should be little reason that generation don't match. I mean it's the promise of pretraining on those half precision: \"use twice less compute/time to get more or less the same model\". I would not be surprised that it doesn't match perfectly, but at the same time, now that they do, it's a great signal that the model is robust to numerical inacurracies. Consequently, I think the test matching fp16 (with fp32 lm_head) output with full fp32 output makes sense.",
"As https://github.com/huggingface/transformers/pull/18344#event-7125942979 has been merged, can you merge main into this branch?",
"Actually going to close this PR, any reason why you want this branch to still be alive? What should be missing if the `fp32` upcasting that I've done in another branch.",
"Good to be closed for me"
] | 1,657
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
Notable changes:
- Remove the `attention_mask` sum trick and use `torch.masked_fill` instead
- Simplify the causal attention mask creation
- Move back to `baddbmm` instead of `bmm`. It was unclear why the change was necessary.
- ~Remove~ Deprecate `position_ids` as they don't make sense in BLOOM.
- Introduce an fp32 cast for the lm_head (and consequently the word embeddings, in order to respect the weight tying). The intuition is as follows (a tiny numerical sketch follows at the end of this list):
> One of the things we're wondering about is something we'd like to call "max-collapse". Since 16-bit floats can represent at most 65536 distinct values, a vocabulary of 255k+ logits is bound to collapse, i.e. multiple values become equal. If that happens to the max value, greedy decoding can differ between fp32 and fp16/bf16.
- Move the test back to testing generation in 16-bit precision
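A tiny numerical sketch of the max-collapse effect (illustrative values, not from the PR): two logits that are distinct in fp32 become equal after casting to fp16, so argmax can pick a different token.
```python
import torch

logits = torch.tensor([10.0001, 10.0002])
print(logits.argmax())         # tensor(1) in fp32
print(logits.half().argmax())  # tensor(0): both values round to 10.0 in fp16,
                               # and argmax ties break toward the first index
```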
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18141/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18141",
"html_url": "https://github.com/huggingface/transformers/pull/18141",
"diff_url": "https://github.com/huggingface/transformers/pull/18141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18141.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18140
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18140/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18140/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18140/events
|
https://github.com/huggingface/transformers/issues/18140
| 1,305,397,725
|
I_kwDOCUB6oc5Nzsnd
| 18,140
|
[TRACKER] Add alibi tests on BLOOM
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,657
| 1,657
| null |
CONTRIBUTOR
| null |
### Feature request
Add ALiBi tests for BLOOM!
We should add several tests to check that the alibi tensor is created correctly:
- test padding (a possible sketch follows below)
- test expected output
@Narsil
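A possible shape for the padding test (the helper name and signature below are assumptions, not confirmed against the current modeling code):
```python
# Hypothetical test sketch; build_alibi_tensor's exact signature is assumed.
import torch
from transformers.models.bloom.modeling_bloom import build_alibi_tensor

def test_alibi_left_padding():
    num_heads, dtype = 4, torch.float32
    unpadded = build_alibi_tensor(torch.ones(1, 6, dtype=torch.long), num_heads, dtype)
    padded = build_alibi_tensor(
        torch.tensor([[0, 0, 1, 1, 1, 1, 1, 1]]), num_heads, dtype
    )
    # non-padding positions should carry the same bias as an unpadded run
    assert torch.allclose(unpadded, padded[..., -6:])
```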
### Motivation
Build stronger CI tests
### Your contribution
Design and build the tests mentioned above
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18140/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18139
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18139/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18139/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18139/events
|
https://github.com/huggingface/transformers/pull/18139
| 1,305,370,150
|
PR_kwDOCUB6oc47b-q4
| 18,139
|
Fix BLOOM DeepSpeed inference issue
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@RezaYazdaniAminabadi did you tried to infer by removing the elementwise multiplication after the softmax as proposed in the PR?\r\nWhen trying to infer on 8xA100 80GB I did obtained the same generations using the old code vs the new one with batch_size=1",
"> @RezaYazdaniAminabadi did you tried to infer by removing the elementwise multiplication after the softmax as proposed in the PR? When trying to infer on 8xA100 80GB I did obtained the same generations using the old code vs the new one with batch_size=1\r\nHi @younesbelkada,\r\n\r\nI did try this on 16 A100-40GB previously and it was not giving similar results. I will try with this one and let you know. Anyhow, I think that multiply is not needed since the scores are already masked.\r\nThanks",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18139). All of your documentation changes will be reflected on that endpoint.",
"Thank you very much @RezaYazdaniAminabadi !!",
"Finally after doing some tests it appears that we need the multiplication with the attention mask because of the following:\r\nin some cases we have an attention mask like the one below\r\n```\r\n0 0 0 0 0 \r\n0 1 0 0 0\r\n0 1 1 0 0 \r\n0 1 1 1 0 \r\n0 1 1 1 1\r\n```\r\nAfter replacing all zeros by `torch.finfo(dtype.min)` , the softmax will return the following on the first row:\r\n`0.2 0.2 0.2 0.2 0.2` because we have the same values on the first row. To avoid using these wrong values on the calculation later I had to multiply the attention scores by the original mask.\r\n\r\ncc @NouamaneTazi ",
"@younesbelkada, ok, so we have the first row of `0.2 0.2 0.2 0.2 0.2` let's follow through to the end - where does that manifest an issue?\r\n\r\nLet's perhaps use a small concrete example and use it to document why things are done the way they are - otherwise everybody will keep on questioning why this is done this way. ",
"Is this because of padding, we should not care about the padding row, ie when the padding is the query. The wrong values don't matter when they are in the padding no?",
"My guess was this will impact the computation of the `context_layer` tensor [here](https://github.com/younesbelkada/transformers/blob/9ef1a4a52020854e02eea104a5bb8553f3de83e8/src/transformers/models/bloom/modeling_bloom.py#L319) in the case we have padded inputs as mentioned by @thomasw21 \r\nSo at the end you are right ! Indeed it impacts the computation of this tensor but I think that it does not matter at all. At the end we get a token-to-token correspondance for the computed hidden states - ie the context layer will have a shape `batch_size x seq_len x hidden_dim` and the hidden states corresponding to the padding tokens will not impact anyway the prediction of the next token. Do you think that this explanation makes sense? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR tries to address a strange behaviour observed when running inference with the BLOOM-176B model under DeepSpeed!
My intuitions are:
- In the previous code we used `-10000` as the attention-mask fill value, whereas we should use `fp32.min`, as written in the original CUDA kernel of [`FusedScaledSoftmax`](https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/7b5f175b73a12d602cdadd3ee205ed666f6d4234/megatron/fused_kernels/scaled_masked_softmax.h#L288). This may lead to inconsistent results between the old and the new version, but the new version should be considered the correct one (a minimal masking sketch follows below).
- @RezaYazdaniAminabadi discovered that the attention scores should not be multiplied by the attention mask after the softmax, which makes sense and could fix the issue.
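A minimal masking sketch (shapes assumed, not taken from the diff):
```python
# Fill masked positions with the dtype minimum before the softmax instead of
# adding -10000 to them.
import torch

scores = torch.randn(1, 4, 4)                        # [batch, q_len, k_len]
mask = torch.tensor([[[True, True, False, False]]])  # False = padding
scores = scores.masked_fill(~mask, torch.finfo(scores.dtype).min)
probs = torch.softmax(scores, dim=-1)
```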
cc @RezaYazdaniAminabadi @stas00 @thomasw21
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18139/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18139/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18139",
"html_url": "https://github.com/huggingface/transformers/pull/18139",
"diff_url": "https://github.com/huggingface/transformers/pull/18139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18139.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18138
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18138/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18138/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18138/events
|
https://github.com/huggingface/transformers/issues/18138
| 1,305,098,387
|
I_kwDOCUB6oc5NyjiT
| 18,138
|
Please add a FlaubertTokenizerFast class as well, to leverage fast-tokenizer methods
|
{
"login": "datatales-with-pankaj",
"id": 78647606,
"node_id": "MDQ6VXNlcjc4NjQ3NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/78647606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datatales-with-pankaj",
"html_url": "https://github.com/datatales-with-pankaj",
"followers_url": "https://api.github.com/users/datatales-with-pankaj/followers",
"following_url": "https://api.github.com/users/datatales-with-pankaj/following{/other_user}",
"gists_url": "https://api.github.com/users/datatales-with-pankaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datatales-with-pankaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datatales-with-pankaj/subscriptions",
"organizations_url": "https://api.github.com/users/datatales-with-pankaj/orgs",
"repos_url": "https://api.github.com/users/datatales-with-pankaj/repos",
"events_url": "https://api.github.com/users/datatales-with-pankaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/datatales-with-pankaj/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
transformers version: 4.20.1
GPU: Nvidia Titan T4
### Who can help?
@LysandreJik I feel you can help best, since the FlaubertTokenizer class inherits directly from XLM's tokenizer, which does not have a fast tokenizer class either!
### Information
- [x] My own modified scripts
- [ ] The official example scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is not working: AutoTokenizer falls back to the default (slow) FlaubertTokenizer class, and since there is no FlaubertTokenizerFast class we cannot leverage the fast-tokenizer functionality!
<img width="1245" alt="Screenshot 2022-07-14 at 4 39 38 PM" src="https://user-images.githubusercontent.com/78647606/179044514-e6081be4-591a-48bb-bf10-56ab90628ab0.png">
### Expected behavior
If FlaubertTokenizer had a fast version, like BertTokenizerFast, we would be able to use methods such as word_ids and word_to_tokens, specifically for the token-classification task.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18138/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18137
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18137/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18137/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18137/events
|
https://github.com/huggingface/transformers/issues/18137
| 1,305,031,159
|
I_kwDOCUB6oc5NyTH3
| 18,137
|
ViT modeling file is missing drop path present in PyTorch image models
|
{
"login": "muqeeth",
"id": 25932561,
"node_id": "MDQ6VXNlcjI1OTMyNTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/25932561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muqeeth",
"html_url": "https://github.com/muqeeth",
"followers_url": "https://api.github.com/users/muqeeth/followers",
"following_url": "https://api.github.com/users/muqeeth/following{/other_user}",
"gists_url": "https://api.github.com/users/muqeeth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muqeeth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muqeeth/subscriptions",
"organizations_url": "https://api.github.com/users/muqeeth/orgs",
"repos_url": "https://api.github.com/users/muqeeth/repos",
"events_url": "https://api.github.com/users/muqeeth/events{/privacy}",
"received_events_url": "https://api.github.com/users/muqeeth/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,663
| 1,663
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@rwightman @NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. The Vision Transformer modeling file in PyTorch Image Models uses [drop path](https://github.com/rwightman/pytorch-image-models/blob/324a4e58b6b0365d16dcf8f93739be8f74cd7d37/timm/models/vision_transformer.py#L234) after the attention and feed-forward layers. It is missing from the current Hugging Face modeling_vit.py file. We found that on the VTAB benchmark, having drop path significantly boosts the performance of the ViT-B model.
2. The layer-norm epsilon used in the pretrained ViTConfig is 1e-12, whereas vision_transformer.py in pytorch-image-models uses 1e-6. Because of this, the google/vit-base-patch16-224 checkpoint behaves slightly differently at test time under Hugging Face's modeling_vit.py versus pytorch-image-models' vision_transformer.py.
### Expected behavior
1. Given that drop path is shown to perform well on the VTAB benchmark, it could be added to the current Hugging Face modeling file.
2. The layer-norm epsilon value could be made consistent with pytorch-image-models.
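For reference, a minimal drop-path (stochastic depth) sketch following the timm implementation linked above; the drop probability is an assumed hyperparameter:
```python
import torch
from torch import nn

class DropPath(nn.Module):
    """Randomly drops the whole residual branch for a fraction of samples."""

    def __init__(self, drop_prob: float = 0.1):  # 0.1 is an assumed default
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        if self.drop_prob == 0.0 or not self.training:
            return x
        keep_prob = 1.0 - self.drop_prob
        # one Bernoulli draw per sample, broadcast over the remaining dims
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)
        mask = x.new_empty(shape).bernoulli_(keep_prob)
        return x * mask / keep_prob
```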
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18137/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18136
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18136/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18136/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18136/events
|
https://github.com/huggingface/transformers/issues/18136
| 1,304,763,955
|
I_kwDOCUB6oc5NxR4z
| 18,136
|
How to place XLNet as an embedding layer in front of other models
|
{
"login": "yangbin-Neil",
"id": 75511989,
"node_id": "MDQ6VXNlcjc1NTExOTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/75511989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangbin-Neil",
"html_url": "https://github.com/yangbin-Neil",
"followers_url": "https://api.github.com/users/yangbin-Neil/followers",
"following_url": "https://api.github.com/users/yangbin-Neil/following{/other_user}",
"gists_url": "https://api.github.com/users/yangbin-Neil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangbin-Neil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangbin-Neil/subscriptions",
"organizations_url": "https://api.github.com/users/yangbin-Neil/orgs",
"repos_url": "https://api.github.com/users/yangbin-Neil/repos",
"events_url": "https://api.github.com/users/yangbin-Neil/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangbin-Neil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### Feature request
I want to add the pretrained XLNet model as an embedding layer in front of a BiLSTM-CRF. How can it be plugged into the model construction as an embedding layer? For example, I have already built a BERT-BiLSTM-CRF model with the bert4keras framework; I would like to replace BERT with XLNet and use XLNet as the model's embedding layer. Is that possible?
```python
def bert_bilstm_crf(config_path, checkpoint_path, lstm_units, drop_rate, learning_rate):  # build the model structure
    # ----- is the BERT-related part below replaceable by XLNet? -----
    bert = build_transformer_model(  # load the BERT weights
        config_path=config_path,
        checkpoint_path=checkpoint_path,
        model='bert',
        return_keras_model=False,
    )
    x = bert.model.output  # [batch_size, seq_len, 768]
    x = keras.layers.Bidirectional(
        keras.layers.LSTM(  # add the LSTM layer
            lstm_units,
            kernel_initializer='he_normal',  # initialization scheme
            return_sequences=True,
        )
    )(x)  # [batch_size, seq_len, lstm_units*2]
```
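One hedged sketch of the idea using Hugging Face `transformers` instead of bert4keras (the checkpoint name and LSTM size are assumptions):
```python
import tensorflow as tf
from transformers import TFXLNetModel

# XLNet as the embedding layer in front of a BiLSTM
xlnet = TFXLNetModel.from_pretrained("xlnet-base-cased")
input_ids = tf.keras.Input(shape=(None,), dtype=tf.int32)
hidden = xlnet(input_ids).last_hidden_state           # [batch, seq_len, 768]
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(128, return_sequences=True)  # assumed lstm_units=128
)(hidden)
# ... add a CRF layer on top of `x` as in the original BiLSTM-CRF setup
```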
### Motivation
Fix a problem related to NER (named entity recognition).
### Your contribution
Solve the model-construction problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18136/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18135
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18135/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18135/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18135/events
|
https://github.com/huggingface/transformers/pull/18135
| 1,304,746,790
|
PR_kwDOCUB6oc47Z4op
| 18,135
|
[bugfix] PerceiverIO `PerceiverBasicDecoder` error when appending preprocessed inputs to decoder queries
|
{
"login": "orgoro",
"id": 20637412,
"node_id": "MDQ6VXNlcjIwNjM3NDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/20637412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orgoro",
"html_url": "https://github.com/orgoro",
"followers_url": "https://api.github.com/users/orgoro/followers",
"following_url": "https://api.github.com/users/orgoro/following{/other_user}",
"gists_url": "https://api.github.com/users/orgoro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orgoro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orgoro/subscriptions",
"organizations_url": "https://api.github.com/users/orgoro/orgs",
"repos_url": "https://api.github.com/users/orgoro/repos",
"events_url": "https://api.github.com/users/orgoro/events{/privacy}",
"received_events_url": "https://api.github.com/users/orgoro/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@julien-c @sgugger Let me know if I'm missing something 🙏 "
] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18135/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18135/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18135",
"html_url": "https://github.com/huggingface/transformers/pull/18135",
"diff_url": "https://github.com/huggingface/transformers/pull/18135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18135.patch",
"merged_at": 1658219096000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18134
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18134/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18134/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18134/events
|
https://github.com/huggingface/transformers/pull/18134
| 1,304,714,763
|
PR_kwDOCUB6oc47ZxtG
| 18,134
|
FSDP integration enhancements and fixes
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
1. Fixes #17681 and https://github.com/pytorch/pytorch/issues/79605
2. Integrates new FSDP features: auto-wrapping of transformer blocks and support for mixed precision
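For illustration, a hedged sketch of how these options surface through `TrainingArguments` (the argument names reflect this PR's additions as I understand them; treat them as assumptions and check the docs):

```python
from transformers import TrainingArguments

# Sketch only: "full_shard auto_wrap" enables FSDP with transformer auto-wrapping;
# the layer class name is model-specific (assumption: BERT-style model here).
args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard auto_wrap",
    fsdp_transformer_layer_cls_to_wrap="BertLayer",
    bf16=True,  # mixed precision under FSDP
)
```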
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18134/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18134",
"html_url": "https://github.com/huggingface/transformers/pull/18134",
"diff_url": "https://github.com/huggingface/transformers/pull/18134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18134.patch",
"merged_at": 1658169130000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18133
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18133/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18133/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18133/events
|
https://github.com/huggingface/transformers/issues/18133
| 1,304,687,656
|
I_kwDOCUB6oc5Nw_Qo
| 18,133
|
Expected behaviour for MBartTokenizer as target tokenizer
|
{
"login": "FelipeAlb94",
"id": 48952831,
"node_id": "MDQ6VXNlcjQ4OTUyODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/48952831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FelipeAlb94",
"html_url": "https://github.com/FelipeAlb94",
"followers_url": "https://api.github.com/users/FelipeAlb94/followers",
"following_url": "https://api.github.com/users/FelipeAlb94/following{/other_user}",
"gists_url": "https://api.github.com/users/FelipeAlb94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FelipeAlb94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FelipeAlb94/subscriptions",
"organizations_url": "https://api.github.com/users/FelipeAlb94/orgs",
"repos_url": "https://api.github.com/users/FelipeAlb94/repos",
"events_url": "https://api.github.com/users/FelipeAlb94/events{/privacy}",
"received_events_url": "https://api.github.com/users/FelipeAlb94/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I won't have time to look into this anytime soon. @ArthurZucker could you take a look here? ",
"Hey! Really sorry for the long delay! 🤗 \r\nFrom what I understand based on the tests, this behaviour is actually intended : once the `labels` are passed to the `MBartForConditionalGeneration`, they are shifted using `shift_tokens_right(labels)` (see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mbart/modeling_mbart.py/#L1346-L1351). This means that the model actually receives the expected input 😄 \r\n\r\nNow about the languages that are generated, there are a lot of possibilities : \r\n- Since the original model is trained on a lot of language, it can belong to the dataset used to train that model. \r\n- You can remove the prediction of a list of languages by using `suppress_tokens` argument of the `generate` function (if you are using it). \r\n\r\nOtherwise, I can't really help about the actual finetuning! \r\nTell me if that makes sens",
"PS : \r\nyou can try the following : \r\n```python \r\nfrom transformers import MBartForConditionalGeneration, MBartTokenizer\r\nfrom transformers.models.mbart.modeling_mbart import shift_tokens_right\r\ntokenizer = MBartTokenizer.from_pretrained(\"facebook/mbart-large-en-ro\", src_lang=\"en_XX\", tgt_lang=\"ro_RO\")\r\nexample_english_phrase = \"UN Chief Says There Is No Military Solution in Syria\"\r\nexpected_translation_romanian = \"Şeful ONU declară că nu există o soluţie militară în Siria\"\r\ntokens = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors=\"pt\")\r\n\r\nshift_tokens_right(tokens[\"labels\"], tokenizer.pad_token_id)\r\n```\r\nShould output \r\n```python \r\ntensor([[250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315,\r\n 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2]])\r\n```"
] | 1,657
| 1,671
| 1,671
|
NONE
| null |
Hello,
I'm trying to fine-tune an MBart model on a multilingual dataset, but I'm facing some issues. The texts generated during training are really strange (mainly other languages not present in the dataset). I then noticed that the input_ids of the target text do not follow the format [tgt_lang_code] [text tokens] [eos].
### **System info**
transformers==4.20.1
### **Who can help?**
@patrickvonplaten
### **Information**
- [x] The official example scripts
- [ ] My own modified scripts
### **Reproduction**
Running the script below, I get both sequences of tokens in the format X [eos, src_lang_code].
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, return_tensors="pt")
with tokenizer.as_target_tokenizer():
    labels = tokenizer(expected_translation_romanian, return_tensors="pt")
print(inputs['input_ids'])
#tensor([[ 8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53,
# 187895, 23, 51712, 2, 250004]])
print(labels['input_ids'])
#tensor([[ 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071,
# 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])
```
### **Expected Behaviour**
```python
print(labels['input_ids'])
#tensor([[ 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071,
# 36, 31563, 8454, 33796, 451, 346, 125577, 2]])
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18133/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18132
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18132/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18132/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18132/events
|
https://github.com/huggingface/transformers/issues/18132
| 1,304,404,442
|
I_kwDOCUB6oc5Nv6Ha
| 18,132
|
model does not work after loss change
|
{
"login": "NaamaBerman",
"id": 46111254,
"node_id": "MDQ6VXNlcjQ2MTExMjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/46111254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NaamaBerman",
"html_url": "https://github.com/NaamaBerman",
"followers_url": "https://api.github.com/users/NaamaBerman/followers",
"following_url": "https://api.github.com/users/NaamaBerman/following{/other_user}",
"gists_url": "https://api.github.com/users/NaamaBerman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NaamaBerman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NaamaBerman/subscriptions",
"organizations_url": "https://api.github.com/users/NaamaBerman/orgs",
"repos_url": "https://api.github.com/users/NaamaBerman/repos",
"events_url": "https://api.github.com/users/NaamaBerman/events{/privacy}",
"received_events_url": "https://api.github.com/users/NaamaBerman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
Hello,

My model fine-tunes BERT (specifically RoBERTa) with a last fully connected layer for a binary text classification task. I was using cross-entropy loss and the code worked well. However, when I changed the loss, the model stopped learning and predicted 0 for all examples. For other classification tasks the loss works fine. The loss decreases, but the accuracy stays the same and the prediction is always 0. I have tried different learning-rate values, batch sizes, and many other things, but so far nothing has worked. It happens both when fine-tuning and when the BERT model is frozen.

The loss function is RCE:

```python
import torch
import torch.nn.functional as F


class ReverseCrossEntropy(torch.nn.Module):
    def __init__(self, num_classes, scale=1.0):
        super(ReverseCrossEntropy, self).__init__()
        self.device = device  # `device` is assumed to be defined globally
        self.num_classes = num_classes
        self.scale = scale

    def forward(self, pred, labels):
        pred = F.softmax(pred, dim=1)
        pred = torch.clamp(pred, min=1e-7, max=1.0)
        label_one_hot = torch.nn.functional.one_hot(labels, self.num_classes).float().to(self.device)
        label_one_hot = torch.clamp(label_one_hot, min=1e-4, max=1.0)
        rce = (-1 * torch.sum(pred * torch.log(label_one_hot), dim=1))
        return self.scale * rce.mean()
```

and I also tried the normalized variant (NRCE):

```python
class NormalizedReverseCrossEntropy(torch.nn.Module):
    def __init__(self, num_classes, scale=1.0):
        super(NormalizedReverseCrossEntropy, self).__init__()
        self.device = device  # `device` is assumed to be defined globally
        self.num_classes = num_classes
        self.scale = scale

    def forward(self, pred, labels):
        pred = F.softmax(pred, dim=1)
        pred = torch.clamp(pred, min=1e-7, max=1.0)
        label_one_hot = torch.nn.functional.one_hot(labels, self.num_classes).float().to(self.device)
        label_one_hot = torch.clamp(label_one_hot, min=1e-4, max=1.0)
        normalizor = 1 / 4 * (self.num_classes - 1)
        rce = (-1 * torch.sum(pred * torch.log(label_one_hot), dim=1))
        return self.scale * normalizor * rce.mean()
```

They are taken from the article https://arxiv.org/abs/2006.13554
git: https://github.com/HanxunH/Active-Passive-Losses/blob/master/loss.py

Any help will be much appreciated.
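For what it's worth, the loss can be exercised in isolation like this (the `device` value and the toy logits are my own additions for illustration):

```python
import torch

device = "cpu"  # the loss classes above read this global
criterion = ReverseCrossEntropy(num_classes=2)
logits = torch.tensor([[2.0, -1.0], [0.1, 0.3]], requires_grad=True)
labels = torch.tensor([0, 1])
loss = criterion(logits, labels)
loss.backward()
print(loss.item(), logits.grad)  # loss is finite and gradients flow
```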
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
import torch.nn as nn
from transformers import AutoModel


class Model(nn.Module):
    def __init__(self, device='cuda', lm='roberta', alpha_aug=0.8):
        super().__init__()
        if lm in lm_mp:  # `lm_mp` maps short names to checkpoint names (defined elsewhere)
            self.bert = AutoModel.from_pretrained(lm_mp[lm])
        else:
            self.bert = AutoModel.from_pretrained(lm)
        self.device = device
        # linear layer
        hidden_size = self.bert.config.hidden_size
        self.fc = torch.nn.Linear(hidden_size, 2)

    def forward(self, x1, x2=None):
        """Encode the left, right, and the concatenation of left+right.

        Args:
            x1 (LongTensor): a batch of IDs

        Returns:
            Tensor: binary prediction
        """
        x1 = x1.to(self.device)  # (batch_size, seq_len)
        enc = self.bert(x1)[0][:, 0, :]  # [CLS] embedding
        return self.fc(enc)
```

Creating the model:

```python
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = Model(device=device, lm=hp.lm, alpha_aug=hp.alpha_aug)
model = model.cuda()
optimizer = AdamW(model.parameters(), lr=hp.lr)  # AdamW from transformers
```

The training step is:

```python
# deciding the loss
criterion = nn.CrossEntropyLoss()
for i, batch in enumerate(train_iter):
    optimizer.zero_grad()
    if len(batch) == 2:
        x, y = batch
        prediction = model(x)
    else:
        x1, x2, y = batch
        prediction = model(x1, x2)

    loss = criterion(prediction, y.to(model.device))
    if hp.fp16:
        with amp.scale_loss(loss, optimizer) as scaled_loss:  # apex amp
            scaled_loss.backward()
    else:
        loss.backward()
    optimizer.step()
    scheduler.step()
    if i % 10 == 0:  # monitoring
        print(f"step: {i}, loss: {loss.item()}")
    del loss
```

This works well; the only change I made was the loss:

```python
criterion = ReverseCrossEntropy(2)
```

instead of cross-entropy, and with that change the model does not learn.
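For reference, in the linked paper RCE is the *passive* half of an active/passive pair and is normally combined with an active loss such as normalized cross-entropy rather than used alone. A minimal sketch of such a combination (the `alpha`/`beta` weights and the exact NCE form here are my assumptions; see the linked repo for the reference implementation):

```python
import torch
import torch.nn.functional as F


class NCEandRCE(torch.nn.Module):
    # Sketch: pairs an active loss (normalized cross-entropy) with the
    # passive ReverseCrossEntropy defined above, as in the APL paper.
    def __init__(self, num_classes, alpha=1.0, beta=1.0):
        super().__init__()
        self.alpha = alpha
        self.beta = beta
        self.rce = ReverseCrossEntropy(num_classes)

    def forward(self, pred, labels):
        log_probs = F.log_softmax(pred, dim=1)
        ce = -log_probs[torch.arange(pred.size(0)), labels]
        nce = ce / (-log_probs.sum(dim=1))  # normalize by the total CE mass
        return self.alpha * nce.mean() + self.beta * self.rce(pred, labels)
```

Used the same way as above, e.g. `criterion = NCEandRCE(2)`.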
### Expected behavior
```shell
The result for training with cross entropy is:
step: 0, loss: 0.5812623500823975
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 1: dev_f1=0.2772277227722772, f1=0.2745098039215686, best_f1=0.2745098039215686
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.3767085075378418
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 2: dev_f1=0.36363636363636365, f1=0.35294117647058826, best_f1=0.35294117647058826
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.43073320388793945
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 3: dev_f1=0.2978723404255319, f1=0.2978723404255319, best_f1=0.35294117647058826
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.6784828305244446
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 4: dev_f1=0.5365853658536585, f1=0.43999999999999995, best_f1=0.43999999999999995
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.25015905499458313
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 5: dev_f1=0.43076923076923085, f1=0.4745762711864407, best_f1=0.43999999999999995
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.329183429479599
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 6: dev_f1=0.8148148148148148, f1=0.7647058823529412, best_f1=0.7647058823529412
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.08995085209608078
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 7: dev_f1=0.88, f1=0.8333333333333333, best_f1=0.8333333333333333
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.18586984276771545
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 8: dev_f1=0.9032258064516129, f1=0.8750000000000001, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.007164476439356804
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 9: dev_f1=0.888888888888889, f1=0.8275862068965518, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.005751035641878843
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 10: dev_f1=0.9032258064516129, f1=0.8484848484848484, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.14081726968288422
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 11: dev_f1=0.8571428571428571, f1=0.9032258064516129, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0045958105474710464
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 12: dev_f1=0.896551724137931, f1=0.9032258064516129, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0023396878968924284
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 13: dev_f1=0.8333333333333333, f1=0.888888888888889, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0017288422677665949
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 14: dev_f1=0.8750000000000001, f1=0.8750000000000001, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0025747090112417936
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 15: dev_f1=0.896551724137931, f1=0.896551724137931, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0030487636104226112
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 16: dev_f1=0.88, f1=0.888888888888889, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0015720207011327147
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 17: dev_f1=0.896551724137931, f1=0.896551724137931, best_f1=0.8750000000000001
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.001150735653936863
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 18: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0009454995160922408
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 19: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0007868938846513629
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 20: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0006980099133215845
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 21: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0006197747425176203
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 22: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0006151695270091295
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 23: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0004854918224737048
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 24: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.000492772669531405
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 25: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0004389513051137328
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 26: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0003859938296955079
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 27: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0004301978333387524
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 28: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0004772722895722836
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 29: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0003848907945211977
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 30: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0003429920761846006
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 31: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0004783756739925593
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 32: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.00039960749563761055
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 33: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.00043797597754746675
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 34: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.00025380056467838585
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 35: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0003628128906711936
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 36: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.00036079881829209626
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 37: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.00036769770667888224
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 38: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0003665930707938969
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 39: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.0002882482949644327
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 40: dev_f1=0.9333333333333333, f1=0.896551724137931, best_f1=0.896551724137931
The expectation was that the results would be similar, but when changed to reverse cross-entropy the results are:
step: 0, loss: 3.970363140106201
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2048.0
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 1: dev_f1=0.28571428571428575, f1=0.30000000000000004, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.027850866317749
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 2: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.72965407371521
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 3: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.015202522277832
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 4: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.5761911273002625
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 5: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.439455270767212
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 6: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.7271339893341064
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 7: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.8637082576751709
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 8: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.1514854431152344
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 9: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.863682746887207
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 10: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.7270889282226562
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 11: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.863652765750885
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 12: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.1514408588409424
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 13: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.15143883228302
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 14: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.0148658752441406
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 15: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.5904781818389893
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 16: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.0148520469665527
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 17: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.01485013961792
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 18: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.4391952753067017
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 19: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.7270371913909912
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 20: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.4392175674438477
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 21: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.4392108917236328
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 22: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.0148367881774902
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 23: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.302647113800049
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 24: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.8635783195495605
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 25: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.5757505297660828
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 26: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.5757474303245544
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 27: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.4391957521438599
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 28: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.0148279666900635
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 29: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.0148282051086426
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 30: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.4392008781433105
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 31: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.8635559678077698
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 32: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.8635714054107666
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 33: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 2.0148158073425293
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 34: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.8635637760162354
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 35: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.5757399201393127
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 36: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.8635669946670532
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 37: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.1513622999191284
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 38: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 0.8635590076446533
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 39: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
step: 0, loss: 1.7269994020462036
/usr/local/lib/python3.7/dist-packages/apex/amp/_initialize.py:25: UserWarning: An input tensor was not cuda.
warnings.warn("An input tensor was not cuda.")
epoch 40: dev_f1=0.2666666666666667, f1=0.2666666666666667, best_f1=0.30000000000000004
Thank you for the help.
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18132/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18131
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18131/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18131/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18131/events
|
https://github.com/huggingface/transformers/pull/18131
| 1,304,384,471
|
PR_kwDOCUB6oc47YrXJ
| 18,131
|
Fixing a hard-to-trigger bug in the `text-generation` pipeline.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR re-adds sending the `attention_mask` to the `generate` function.
In order to trigger the bug, one would need:
- To use the pipeline in `batch_size>1` mode.
- To use a model that did not configure `pad_token_id` (if it is configured, then `generate` just recovers the attention mask gracefully).
In that case, the `generate` function cannot recover the attention mask; it warns
about this but still generates something (most likely incorrect).
Since the pipeline has most likely already computed the `attention_mask`, we might as well
send it along to `generate`.
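A minimal sketch of the failure mode (a hypothetical repro; the model name and batching settings are illustrative assumptions, not taken from this PR):
```py
from transformers import pipeline

# "gpt2" is just an example of a model whose config does not set `pad_token_id`.
generator = pipeline("text-generation", model="gpt2", batch_size=2)
generator.tokenizer.pad_token_id = generator.tokenizer.eos_token_id  # needed for padding

# Batched inputs of different lengths get padded, so `generate` needs the
# attention mask the tokenizer produced; without it, the padded positions
# can corrupt the generated text.
outputs = generator(["short prompt", "a noticeably longer prompt"])
```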
@sgugger
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18131/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18131",
"html_url": "https://github.com/huggingface/transformers/pull/18131",
"diff_url": "https://github.com/huggingface/transformers/pull/18131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18131.patch",
"merged_at": 1657893248000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18130
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18130/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18130/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18130/events
|
https://github.com/huggingface/transformers/issues/18130
| 1,304,233,696
|
I_kwDOCUB6oc5NvQbg
| 18,130
|
model.generate doesn't validate kwargs
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @gante as well",
"Hi @stas00 👋 -- we do have a plan for it. The rough sketch is [here](https://github.com/huggingface/transformers/pull/17196#issuecomment-1155002093), and I will pick it up after the last wrinkles related to TF generate have been ironed out (which should be very soon!)",
"excellent. Thank you, @gante!\r\n\r\nI guess let's keep this Issue open for tracking unless there is another one already? ",
"Yeah, let's keep this one open!"
] | 1,657
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
### Feature request
I made a mistake in a script:
```
model.generate(**tokens, in_length=num_tokens)
```
The `m` is missing from `min_length`, and I was puzzling over why I was getting unexpected results (it was falling back to the default value, which was quite different from mine).
Would it be possible to have `generate` validate its input and assert on unexpected args?
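A rough sketch of the kind of validation being requested (the helper below and the way valid names are collected are assumptions, not the library's actual implementation; a real check would also need to account for kwargs that `generate` forwards to the model's forward pass):
```py
import inspect

def validate_generate_kwargs(generate_fn, kwargs):
    # Keyword arguments that `generate` explicitly declares in its signature.
    valid = set(inspect.signature(generate_fn).parameters)
    unexpected = set(kwargs) - valid
    if unexpected:
        raise TypeError(f"generate() got unexpected keyword arguments: {sorted(unexpected)}")

# validate_generate_kwargs(model.generate, {"in_length": 128})  # would raise on the typo
```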
Thank you!
@patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18130/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18129
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18129/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18129/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18129/events
|
https://github.com/huggingface/transformers/issues/18129
| 1,304,180,721
|
I_kwDOCUB6oc5NvDfx
| 18,129
|
DeltaLM
|
{
"login": "jcmc00",
"id": 35983171,
"node_id": "MDQ6VXNlcjM1OTgzMTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/35983171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcmc00",
"html_url": "https://github.com/jcmc00",
"followers_url": "https://api.github.com/users/jcmc00/followers",
"following_url": "https://api.github.com/users/jcmc00/following{/other_user}",
"gists_url": "https://api.github.com/users/jcmc00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcmc00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcmc00/subscriptions",
"organizations_url": "https://api.github.com/users/jcmc00/orgs",
"repos_url": "https://api.github.com/users/jcmc00/repos",
"events_url": "https://api.github.com/users/jcmc00/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcmc00/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Any progress on this?",
"Hi,\r\n\r\nI've noticed there are some DeltaLM available in the Hub:\r\n- https://huggingface.co/IDEA-CCNL/Randeng-Deltalm-362M-En-Zh\r\n- https://huggingface.co/IDEA-CCNL/Randeng-Deltalm-362M-Zh-En\r\n\r\nThe code they use seems to be here: https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen/models/deltalm\r\n\r\nHowever, I'd be interested in using the original checkpoints available in the official repository: https://github.com/microsoft/unilm/tree/master/deltalm\r\n\r\nDo you have any idea on how to do it?\r\n\r\nThanks!",
"It seems like DeltaLM is still the best multilingual NMT.\r\nHow can we progress?",
"The modeling files have already been added for this model on the hub repo e.g. [here](https://huggingface.co/IDEA-CCNL/Randeng-Deltalm-362M-En-Zh/blob/main/modeling_deltalm.py). \r\n\r\nAt the moment, the [model config](https://huggingface.co/IDEA-CCNL/Randeng-Deltalm-362M-En-Zh/blob/main/config.json) is missing the auto_map parameter e.g. [like this one](https://huggingface.co/01-ai/Yi-34B/blob/58cb0a1e67e71efee59c3b9956723f87e97fe7b7/config.json#L5).\r\n\r\nOnce added, you should be able to use the models directly using: \r\n\r\n```py\r\nfrom transformers import AutoMode\r\n\r\nmodel = AutoModel.from_pretrained(\"IDEA-CCNL/Randeng-Deltalm-362M-En-Zh\", trust_remote_code=True)\r\n```\r\n\r\nIf you want to use the already implemented architecture with other checkpoints, you can upload these to the hub and point to that code in the model config using the `_name_or_path` parameter. \r\n\r\n"
] | 1,657
| 1,700
| null |
CONTRIBUTOR
| null |
### Model description
DeltaLM is a multilingual encoder-decoder architecture that regards the decoder as the task layer of off-the-shelf pre-trained encoders. The architecture introduces an interleaved decoder, whose structure is more consistent with the encoder's. Weights from pre-trained multilingual encoders are used to initialise both the encoder and the decoder before training on monolingual and bilingual data.
As of September 2021, DeltaLM ranks first on the [WMT21 multilingual translation task](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html).
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The model implementation is available at:
https://github.com/microsoft/unilm/tree/master/deltalm
Model weights are available:
[DeltaLM-base](https://deltalm.blob.core.windows.net/deltalm/deltalm-base.pt)
[DeltaLM-large](https://deltalm.blob.core.windows.net/deltalm/deltalm-large.pt)
Who are the authors:
@shumingma @gitnlp
I'd be happy to try work on contributing the model.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18129/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18129/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18128
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18128/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18128/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18128/events
|
https://github.com/huggingface/transformers/pull/18128
| 1,303,805,446
|
PR_kwDOCUB6oc47WxsA
| 18,128
|
Gradual types
|
{
"login": "migeed-z",
"id": 10407760,
"node_id": "MDQ6VXNlcjEwNDA3NzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/10407760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/migeed-z",
"html_url": "https://github.com/migeed-z",
"followers_url": "https://api.github.com/users/migeed-z/followers",
"following_url": "https://api.github.com/users/migeed-z/following{/other_user}",
"gists_url": "https://api.github.com/users/migeed-z/gists{/gist_id}",
"starred_url": "https://api.github.com/users/migeed-z/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/migeed-z/subscriptions",
"organizations_url": "https://api.github.com/users/migeed-z/orgs",
"repos_url": "https://api.github.com/users/migeed-z/repos",
"events_url": "https://api.github.com/users/migeed-z/events{/privacy}",
"received_events_url": "https://api.github.com/users/migeed-z/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
NONE
| null |
Test the XGLM model with gradual types. We annotate the second dimension of the model, trace it using constraints, and also generate constraints to migrate the model's first annotation after tracing is complete.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18128/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18128",
"html_url": "https://github.com/huggingface/transformers/pull/18128",
"diff_url": "https://github.com/huggingface/transformers/pull/18128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18128.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18127
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18127/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18127/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18127/events
|
https://github.com/huggingface/transformers/issues/18127
| 1,303,675,328
|
I_kwDOCUB6oc5NtIHA
| 18,127
|
todo: enable CI to run torchdynamo/tensorrt tests
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi, @stas00 \r\n\r\nSure! It sounds that you have already some tests written, and we just want to run them with CI, right? Let me know how you would like to proceed (discussion or PR review etc.).",
"Hi @stas00 Let me know if we should proceed :-)",
"Yes, please. Please let me know how I can help.\r\n\r\nThank you, @ydshieh!",
"Hi! Basically I don't know what the task is. I didn't go through #17765 in detail, and only knows that PR merged.\r\n\r\nIs the (new) goal to install some libraries in docker files, write some specific test methods, or something else.\r\n\r\nYou mentioned `I installed the environment locally, the tests work.`. It would be nice if you can let me know which test you mean here, and if you want me to `installed the environment` for the scheduled CI(s) to run those tests 🙏 \r\n\r\nThank you, @stas00 !",
"1. The instructions of what needs to be installed are verbatim in the OP of https://github.com/huggingface/transformers/pull/17765\r\n\r\n2. To test:\r\n\r\n```\r\npytest tests/trainer/test_trainer.py -k torchdynamo\r\n```",
"OK, I get it better now. So basically I need to\r\n- install what mentioned in `To reproduce and set up the environment` section in #17765 inside some docker files\r\n- have a job to run `torchdynamo` tests\r\n\r\nDo you think it's better to have a new docker image for this (just like `deepspeed`), or we can just put it in the `transformers-all-latest-gpu` (the one used for mode/tokenizer/generation tests)? I can try the later first in any case.",
"I'd say one docker image for all the extensions.",
"At some point it will become stable and will work with just the official released version.",
"@stas00 I have tried to build the image and run the tests.\r\n\r\nHere is the [run page](https://github.com/huggingface/transformers/runs/7878615152?check_suite_focus=true)\r\n\r\nIt failed with\r\n\r\n```bash\r\n> from torchdynamo.optimizations.training import aot_autograd_speedup_strategy\r\nE ImportError: cannot import name 'aot_autograd_speedup_strategy' from 'torchdynamo.optimizations.training'\r\n```\r\n\r\nI can't find `aot_autograd_speedup_strategy` in the latest [torchdynamo repo](https://github.com/pytorch/torchdynamo).\r\n\r\ncc @frank-wei ",
"Thank you for trying, @ydshieh. It's too bad that the API appears to be unstable :(\r\n\r\nLet's wait for @frank-wei to reply",
"> torchdynamo repo\r\n\r\nThanks for setting up the testing @stas00 and @ydshieh \r\nLooks like @Chillee or @anijain2305 had made some updates there for AOTAutoGrad. Could you elaborate here? ",
"We have cleaned up some Aot Autograd related things in TorchDynamo repo.\r\n\r\nhttps://github.com/huggingface/transformers/blob/4eed2beca0fd8058a1c51684f68599522adf20c9/src/transformers/trainer.py#L652\r\n\r\nThe above line could be replaced with \r\n\r\n`return torchdynamo.optimize(\"aot_nvfuser\")`\r\n\r\nTherefore, we do not need the import anymore on line 645\r\n",
"> We have cleaned up some Aot Autograd related things in TorchDynamo repo.\r\n> \r\n> https://github.com/huggingface/transformers/blob/4eed2beca0fd8058a1c51684f68599522adf20c9/src/transformers/trainer.py#L652\r\n> \r\n> The above line could be replaced with\r\n> \r\n> `return torchdynamo.optimize(\"aot_nvfuser\")`\r\n\r\n1. could you please make a PR that fixes things.\r\n\r\n2. could you please include the relevant transformers tests in your CI, so that if you break things in the future you'd instantly know and then update the transformers side? Thank you.\r\n\r\ne.g. you can see how Deepspeed runs transformers/deepspeed integration tests on their CI https://github.com/microsoft/DeepSpeed/blob/master/.github/workflows/nv-transformers-v100.yml\r\n\r\nIn your case it'd be cloning the latest `transformers` repo and running:\r\n\r\n```\r\npytest tests/trainer/test_trainer.py -k torchdynamo\r\n```\r\n",
"I can change to `return torchdynamo.optimize(\"aot_nvfuser\")` in my PR directly (to enable CI testing).",
"`import` issue fixed. But get `ResetRequired` from `../torchdynamo/torchdynamo/eval_frame.py:101: in __enter__\r\n self.on_enter()`. See the full error below.\r\n\r\nMaybe I could just `torchdynamo.reset()` somewhere below ` # 2. TorchDynamo nvfuser`??\r\n\r\n```bash\r\n________________ TrainerIntegrationTest.test_torchdynamo_memory ________________\r\n\r\nself = <tests.trainer.test_trainer.TrainerIntegrationTest testMethod=test_torchdynamo_memory>\r\n\r\n @require_torch_non_multi_gpu\r\n @require_torchdynamo\r\n def test_torchdynamo_memory(self):\r\n # torchdynamo at the moment doesn't support DP/DDP, therefore require a single gpu\r\n class CustomTrainer(Trainer):\r\n def compute_loss(self, model, inputs, return_outputs=False):\r\n x = inputs[\"x\"]\r\n output = model(x)\r\n if self.args.n_gpu == 1:\r\n return output.mean()\r\n return output\r\n \r\n class MyModule(torch.nn.Module):\r\n \"\"\"Simple module that does aggressive fusion\"\"\"\r\n \r\n def __init__(self):\r\n super().__init__()\r\n \r\n def forward(self, x):\r\n for _ in range(20):\r\n x = torch.nn.functional.relu(x)\r\n return x\r\n \r\n mod = MyModule()\r\n \r\n # 1. without TorchDynamo (eager baseline)\r\n a = torch.ones(1024, 1024, device=\"cuda\", requires_grad=True)\r\n a.grad = None\r\n trainer = CustomTrainer(model=mod)\r\n # warmup\r\n for _ in range(10):\r\n orig_loss = trainer.training_step(mod, {\"x\": a})\r\n \r\n # resets\r\n gc.collect()\r\n torch.cuda.empty_cache()\r\n torch.cuda.reset_peak_memory_stats()\r\n \r\n orig_loss = trainer.training_step(mod, {\"x\": a})\r\n orig_peak_mem = torch.cuda.max_memory_allocated()\r\n del trainer\r\n \r\n # 2. TorchDynamo nvfuser\r\n a = torch.ones(1024, 1024, device=\"cuda\", requires_grad=True)\r\n a.grad = None\r\n args = TrainingArguments(output_dir=\"None\", torchdynamo=\"nvfuser\")\r\n trainer = CustomTrainer(model=mod, args=args)\r\n # warmup\r\n for _ in range(10):\r\n> loss = trainer.training_step(mod, {\"x\": a})\r\n\r\ntests/trainer/test_trainer.py:1893: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nsrc/transformers/trainer.py:2479: in training_step\r\n with self.compute_loss_context_manager():\r\nsrc/transformers/utils/generic.py:291: in __enter__\r\n self.stack.enter_context(context_manager)\r\n/opt/conda/lib/python3.8/contextlib.py:425: in enter_context\r\n result = _cm_type.__enter__(cm)\r\n../torchdynamo/torchdynamo/eval_frame.py:101: in __enter__\r\n self.on_enter()\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n def on_enter():\r\n global most_recent_backend\r\n if (\r\n most_recent_backend is not None\r\n and most_recent_backend is not compiler_fn\r\n ):\r\n> raise ResetRequired()\r\nE torchdynamo.exc.ResetRequired: \r\nE Must call `torchdynamo.reset()` before changing backends. Detected two calls to\r\nE `torchdynamo.optimize(...)` with a different backend compiler arguments.\r\n\r\n../torchdynamo/torchdynamo/eval_frame.py:[183](https://github.com/huggingface/transformers/runs/7894266049?check_suite_focus=true#step:7:184): ResetRequired\r\n```",
"That's why I asked of @anijain2305 to fix it and make a new PR that actually fixes the tests.\r\n\r\nIt's not productive to keep going back and forth when we don't know what other things have changed.\r\n",
"Yes, @stas00 I am gonna send a PR to transformers to make the appropriate changes. Apologize for the failures.\r\n\r\nRegarding adding transformers in the CI, thats a very good idea. Let me see how much extra time it adds on TorchDynamo side.",
"Thank you, @anijain2305!\r\n\r\nYou can add it as a separate job, so it'd run in parallel with your other jobs and thus not add to the total CI runtime. It should be real fast to finish at least with barely a few basic tests we have right now for torchdynamo. or even tacking it to the existing job - the main overhead will be cloning `transformers` and installing its prerequisites. ",
"Fixed by #19056"
] | 1,657
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
@ydshieh, let's talk about instrumenting one of the jobs to run tests for torchdynamo/tensorrt
It's quite a handful of things to build; the instructions are in the OP:
https://github.com/huggingface/transformers/pull/17765
I installed the environment locally, and the tests work. I didn't want to get in the way of the PR, so I'm doing this in a separate task.
thank you.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18127/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18126
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18126/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18126/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18126/events
|
https://github.com/huggingface/transformers/pull/18126
| 1,303,636,321
|
PR_kwDOCUB6oc47WNKf
| 18,126
|
NLLB tokenizer
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"All models are now public, feel free to try it out @stefan-it. The generation seems good, have not tried fine-tuning yet.",
"_The documentation is not available anymore as the PR was closed or merged._",
"I don't know of a better place to post (issue?), so I'll do it here :)\r\n\r\n @LysandreJik Thank you so much for adding support for the NLLB dense models! I pulled out this branch and tried all of them and they work awesome!\r\n\r\nThere is the following place in the readme\r\n\"This implementation contains dense models available in release. Let us know via GitHub if you want to see MoE models as well.\"\r\n\r\nSo it would be really great if you could add MoE models! I tried to figure out the original repo, but it turned out to be unexpectedly difficult. I couldn't get MoE to run. So if you add MoE models, I'm sure it will make a lot of people happier, at least me :)\r\n",
"@LysandreJik Thanks a lot for your promt work! I tried using NLLB model from HuggingFace and noticed one problem:\r\n\r\nmax_length does not set in config.json for any of the NLLB models, so it uses default value of max_length (20).\r\nhttps://github.com/huggingface/transformers/blob/33028f4c795e76f9e97226fc591bc7d0b8c7d815/src/transformers/configuration_utils.py#L125\r\nAs the result, your example code cannot generate more than 20 tokens. it is possible to set max_length higher when calling translation method, but it will be great to have meaningful default as well.\r\n\r\nFor comparison, both for M2M and MBart50 models max_length set in config.json file to 200.",
"> @LysandreJik Thanks a lot for your promt work! I tried using NLLB model from HuggingFace and noticed one problem:\r\n> \r\n> max_length does not set in config.json for any of the NLLB models, so it uses default value of max_length (20).\r\n> \r\n> https://github.com/huggingface/transformers/blob/33028f4c795e76f9e97226fc591bc7d0b8c7d815/src/transformers/configuration_utils.py#L125\r\n> \r\n> \r\n> As the result, your example code cannot generate more than 20 tokens. it is possible to set max_length higher when calling translation method, but it will be great to have meaningful default as well.\r\n> For comparison, both for M2M and MBart50 models max_length set in config.json file to 200.\r\n\r\nHow is the default max_length determined per model? Or is it documented in their white papers? With this PR, I have started evaluating the extremely large model (facebook/nllb-200-3.3B) against GCP translation and so far it is doing really well despite the length of text I give it but I want to give it the best chance to perform so knowing the ideal max_length would help.",
"> > @LysandreJik Thanks a lot for your promt work! I tried using NLLB model from HuggingFace and noticed one problem:\r\n> > max_length does not set in config.json for any of the NLLB models, so it uses default value of max_length (20).\r\n> > https://github.com/huggingface/transformers/blob/33028f4c795e76f9e97226fc591bc7d0b8c7d815/src/transformers/configuration_utils.py#L125\r\n> > \r\n> > As the result, your example code cannot generate more than 20 tokens. it is possible to set max_length higher when calling translation method, but it will be great to have meaningful default as well.\r\n> > For comparison, both for M2M and MBart50 models max_length set in config.json file to 200.\r\n> \r\n> How is the default max_length determined per model? Or is it documented in their white papers? With this PR, I have started evaluating the extremely large model (facebook/nllb-200-3.3B) against GCP translation and so far it is doing really well despite the length of text I give it but I want to give it the best chance to perform so knowing the ideal max_length would help.\r\n\r\nI think usual default for max_length is to be equal to max input length. Translation pipeline in transformers are checking that max_length at higher than 90% of input length.\r\nhttps://github.com/huggingface/transformers/blob/33028f4c795e76f9e97226fc591bc7d0b8c7d815/src/transformers/pipelines/text2text_generation.py#L272-L278",
"as mentionned here #19943\r\nwhere did you guys see that the \"</s> Langtoken\" is added AFTER the tokens ?\r\nIn the NLLB paper, it says only the \"Langtoken\" is placed BEFORE the tokens. (mBart does the opposite)",
"I've just seen this example - where the lang-token is prepended:\r\n\r\nhttps://github.com/facebookresearch/fairseq/blob/nllb/fairseq/data/multilingual/multilingual_data_manager.py#L78-L101\r\n\r\nfrom original code base :thinking: ",
"right. Also I am wondering why they use \"</s>\" which is \"eos\" as the start token of the source sequence. (in fact same for the target sequence). I would have expected:\r\nSRC = LangTok + tokens\r\nTGT = BOS + LangTok, tokens + EOS\r\n\r\nIt seems they use EOS instead of BOS and that they put a EOS as the SRC start.\r\n"
] | 1,657
| 1,679
| 1,658
|
MEMBER
| null |
Adds the NLLB tokenizer. In order to run:
```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang="eng_Latn", tgt_lang='ron_Latn')
>>> translator("UN Chief says there is no military solution in Syria")
[{'translation_text': 'Şeful ONU spune că nu există o soluţie militară în Siria'}]
```
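For reference, the pipeline call above roughly corresponds to the following lower-level usage, continuing from the snippet (a sketch; the `src_lang` attribute and `lang_code_to_id` lookup are assumed here from similar multilingual tokenizers such as M2M100's):
```py
>>> tokenizer.src_lang = "eng_Latn"
>>> inputs = tokenizer("UN Chief says there is no military solution in Syria", return_tensors="pt")
>>> generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ron_Latn"])
>>> print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```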
Closes https://github.com/huggingface/transformers/issues/18043
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18126/reactions",
"total_count": 13,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 10,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18126/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18126",
"html_url": "https://github.com/huggingface/transformers/pull/18126",
"diff_url": "https://github.com/huggingface/transformers/pull/18126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18126.patch",
"merged_at": 1658146355000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18125
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18125/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18125/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18125/events
|
https://github.com/huggingface/transformers/pull/18125
| 1,303,562,126
|
PR_kwDOCUB6oc47V9Fx
| 18,125
|
Make sharded checkpoints work in offline mode
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
COLLABORATOR
| null |
# What does this PR do?
This PR makes sharded checkpoints work in offline mode and adds more information to an error we return.
The crux of the issue is that the `from_pretrained` method of the various models will catch `EntryNotFoundError` on the regular model weights file, but we return a `FileNotFoundError` in offline mode. I changed the error type at the root to avoid making three modifications in the PyTorch/TF/Flax model classes, but can change this if you don't find it suitable.
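A self-contained sketch of the fallback logic in question (all names here are illustrative stand-ins, not the actual `transformers` internals):
```py
class EntryNotFoundError(Exception):
    """Stand-in for the Hub error raised when a file is absent from a repo."""

WEIGHTS_NAME = "pytorch_model.bin"
WEIGHTS_INDEX_NAME = "pytorch_model.bin.index.json"

def fetch_file(repo_id, filename):
    """Placeholder for the real download/cache-lookup helper."""
    ...

def resolve_checkpoint(repo_id):
    try:
        return fetch_file(repo_id, WEIGHTS_NAME)
    except EntryNotFoundError:
        # Fall back to the sharded checkpoint index. Before this fix,
        # offline mode raised FileNotFoundError above instead, so this
        # branch was never reached.
        return fetch_file(repo_id, WEIGHTS_INDEX_NAME)
```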
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18125/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18125/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18125",
"html_url": "https://github.com/huggingface/transformers/pull/18125",
"diff_url": "https://github.com/huggingface/transformers/pull/18125.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18125.patch",
"merged_at": 1657730588000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18124
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18124/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18124/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18124/events
|
https://github.com/huggingface/transformers/issues/18124
| 1,303,402,230
|
I_kwDOCUB6oc5NsFb2
| 18,124
|
Bloom-6b3 not utilizing much from GPU
|
{
"login": "farzanehnakhaee70",
"id": 30573681,
"node_id": "MDQ6VXNlcjMwNTczNjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/30573681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farzanehnakhaee70",
"html_url": "https://github.com/farzanehnakhaee70",
"followers_url": "https://api.github.com/users/farzanehnakhaee70/followers",
"following_url": "https://api.github.com/users/farzanehnakhaee70/following{/other_user}",
"gists_url": "https://api.github.com/users/farzanehnakhaee70/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farzanehnakhaee70/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farzanehnakhaee70/subscriptions",
"organizations_url": "https://api.github.com/users/farzanehnakhaee70/orgs",
"repos_url": "https://api.github.com/users/farzanehnakhaee70/repos",
"events_url": "https://api.github.com/users/farzanehnakhaee70/events{/privacy}",
"received_events_url": "https://api.github.com/users/farzanehnakhaee70/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @farzanehnakhaee70 !\r\nThanks a lot for your message!\r\nCould you give us the output of `nvidia-smi` when running your script? Also could you share with us the version of `accelerate` you are using?",
"Hi @younesbelkada \r\nThanks for your support.\r\nnvidia-smi:\r\n```\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 495.29.05 Driver Version: 495.29.05 CUDA Version: 11.5 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 Tesla V100-SXM2... Off | 00000000:3E:00.0 Off | 0 |\r\n| N/A 38C P0 54W / 300W | 10358MiB / 32510MiB | 0% Default |\r\n| | | N/A |\r\n+-------------------------------+----------------------+----------------------+\r\n```\r\nAlso accelerate version is `0.10.0`.\r\nIt should also be mentioned that the same behavior existed if I use deep-speed or even if I didn't use any of accelerate and deep-speed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
GPU: Nvidia V100
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I run inference with the following code, GPU utilization stays below 10%.
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("models/bloom_1b3")
model = AutoModelForCausalLM.from_pretrained(
    "models/bloom_1b3",
    device_map="auto",
    torch_dtype=torch.float16,
)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, device=torch.device(0))
```
### Expected behavior
GPU utilization should be larger than 50%.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18124/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18123
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18123/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18123/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18123/events
|
https://github.com/huggingface/transformers/pull/18123
| 1,303,324,523
|
PR_kwDOCUB6oc47VJeh
| 18,123
|
Adding OPTForSeqClassification class
|
{
"login": "oneraghavan",
"id": 3041890,
"node_id": "MDQ6VXNlcjMwNDE4OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3041890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oneraghavan",
"html_url": "https://github.com/oneraghavan",
"followers_url": "https://api.github.com/users/oneraghavan/followers",
"following_url": "https://api.github.com/users/oneraghavan/following{/other_user}",
"gists_url": "https://api.github.com/users/oneraghavan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oneraghavan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oneraghavan/subscriptions",
"organizations_url": "https://api.github.com/users/oneraghavan/orgs",
"repos_url": "https://api.github.com/users/oneraghavan/repos",
"events_url": "https://api.github.com/users/oneraghavan/events{/privacy}",
"received_events_url": "https://api.github.com/users/oneraghavan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Not sure how to make these failing test pass. Need help",
"The following three test are failing :\r\n\r\nFAILED tests/models/opt/test_modeling_opt.py::OPTModelTest::test_load_with_mismatched_shapes - AssertionError: RuntimeError not raised\r\nFAILED tests/models/opt/test_modeling_opt.py::OPTModelTest::test_model_common_attributes - NotImplementedError\r\nFAILED tests/models/opt/test_modeling_opt.py::OPTModelTest::test_resize_tokens_embeddings - NotImplementedError\r\n\r\nNeed help on how to fix them .",
"@NielsRogge Can you help me on how to fix the tests?\r\n",
"@ArthurZucker Please point on how to fix these errors",
"Hey, thanks a lot for this PR ! sorry for the delay I was OOO. 2 of the failed tests are quite simple to solve, it seems that the class is missing a function `get_input_embeddings`. \r\nOne test is probably unrelated to your PR : `PegasusStandaloneDecoderModelTest` for that I would just recommend merging the latest updates from the main branch. \r\nI would have to dig a little bit deeper for the last test `OPTModelTest.test_load_with_mismatched_shapes` but it might be related to a missing `set_input_embedding` function, will check that ",
"@younesbelkada Some tests are failing due to same model(OPTForSeqClassification) available in torch not available for tensorflow. Should I add it part of this PR ? \r\n\r\nCan we close this PR and I will raise another PR with TFOPTForSeqClassification ? ",
"Hey @oneraghavan !\r\nThanks for your comment, I am not sure if not having `TFOPTForSeqClassification` explains why those tests are failing since the test should not return nothing [here](https://github.com/oneraghavan/transformers/blob/d672d9c54a5a329220a8dabb6b6e3f961fbdca5b/tests/test_modeling_common.py#L1767). \r\nThe error says `AttributeError: decoder.embed_tokens.weight not found in TF 2.0 model` so it might be possible that the modifications you made on `OPTModel` and `OPTPretrainedModel` classes broke those tests. \r\nAlso the git history seems to be broken, could you please rebase to main or merge with force-push to clean the git histoiry (aka the number of modified files has increased) 💪 \r\nThanks again for your help here! And let us know if anything is unclear or if you need any help",
"Hey, I think the history is a bit messed up but it is alright (issues with merging I guess). Also, we should not have to change the `OPTPretrainedModel` base prefix. This can be a backward compatibility issue and should definitely be avoided. If you could just revert on that change it would be great. I think that it will solve the failing test and we will be able to merge 👍🏻 ",
"> Hey @oneraghavan ! Thanks for your comment, I am not sure if not having `TFOPTForSeqClassification` explains why those tests are failing since the test should not return nothing [here](https://github.com/oneraghavan/transformers/blob/d672d9c54a5a329220a8dabb6b6e3f961fbdca5b/tests/test_modeling_common.py#L1767). The error says `AttributeError: decoder.embed_tokens.weight not found in TF 2.0 model` so it might be possible that the modifications you made on `OPTModel` and `OPTPretrainedModel` classes broke those tests. Also the git history seems to be broken, could you please rebase to main or merge with force-push to clean the git histoiry (aka the number of modified files has increased) 💪 Thanks again for your help here! And let us know if anything is unclear or if you need any help\r\n\r\nI Tried to do pull rebase from my main to huggingface repo. How do you want me to leave this PR? Just leave my commits on top ? ",
"@ArthurZucker All tests fixed, we can close this PR.",
"> Looks good! Only left a comment regarding one addition.\r\n\r\nDone",
"Hey @oneraghavan do you have any checkpoints we could use for the documentation? It seems that you added some expected loss value and expected outputs, wanted to know if we can maybe use the model checkpoints for documentation (otherwise it is totally ok! I will use a dummy model 😄 )",
"@ArthurZucker I do not have any checkpoints, I used a dummy model."
] | 1,657
| 1,660
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
It adds the `OPTForSequenceClassification` class, based on the OPT model.
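A minimal usage sketch for the new class (the checkpoint and label count are placeholders, not from this PR; the classification head is freshly initialized here, so the scores are meaningless before fine-tuning):
```py
from transformers import AutoTokenizer, OPTForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = OPTForSequenceClassification.from_pretrained("facebook/opt-125m", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
logits = model(**inputs).logits
predicted_class_id = int(logits.argmax(-1))
```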
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17525
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18123/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18123",
"html_url": "https://github.com/huggingface/transformers/pull/18123",
"diff_url": "https://github.com/huggingface/transformers/pull/18123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18123.patch",
"merged_at": 1658304862000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18122
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18122/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18122/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18122/events
|
https://github.com/huggingface/transformers/issues/18122
| 1,303,145,422
|
I_kwDOCUB6oc5NrGvO
| 18,122
|
TypeError: TextInputSequence must be str
|
{
"login": "choshiho",
"id": 17508435,
"node_id": "MDQ6VXNlcjE3NTA4NDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/17508435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/choshiho",
"html_url": "https://github.com/choshiho",
"followers_url": "https://api.github.com/users/choshiho/followers",
"following_url": "https://api.github.com/users/choshiho/following{/other_user}",
"gists_url": "https://api.github.com/users/choshiho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/choshiho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/choshiho/subscriptions",
"organizations_url": "https://api.github.com/users/choshiho/orgs",
"repos_url": "https://api.github.com/users/choshiho/repos",
"events_url": "https://api.github.com/users/choshiho/events{/privacy}",
"received_events_url": "https://api.github.com/users/choshiho/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"What is contained in your CSV files? Would you have a reproducible code example we can run in colab?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-3.10.0-1160.6.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.7
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik @SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
step1.
I downloaded bert-base-cased from https://huggingface.co/models, then I placed all files (config.json, pytorch_model.bin, tokenizer_config.json, tokenizer.json, vocab.txt) in the directory /transformers/examples/pytorch/text-classification/bert-base-cased
step2.
from https://github.com/nyu-mll/GLUE-baselines/download_glue_data.py, I got train.tsv and dev.tsv, then converted them to train.csv and validation.csv (both have three columns: label, sentence1, sentence2). I placed these two files in the directory /transformers/examples/pytorch/text-classification/
step3.
python run_glue.py --model_name_or_path bert-base-cased --train_file train.csv --validation_file validation.csv --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 1 --output_dir ./output/
Then I got this error, as shown below:
```
Running tokenizer on dataset: 0%| | 0/4 [00:00<?, ?ba/s]
Traceback (most recent call last):
File "/root/zhaozhifeng/transformers/examples/pytorch/text-classification/run_glue.py", line 613, in <module>
main()
File "/root/zhaozhifeng/transformers/examples/pytorch/text-classification/run_glue.py", line 442, in main
raw_datasets = raw_datasets.map(
File "/root/anaconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map
{
File "/root/anaconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp>
k: dataset.map(
File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2376, in map
return self._map_single(
File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 551, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/root/anaconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single
batch = apply_function_on_filtered_inputs(
File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2644, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/root/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2336, in decorated
result = f(decorated_item, *args, **kwargs)
File "/root/zhaozhifeng/transformers/examples/pytorch/text-classification/run_glue.py", line 434, in preprocess_function
result = tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True)
File "/root/anaconda3/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2495, in __call__
return self.batch_encode_plus(
File "/root/anaconda3/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2686, in batch_encode_plus
return self._batch_encode_plus(
File "/root/anaconda3/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 426, in _batch_encode_plus
encodings = self._tokenizer.encode_batch(
TypeError: TextInputSequence must be str
```
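One common cause of this error (an assumption here, since the CSV contents are not shown) is empty cells that load as `None`, which the fast tokenizer rejects. A quick check is to filter such rows out before tokenization:
```py
raw_datasets = raw_datasets.filter(
    lambda example: example["sentence1"] is not None and example["sentence2"] is not None
)
```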
### Expected behavior
run run_glue.py and fine-tune on the pre-trained model successfully.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18122/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18121
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18121/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18121/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18121/events
|
https://github.com/huggingface/transformers/issues/18121
| 1,303,039,519
|
I_kwDOCUB6oc5Nqs4f
| 18,121
|
how to frozen TFGPT2LMHeadModel Embedding matrix?
|
{
"login": "Orient12",
"id": 39329359,
"node_id": "MDQ6VXNlcjM5MzI5MzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/39329359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Orient12",
"html_url": "https://github.com/Orient12",
"followers_url": "https://api.github.com/users/Orient12/followers",
"following_url": "https://api.github.com/users/Orient12/following{/other_user}",
"gists_url": "https://api.github.com/users/Orient12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Orient12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Orient12/subscriptions",
"organizations_url": "https://api.github.com/users/Orient12/orgs",
"repos_url": "https://api.github.com/users/Orient12/repos",
"events_url": "https://api.github.com/users/Orient12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Orient12/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @Orient12 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗\r\n\r\n(You can set the `trainable` attribute of a layer to `False`, see [this guide](https://keras.io/guides/transfer_learning/#freezing-layers-understanding-the-trainable-attribute))",
"> \r\nActually, class TFGPT2LMHeadModel model can not split by layer,it just have one layer, so i can't set embedding matrix trainable=False\r\n",
"@Orient12 \r\nSure you can, you just need to know the right attribute names (for which you might need to dig through the code). The following snippet freezes the embedding layer.\r\n\r\n```python\r\nfrom transformers import TFGPT2LMHeadModel\r\n\r\nmodel = TFGPT2LMHeadModel.from_pretrained(\"distilgpt2\")\r\nprint(model.transformer.wte)\r\nprint(model.transformer.wte.trainable)\r\n\r\n# setting embeddings to not trainable\r\nmodel.transformer.wte.trainable = False\r\nprint(model.transformer.wte.trainable)\r\n```\r\n\r\nOur models do not use the Sequential nor the Functional API -- they use the [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) method.",
"Thanks for your help!I have known how to freeze it!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
When I use the TFGPT2LMHeadModel structure for training, I want to freeze the embedding matrix. How can I do it? @patil-suraj
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoConfig, TFGPT2LMHeadModel
import numpy as np

config = AutoConfig.from_pretrained(pretrained_model_path, vocab_size=VOCAB_SIZE, n_positions=MAX_SEQ_LEN, n_ctx=MAX_SEQ_LEN, n_layer=1, n_embd=384, initializer_range=0.002)
model = TFGPT2LMHeadModel(config)  # instantiate from the config directly; the concrete class has no from_config helper
embedding = np.load(embed_data_path)
model.set_input_embeddings(embedding)
```
### Expected behavior
Help me freeze the embedding layer.
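For anyone landing on this issue, the fix from the comments above boils down to flipping the layer's `trainable` flag. A quick sanity check that the freeze took effect (a sketch using distilgpt2; the count drop assumes the embedding layer holds a single weight tensor):
```python
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
print(len(model.trainable_weights))  # includes the wte embedding weight

model.transformer.wte.trainable = False
print(len(model.trainable_weights))  # the embedding weight has dropped out of the list
```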
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18121/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18120
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18120/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18120/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18120/events
|
https://github.com/huggingface/transformers/issues/18120
| 1,302,809,944
|
I_kwDOCUB6oc5Np01Y
| 18,120
|
Pipelines returns inconsistent results when using non-default model
|
{
"login": "sjgiorgi",
"id": 4297839,
"node_id": "MDQ6VXNlcjQyOTc4Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4297839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjgiorgi",
"html_url": "https://github.com/sjgiorgi",
"followers_url": "https://api.github.com/users/sjgiorgi/followers",
"following_url": "https://api.github.com/users/sjgiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/sjgiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sjgiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sjgiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/sjgiorgi/orgs",
"repos_url": "https://api.github.com/users/sjgiorgi/repos",
"events_url": "https://api.github.com/users/sjgiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sjgiorgi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @sjgiorgi ,\r\n\r\nDid you disable the logs somehow ?\r\nWhen running your code you can see:\r\n\r\n```\r\nSome weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight']\r\n- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nwhich *does* warn you about uninitialized weights and potential issues.\r\nMaybe you deactivated the warnings ?",
"Ah, yes, I see those warnings now (I had changed the logging) and apologize. Thank you.\r\n\r\nMy concerns are:\r\n\r\n- While the warning message says \"You should probably TRAIN...\" it does not _explicitly_ say \"We don't recommend using this model for classification\". \r\n- Results are still being returned, albeit with different labels. It's not clear what these numbers mean or how something like `bert-base-uncased` is being used in a classification setting. Is there documentation on what is happening?\r\n\r\nSo it's not clear how I should be catching model mismatch given that pipeline is returning something that looks reasonably formatted. Also, blindly adding pipeline to some code could produce some weird results and new users might not fully understand the current warning's implications.\r\n\r\nI've handled this on my side, where I check the returned pipeline dictionary results against known keys which _should_ be in the results (e.g., `bert-base-uncased` in the `sentiment-analysis` pipeline returns a label of `LABEL_0` as opposed to `POSITIVE` or `NEGATIVE`):\r\n\r\n```\r\nPIPELINE_RESULTS_BY_TASK = {\r\n \"text-classification\": [\"POSITIVE\", \"NEGATIVE\"], \r\n \"sentiment-analysis\": [\"POSITIVE\", \"NEGATIVE\"], \r\n \"question-answering\": [\"answer\"], \r\n \"translation\": [\"translation_text\"], \r\n \"summarization\": [\"summary_text\"], \r\n \"token-classification\": [\"entity\"], \r\n \"ner\": [\"entity\"], \r\n \"text-generation\": [\"generated_text\"], \r\n}\r\n```\r\n\r\nbut I'm not sure if this will catch everything. ",
"@sjgiorgi \r\n\r\nI do agree that it's easy to miss warnings, especially when running setups automatically and serving them for instance, those warnings might not be readily visible to you.\r\n\r\nThe real culprit here, is that the model architecture you are trying to load is actually very capable of running the pipeline.\r\nBut the model weights themselves are missing the layers the architecture is looking for (here it doesn't have the classification head).\r\n\r\nCatching the warning would be the best way to be 100% sure it works that way.\r\n\r\nPinging a core maintainer to see if we have other solutions. My personal idea would be to enable a flag to raise a hard error on mismatched weights instead of a warning, and using that flag in pipelines because we really don't want to load from pretrained an incomplete model.\r\nIt's a different story in Model.from_pretrained where it's actually a desired feature if you intend to finetune,\r\n\r\n@sgugger maybe ?",
"100% agree on the culprit. People will think \"i know bert does classification\" and then blindly use this model. Which is what we were doing :) \r\n\r\nYes, a flag like that would be very useful. Thank you. ",
"Not really in favor of that flag, as you could have weights not in the checkpoint that are not actually used, and thus still having the pipeline work.",
"Really ? But we're using `AutoModelForCausalLM` for instance. So extraneous weights can be safely ignored, but missing weights are almost always necessarily used, no ?\r\n\r\nDo you have an example of architecture where that fails? \r\n\r\nThanks for the answer, you're probably right, I just can't find an example from my experience.",
"What does \"work\" mean though? In the example above, I'm using `bert-base-uncased` in the sentiment pipeline. As @Narsil pointed out, there is no classification head but a result is returned so in some sense it \"works\" (maybe we are using different senses of \"work\"). \r\n\r\nWhat is that number? How is it being calculated? Why does it change when I re-instantiate the pipeline? Regardless of how to handle all of this, these answers are not clear from the documentation. \r\n\r\nA flag, which could default to the current behavior, would at least allow end users to have some control over this. \r\n\r\nEdit: I see your point @sgugger, and yes we are using different senses of \"work\". And this explains why the warning message doesn't explicitly say \"don't do this\" (because it may be the case that it _is_ okay to do this). Is there a way to distinguish what you had in mind from my example (which doesn't actually work even though it returns well-formatted results)? ",
"> What is that number? How is it being calculated? Why does it change when I re-instantiate the pipeline? Regardless of how to handle all of this, these answers are not clear from the documentation.\r\n\r\nBy default the classification is created randomly. Then the correct weights are placed onto your model. Since those weights are missing we just don't place them. That's why outputs change all the time. the head is different all the times.",
"The problem is that we have *a lot* of architectures and while there shouldn't be any warning in theory if everything has been coded right, I can't guarantee there is not one that shows some warning because some of the internal class variables for the weights that should be ignored in that warning are not properly set (those keys would be tensors that are not set randomly but deterministically like the `position_ids` of BERT). That's why I'm not too much in favor of erroring instead of warning.",
"Makes sense ! Thanks.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,660
| 1,660
|
NONE
| null |
### System Info
Transformers version 4.19.2
Python 3.7.13
Ubuntu 16.04.6 LTS
### Who can help?
@Narsil
I've noticed that `pipeline` returns inconsistent results, after re-instantiating it, when supplying a non-standard model. See code below.
- What is being returned and why does it change?
- What exactly does `pipeline` do when you give it a non-default model or a model not trained for the specific task?
- Since it doesn't necessarily make sense to use `bert-base-uncased` for a sentiment analysis task, should pipeline allow this? I don't get a warning or error. Is there a recommended way to tell pipeline to fail if the supplied model doesn't make sense?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
>>> from transformers import pipeline
>>> pipe = pipeline("sentiment-analysis", model="bert-base-uncased")
>>> pipe("This restaurant is awesome")
[{'label': 'LABEL_0', 'score': 0.5899267196655273}]
>>> pipe = pipeline("sentiment-analysis", model="bert-base-uncased")
>>> pipe("This restaurant is awesome")
[{'label': 'LABEL_0', 'score': 0.5623320937156677}]
>>> pipe = pipeline("sentiment-analysis", model="bert-base-uncased")
>>> pipe("This restaurant is awesome")
[{'label': 'LABEL_1', 'score': 0.5405012369155884}]
```
### Expected behavior
I would expect pipeline to either fail or give a warning message if given a model not trained for the task.
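One way to enforce this today, shown as a sketch rather than an official pipeline option: load the model yourself with the `output_loading_info=True` flag of `from_pretrained`, fail when the checkpoint is missing weights, and only then hand the vetted model to the pipeline:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model, loading_info = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", output_loading_info=True
)
# "missing_keys" lists weights that had to be randomly initialized --
# for bert-base-uncased, the classification head.
if loading_info["missing_keys"]:
    raise ValueError(f"Checkpoint is missing weights: {loading_info['missing_keys']}")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
pipe = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
```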
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18120/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18119
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18119/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18119/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18119/events
|
https://github.com/huggingface/transformers/pull/18119
| 1,302,801,259
|
PR_kwDOCUB6oc47TaSU
| 18,119
|
Better messaging and fix for incorrect shape when collating data.
|
{
"login": "CakeCrusher",
"id": 37946988,
"node_id": "MDQ6VXNlcjM3OTQ2OTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/37946988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CakeCrusher",
"html_url": "https://github.com/CakeCrusher",
"followers_url": "https://api.github.com/users/CakeCrusher/followers",
"following_url": "https://api.github.com/users/CakeCrusher/following{/other_user}",
"gists_url": "https://api.github.com/users/CakeCrusher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CakeCrusher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CakeCrusher/subscriptions",
"organizations_url": "https://api.github.com/users/CakeCrusher/orgs",
"repos_url": "https://api.github.com/users/CakeCrusher/repos",
"events_url": "https://api.github.com/users/CakeCrusher/events{/privacy}",
"received_events_url": "https://api.github.com/users/CakeCrusher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
I ran into an error related to an incorrect shape of inputs when using DataCollatorForSeq2Seq. I learned it had to do with having excessively nested inputs for my features. The error message was not particularly useful.
This PR adds an assertion checking for incorrectly shaped inputs to be collated. The assertion also suggests a solution: using the `remove_excess_nesting` util.
`remove_excess_nesting` removes excessive nesting from features within a `DatasetDict`.
Fixes #15505
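For context, this is roughly the shape problem the new assertion targets (a toy illustration with made-up values; the actual `remove_excess_nesting` util lives in this PR's diff):
```python
# Each column carries one extra list level, so the collator sees a nested
# (1, seq_len) sequence per example instead of a flat (seq_len,) one.
bad_feature = {"input_ids": [[101, 2023, 102]], "labels": [[0, 1, 0]]}

def unnest(feature):
    # Toy un-nesting: unwrap single-element outer lists, column by column.
    return {
        k: v[0] if isinstance(v, list) and len(v) == 1 and isinstance(v[0], list) else v
        for k, v in feature.items()
    }

print(unnest(bad_feature))
# {'input_ids': [101, 2023, 102], 'labels': [0, 1, 0]}
```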
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18119/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18119",
"html_url": "https://github.com/huggingface/transformers/pull/18119",
"diff_url": "https://github.com/huggingface/transformers/pull/18119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18119.patch",
"merged_at": 1658392541000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18118
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18118/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18118/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18118/events
|
https://github.com/huggingface/transformers/issues/18118
| 1,302,799,354
|
I_kwDOCUB6oc5NpyP6
| 18,118
|
Model parallelism for m2m100
|
{
"login": "elricwan",
"id": 33402371,
"node_id": "MDQ6VXNlcjMzNDAyMzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/33402371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elricwan",
"html_url": "https://github.com/elricwan",
"followers_url": "https://api.github.com/users/elricwan/followers",
"following_url": "https://api.github.com/users/elricwan/following{/other_user}",
"gists_url": "https://api.github.com/users/elricwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elricwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elricwan/subscriptions",
"organizations_url": "https://api.github.com/users/elricwan/orgs",
"repos_url": "https://api.github.com/users/elricwan/repos",
"events_url": "https://api.github.com/users/elricwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/elricwan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"We recommend using accelerate to achieve parallelisation now: https://github.com/huggingface/accelerate"
] | 1,657
| 1,658
| null |
NONE
| null |
### Model description
The translation model m2m100 proposed by Facebook is too large to train with DDP. Is there any open solution for model parallelism of m2m100, like there is for GPT2? Thank you.
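For reference, the accelerate route recommended in the comment above looks roughly like the sketch below. `model`, `optimizer` and `dataloader` are assumed to be built elsewhere, and a sharding strategy such as DeepSpeed ZeRO or FSDP is chosen through `accelerate config`:
```python
from accelerate import Accelerator

accelerator = Accelerator()
# prepare() wraps the objects for whatever distributed setup was chosen
# in `accelerate config` (plain DDP, DeepSpeed ZeRO, FSDP, ...).
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    loss = model(**batch).loss
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```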
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18118/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18117
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18117/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18117/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18117/events
|
https://github.com/huggingface/transformers/pull/18117
| 1,302,789,905
|
PR_kwDOCUB6oc47TX7Z
| 18,117
|
Add summarization name mapping for MultiNews
|
{
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Adds the `text_column` and `summary_column` names to the `summarization_name_mapping` dictionary in `run_summarization.py`. This allows a user to use the script with [MultiNews](https://huggingface.co/datasets/multi_news) without having to specify these variables explicitly. Admittedly this is a tiny change, but it benefits anyone using MultiNews with this script; the presumed entry is sketched below.
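The added entry presumably mirrors the existing pairs in the mapping; the MultiNews dataset's columns are `document` and `summary`:
```python
# Sketch of the relevant part of summarization_name_mapping in
# run_summarization.py (format inferred from the existing entries).
summarization_name_mapping = {
    "cnn_dailymail": ("article", "highlights"),
    "multi_news": ("document", "summary"),
}
```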
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18117/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18117",
"html_url": "https://github.com/huggingface/transformers/pull/18117",
"diff_url": "https://github.com/huggingface/transformers/pull/18117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18117.patch",
"merged_at": 1657714760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18116
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18116/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18116/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18116/events
|
https://github.com/huggingface/transformers/pull/18116
| 1,302,740,283
|
PR_kwDOCUB6oc47TN8s
| 18,116
|
Supported Python versions reference
|
{
"login": "CakeCrusher",
"id": 37946988,
"node_id": "MDQ6VXNlcjM3OTQ2OTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/37946988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CakeCrusher",
"html_url": "https://github.com/CakeCrusher",
"followers_url": "https://api.github.com/users/CakeCrusher/followers",
"following_url": "https://api.github.com/users/CakeCrusher/following{/other_user}",
"gists_url": "https://api.github.com/users/CakeCrusher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CakeCrusher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CakeCrusher/subscriptions",
"organizations_url": "https://api.github.com/users/CakeCrusher/orgs",
"repos_url": "https://api.github.com/users/CakeCrusher/repos",
"events_url": "https://api.github.com/users/CakeCrusher/events{/privacy}",
"received_events_url": "https://api.github.com/users/CakeCrusher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger another thing to consider is that we are only referencing the line number so the moment the file is updated, and the lines shift it will link to something else. Any fixes come to mind? Or will that do for the time being?"
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
Provides a reference to the supported python versions to get a development environment working.
Fixes #18112
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18116/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18116",
"html_url": "https://github.com/huggingface/transformers/pull/18116",
"diff_url": "https://github.com/huggingface/transformers/pull/18116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18116.patch",
"merged_at": 1657714724000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18115
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18115/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18115/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18115/events
|
https://github.com/huggingface/transformers/pull/18115
| 1,302,680,746
|
PR_kwDOCUB6oc47TBro
| 18,115
|
Add custom config to quicktour
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Oops, sorry about all the changes! How about we keep the custom config section here and rollback all the other changes, which we can discuss in a separate issue?",
"Yes please!"
] | 1,657
| 1,658
| 1,658
|
MEMBER
| null |
This PR updates the quicktour to include a section for building custom configurations that creates a randomly initialized model. Other changes include:
- Added a brief section for `Trainer`.
- Switched back to the code switcher (instead of code blocks) for some code examples which showed essentially the same thing and didn't have drastically different text associated with them. I think this will reduce the amount of scrolling and improve user experience.
- Minor maintenance work to improve conciseness.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18115/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18115",
"html_url": "https://github.com/huggingface/transformers/pull/18115",
"diff_url": "https://github.com/huggingface/transformers/pull/18115.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18115.patch",
"merged_at": 1658337783000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18114
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18114/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18114/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18114/events
|
https://github.com/huggingface/transformers/pull/18114
| 1,302,554,172
|
PR_kwDOCUB6oc47SmEN
| 18,114
|
Added a verification step to the development contribution guide
|
{
"login": "CakeCrusher",
"id": 37946988,
"node_id": "MDQ6VXNlcjM3OTQ2OTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/37946988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CakeCrusher",
"html_url": "https://github.com/CakeCrusher",
"followers_url": "https://api.github.com/users/CakeCrusher/followers",
"following_url": "https://api.github.com/users/CakeCrusher/following{/other_user}",
"gists_url": "https://api.github.com/users/CakeCrusher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CakeCrusher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CakeCrusher/subscriptions",
"organizations_url": "https://api.github.com/users/CakeCrusher/orgs",
"repos_url": "https://api.github.com/users/CakeCrusher/repos",
"events_url": "https://api.github.com/users/CakeCrusher/events{/privacy}",
"received_events_url": "https://api.github.com/users/CakeCrusher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"closing due to unrelated code"
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
Informs the user that they need a supported Python version to get a development environment working.
Fixes #18112
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18114/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18114",
"html_url": "https://github.com/huggingface/transformers/pull/18114",
"diff_url": "https://github.com/huggingface/transformers/pull/18114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18114.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18113
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18113/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18113/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18113/events
|
https://github.com/huggingface/transformers/issues/18113
| 1,302,553,289
|
I_kwDOCUB6oc5No2LJ
| 18,113
|
LayoutLMv3 image preparation code snippet does not work with PDFs
|
{
"login": "joehoover",
"id": 11277670,
"node_id": "MDQ6VXNlcjExMjc3Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11277670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joehoover",
"html_url": "https://github.com/joehoover",
"followers_url": "https://api.github.com/users/joehoover/followers",
"following_url": "https://api.github.com/users/joehoover/following{/other_user}",
"gists_url": "https://api.github.com/users/joehoover/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joehoover/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joehoover/subscriptions",
"organizations_url": "https://api.github.com/users/joehoover/orgs",
"repos_url": "https://api.github.com/users/joehoover/repos",
"events_url": "https://api.github.com/users/joehoover/events{/privacy}",
"received_events_url": "https://api.github.com/users/joehoover/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @NielsRogge ",
"Yes, feel free to improve the docs as was done for LayoutLMv2 in #15293",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,662
| 1,662
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@NielsRogge
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is not a bug per se, but I wasn't sure how else to file it. The official LayoutLMv3 Transformers documentation indicates that PDF files can be directly processed; however, they can't -- at least, not with the current code snippets.
For example, this [code snippet](https://huggingface.co/docs/transformers/model_doc/layoutlmv3#transformers.LayoutLMv3FeatureExtractor.__call__.example) has the lines:
```
from PIL import Image
image = Image.open("name_of_your_document - can be a png file, pdf, etc.").convert("RGB")
```
However, `PIL.Image` cannot open PDFs. In fact, the [Pillow documentation](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html?highlight=pdf#:~:text=.palm.-,PDF,-%23) indicates that PDFs are only writable.
Reproduction is trivial, but, for completeness:
1. Download this pdf: https://slicedinvoices.com/pdf/wordpress-pdf-invoice-plugin-sample.pdf
2. Install Pillow: `pip install pillow`
3. Run this code:
```python
from PIL import Image
image = Image.open(<path_to_invoice.pdf>).convert("RGB")
```
Expected error:
```
UnidentifiedImageError: cannot identify image file '/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf'
```
### Expected behavior
The documentation should provide a working solution for processing PDFs.
I did notice that the `__call__` implementation of the `LayoutLMv3FeatureExtractor` has an `images` argument that accepts numpy arrays and torch tensors, in addition to Image objects. So, I assume one or more of the following options is the correct workflow:
1. Read PDFs into a Python object that can be converted to a PIL.Image type.
2. Read/transform PDFs into an array as expected by the feature extractor.
3. Convert PDFs to an image and proceed with PIL.Image
However, as I'm new to document intelligence and modeling PDFs, I'll have to do some digging to identify the right solution. So, it would be nice if the documentation was updated so that others won't have to do the same.
One work-around (or solution?) is to just convert the PDF to an image, e.g.:
```python
import io
from wand.image import Image as WImage
import PIL
local_path = "/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf"
img = WImage(filename=local_path, resolution=100) # bigger
image = PIL.Image.open(io.BytesIO(img.make_blob("png"))).convert("RGB")
```
It also [looks like](https://stackoverflow.com/questions/47599012/how-to-convert-a-wand-image-object-to-numpy-array-without-opencv) Wand supports exporting to Numpy `array`.
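Another common route is the `pdf2image` package, which converts each PDF page straight to a `PIL.Image` (an alternative sketch; requires `pip install pdf2image` plus the poppler system dependency):
```python
from pdf2image import convert_from_path

pages = convert_from_path("/Users/joe/Downloads/wordpress-pdf-invoice-plugin-sample.pdf", dpi=100)
image = pages[0].convert("RGB")  # a PIL.Image, ready for the feature extractor
```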
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18113/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18112
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18112/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18112/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18112/events
|
https://github.com/huggingface/transformers/issues/18112
| 1,302,480,256
|
I_kwDOCUB6oc5NokWA
| 18,112
|
Cannot set up development environment on Python 3.10
|
{
"login": "CakeCrusher",
"id": 37946988,
"node_id": "MDQ6VXNlcjM3OTQ2OTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/37946988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CakeCrusher",
"html_url": "https://github.com/CakeCrusher",
"followers_url": "https://api.github.com/users/CakeCrusher/followers",
"following_url": "https://api.github.com/users/CakeCrusher/following{/other_user}",
"gists_url": "https://api.github.com/users/CakeCrusher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CakeCrusher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CakeCrusher/subscriptions",
"organizations_url": "https://api.github.com/users/CakeCrusher/orgs",
"repos_url": "https://api.github.com/users/CakeCrusher/repos",
"events_url": "https://api.github.com/users/CakeCrusher/events{/privacy}",
"received_events_url": "https://api.github.com/users/CakeCrusher/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"As the error clearly mentions, this is because the ray package does not offer a distribution for Python 3.10, so I would open the issue there :-)",
"As a matter of GitHub etiquette, pinging random people ain't cool.",
"@sgugger yeah i forgot to mention that to resolve it you need to downgrade as mentioned in this issue https://github.com/ray-project/tune-sklearn/issues/169 but I figured idealy you would want development to work on any python distribution\r\n\r\n@aaugustin \r\n[](https://github.com/huggingface/transformers/commit/3233b58ad4aceb9d048b3c48cad44ef526470b53)\r\n",
"Exactly my point. Just because someone did something 3 years ago doesn't mean you can ping them. If I want to contribute to Hugging Face Transformers for free, I'll follow the repo!"
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Windows-10-10.0.19043-SP0
- Python version: 3.10.2
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@aaugustin @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Under the same environment conditions as above
2. go through the steps described in https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests
3. when you run `pip install -e ".[dev]"`, you will see the following error:
> ERROR: Could not find a version that satisfies the requirement ray[tune]; extra == "dev" (from transformers[dev]) (from versions: none)
ERROR: No matching distribution found for ray[tune]; extra == "dev"
the full traceback:
> Obtaining file:///C:/Projects/transformers
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Requirement already satisfied: filelock in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (3.7.1)
Requirement already satisfied: packaging>=20.0 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (21.3)
Requirement already satisfied: pyyaml>=5.1 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (6.0)
Collecting regex!=2019.12.17
Using cached regex-2022.7.9-cp310-cp310-win_amd64.whl (262 kB)
Requirement already satisfied: tqdm>=4.27 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (4.64.0)
Requirement already satisfied: numpy>=1.17 in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (1.23.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.1.0 in c:\projects\transformers\env\lib\site-packages
(from transformers==4.21.0.dev0) (0.8.1)
Requirement already satisfied: requests in c:\projects\transformers\env\lib\site-packages (from transformers==4.21.0.dev0) (2.28.1)
Collecting tokenizers!=0.11.3,<0.13,>=0.11.1
Using cached tokenizers-0.12.1-cp310-cp310-win_amd64.whl (3.3 MB)
Collecting phonemizer
Using cached phonemizer-3.2.1-py3-none-any.whl (90 kB)
Collecting tensorflow>=2.3
Using cached tensorflow-2.9.1-cp310-cp310-win_amd64.whl (444.1 MB)
Collecting dill<0.3.5
Using cached dill-0.3.4-py2.py3-none-any.whl (86 kB)
Collecting sentencepiece!=0.1.92,>=0.1.91
Using cached sentencepiece-0.1.96-cp310-cp310-win_amd64.whl (1.1 MB)
Collecting onnxconverter-common
Using cached onnxconverter_common-1.9.0-py2.py3-none-any.whl (78 kB)
Collecting pyctcdecode>=0.3.0
Using cached pyctcdecode-0.3.0-py2.py3-none-any.whl (43 kB)
Collecting ipadic<2.0,>=1.0.0
Using cached ipadic-1.0.0.tar.gz (13.4 MB)
Collecting torchaudio
Using cached torchaudio-0.12.0-cp310-cp310-win_amd64.whl (969 kB)
Collecting unidic-lite>=1.0.7
Using cached unidic-lite-1.0.8.tar.gz (47.4 MB)
Collecting sigopt
Using cached sigopt-8.5.0-py2.py3-none-any.whl (182 kB)
Collecting timeout-decorator
Using cached timeout-decorator-0.5.0.tar.gz (4.8 kB)
Collecting fugashi>=1.0
Using cached fugashi-1.1.2-cp310-cp310-win_amd64.whl (497 kB)
Collecting protobuf<=3.20.1
Using cached protobuf-3.20.1-cp310-cp310-win_amd64.whl (903 kB)
Collecting hf-doc-builder>=0.3.0
Using cached hf_doc_builder-0.3.0-py3-none-any.whl (56 kB)
Collecting flake8>=3.8.3
Using cached flake8-4.0.1-py2.py3-none-any.whl (64 kB)
Collecting cookiecutter==1.7.3
Using cached cookiecutter-1.7.3-py2.py3-none-any.whl (34 kB)
Collecting tf2onnx
Using cached tf2onnx-1.11.1-py3-none-any.whl (440 kB)
Collecting parameterized
Using cached parameterized-0.8.1-py2.py3-none-any.whl (26 kB)
Collecting pytest-xdist
Using cached pytest_xdist-2.5.0-py3-none-any.whl (41 kB)
Collecting unidic>=1.0.2
Using cached unidic-1.1.0.tar.gz (7.7 kB)
Collecting sacrebleu<2.0.0,>=1.4.12
Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB)
ERROR: Could not find a version that satisfies the requirement ray[tune]; extra == "dev" (from transformers[dev]) (from versions: none)
ERROR: No matching distribution found for ray[tune]; extra == "dev"
WARNING: You are using pip version 21.2.4; however, version 22.1.2 is available.
You should consider upgrading via the 'C:\Projects\transformers\env\Scripts\python.exe -m pip install --upgrade pip' command.
(this error stalls the process significantly, to the extent that I couldn't run the tests as a result)
### Expected behavior
When running this command on Python 3.10, the whole development setup should run without errors, just as it does with `Python 3.8.8`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18112/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18111
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18111/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18111/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18111/events
|
https://github.com/huggingface/transformers/issues/18111
| 1,302,258,827
|
I_kwDOCUB6oc5NnuSL
| 18,111
|
Word offsets of some fast tokenizers are not compatible with token classification pipeline label aggregation
|
{
"login": "davidbenton",
"id": 1603279,
"node_id": "MDQ6VXNlcjE2MDMyNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1603279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidbenton",
"html_url": "https://github.com/davidbenton",
"followers_url": "https://api.github.com/users/davidbenton/followers",
"following_url": "https://api.github.com/users/davidbenton/following{/other_user}",
"gists_url": "https://api.github.com/users/davidbenton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidbenton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidbenton/subscriptions",
"organizations_url": "https://api.github.com/users/davidbenton/orgs",
"repos_url": "https://api.github.com/users/davidbenton/repos",
"events_url": "https://api.github.com/users/davidbenton/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidbenton/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thank you very much for the detailed issue, that's a good point! \r\n\r\n> I know a lot of the default configuration matches reference implementations or published research, so I'm not sure where inconsistencies between tokenizers are desired behavior. I did notice, for example, that some sentencepiece tokenizers include leading spaces in offset indices (DeBERTa V2/3), and some don't (Albert, XLNet). I looked at the converter config and the rust code (which is pretty opaque to me), but it's not obvious to me why the offsets are different. Do you know, @SaulLu? Is that expected?\r\n\r\nI reassure you, it is not obvious to me either why the offsets are different :smile: . In principle I think it's not a problem that the default value is different. But the problem is that for the moment, for many tokenizers it is not possible to change this value. Technically for 2 reasons: 1) for some we don't expose the argument at init and 2) for others, even if we did we couldn't change it some processors such as `Templateprocessor` don't allow to set it (I think it's the case for deberta).\r\n\r\nI'll ping you in particular Narsil: \r\n1. do you think it's worth changing the heuristic for tokenizers that have `trim_offsets` set to False? I think you're the more knowledgeable for this :smile: \r\n2. does it make sense in principle to allow this argument to be set for all tokenizers that use a rust component that allows you to choose whether you want to trim the offsets or not? I have the impression that the NER use case is one of the main use cases for offsets in general\r\n3. If you agree with the previous point, we will be blocked by the fact that Deberta doesn't use the `bytelevel` processor but the `TemplateProcessor` (which doesn't allow to choose trim_offset). We can surely leverage the processor sequence feature that @mishig25 is doing inside the tokenizers library https://github.com/huggingface/tokenizers/pull/1005 to solve it.\r\n\r\nNote to @Narsil , I'm using \"deberta-base\" to reproduce the issue.",
"I agree that different defaults are not a problem, and it would be great if `trim_offsets` was configurable on more tokenizers. Maybe with a user warning if specifying it on an unsupported tokenizer?\r\n\r\nI'll make a case briefly for updating the heuristic: It would be good if more people tried non-Bert models out for their tasks. For new users, the pipelines and associated doc recipes with a fine-tuned hub model can be a \"hello world\" for trying out transformers. Having the word heuristic work with default settings (without having to know about `trim_offsets`) for most models will mean more users have early success when they venture away from Bertland.",
"> f you agree with the previous point, we will be blocked by the fact that Deberta doesn't use the bytelevel processor but the TemplateProcessor (which doesn't allow to choose trim_offset). We can surely leverage the processor sequence feature that @mishig25 is doing inside the tokenizers library\r\n\r\nThis is the long term solution. As what's the good default for the tokenizer, it's not really up to me to decide, but the tokenizer's creator. I really like not trimming offets, and tokenizers like gpt2 which treat the *full string* as something that has to be fed to the model, meaning spaces have to be somewhere. This is a really great feature as it alleviates lots of headaches about \"skipped\" spaces, decoding issues and so on. But not all tokenizers are created equal and it's sometime more convenient to trim offsets (for whatever reason) or even it was just done that way in the original implem.\r\n\r\n> I have the impression that the NER use case is one of the main use cases for offsets in general\r\n\r\nNER, POS, question-answering, and even mask filling when you want to make a correct replacement and don't have access to the ids. ANY task, which has to treat the original string really. It's also super helpful to debug whenever ids are *incorrect* and you want to know why (like weird unicode looking like ascii but screwing your results).\r\n\r\n> I'll make a case briefly for updating the heuristi\r\n\r\n@davidbenton I really like the idea of updating the heuristic. As long as it's clear it's a heuristic and not a *real* solution (aggregation_mode=\"simple\" is the only real solution IMHO, others are workaround to poorly performing models adding extra bias)\r\n\r\nThe heuristic should be simple and elegant, your current solution is ! So I really like that fix. I think we can make it even simpler and commented directly on the diff .\r\nNow to make it become a PR, I think it's really important is to add the slow test which is exactly shown here above.\r\n\r\nThen add a fast test which deals only with the aggregation strategy (we can extract the actual entities from the slow test to have a real example), and add it as a fast test (so any regression is detected early on).\r\nAll the other tests should help cover any regression this heuristic change might introduce (I hope it doesn't).\r\n\r\nDoes that strategy seem viable ? Would you have time to set up such a PR ? \r\nCheers !!\r\n\r\nAnd thanks for bringing that up, if there's room for improvement in the pipelines I am all up for it. (But I am relatively convinced, that it's impossible to be 100% correct as \"words\" are ill defined in some languages ;)) ",
"Thanks a lot for your feedback @Narsil! Super good points!\r\n\r\nFor the heuristics improvement, couldn't we test if the `trim_offset` argument is defined in the processor of the `backend_tokenizer`, and if:\r\n- yes and it is set to True keep the old logic\r\n- otherwise use your logic @davidbenton that tests the first character as you suggest? \r\n\r\n(I share the same conviction as you @Narsil that it's impossible to be 100% correct as \"words\" are ill-defined in some languages )",
"`trim_offsets` cannot be linked the heuristic IMO.\r\n\r\nThe heuristic is just trying to determine if what we're looking at is a \"word\". currently it only looks like if the previous character before the offset ends with a space. But prefix space could also exist so checking that the first character (in the original string) corresponding this token is a space is also valid IMO.\r\nAgain extremely biased towards space separated language, but working.\r\n\r\nI may have to dive and see really what the issue is, but this is my current understanding without exactly looking at the issue in detail.",
"> trim_offsets cannot be linked the heuristic IMO.\r\n\r\nI understand your point, I also realize that my proposal would not be adapted to tokenizers that have a pre-tokenizer that splits on spaces and removes them! \r\n ",
"> I understand your point, I also realize that my proposal would not be adapted to tokenizers that have a pre-tokenizer that splits on spaces and removes them!\r\n\r\nOh it's perfectly fine if that's the desired behavior we want. But I don't think we should bend backwards to make que QA pipeline work within a mode where it tries to recover because a model doesn't work properly ;). And the tokenizer cannot provide \"words\" boundaries (because it just wasn't made that way)",
"I'm not sure how to read your answer ahah. The tokenizer I have in mind is for example Bert's: Bert's tokenizer doesn't have trim_offset set to True, but the spaces are removed during the pre-tokenization step and the \"words\" boundaries are built the other way by adding \"##\" to the token that doesn't start a word.",
"Thanks for the comments and direction! I'm sorry to let this go stale, but I had a family emergency and this dropped off my radar. I should be available going forward, and I'll work on adding the tests mentioned above and set up a PR.\r\n\r\n@Narsil I answered your question about the heuristic in context on my commit.\r\n\r\n",
"@SaulLu I'll note, relating to your suggestion to branch on `trim_offset`, that as my suggested heuristic works now, the logic is unchanged for models that do not tokenize whitespace, as it only checks in the decoded token for a leading space.",
"@davidbenton Perfect. I think you can do the modifications, and we would really benefit if there was a test making sure that the new heuristic works.\r\n\r\nFor instance here is a slow test to test the heuristics for spanish:\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/tests/pipelines/test_pipelines_token_classification.py#L200-L215",
"I have a couple tests added in my local wip, but it looks like there might be a borken pipeline test prior to my changes. The \"UN\" start/end entity offsets don't seem to match the input sequence on [this line](https://github.com/huggingface/transformers/blob/51227e26ab8fe6d1a19804da697786649f9340e3/tests/pipelines/test_pipelines_token_classification.py#L289), along with a few other diffs. @Narsil Is this expected, or should I be looking for green (or skipped) tests before I create a PR?\r\n\r\n(FYI I'm running `RUN_SLOW=1 RUN_PIPELINE_TESTS=yes pytest tests/pipelines/test_pipelines_token_classification.py`; will also run the full suite once that's looking good.)",
"@davidbenton ,\r\n\r\nIf all **fast** tests pass, you should be fine. They are tested on every commit, so they should be green.\r\n\r\nFor slow tests, we run them before releases and in controlled environments, they are sometimes affected by `torch` version or `python` version. Usually the differences are minor so we can decide how to deal with them on a case-by-case basis.\r\n\r\nBut for a PR it shouldn't be blocking (that's why we try to have good fast tests, as they are run very often, the slow tests are more like integration tests, usually when we need an actual trained model output to showcase something)",
"Yeah, that all makes sense. Are we sure slow pipeline tests are ~~running~~ being run? That test I linked has start/end offsets that seem to be incorrect (past the end of the input). I just wanted to flag that, but I'll go ahead and get that PR up too.",
"Thanks for flagging, I am looking into it right now :)",
"@davidbenton what's your environement ? I can't seem to reproduce on my local env\r\n\r\nDo you mind creating a new issue for this ? Report it like a regular bug, there should be tools to print your exact env.\r\nhttps://github.com/huggingface/transformers/issues/new?assignees=&labels=bug&template=bug-report.yml\r\n\r\nAs I said, slow tests can be sometimes a little more flaky that fast tests, but usually within acceptable bounds (pytorch will modify kernels which affects ever so slightly values, but it can pile up, Python version can break dictionary order etc..)"
] | 1,657
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.21.0.dev0
- Platform: macOS-12.4-x86_64-i386-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.5.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: N
- Using distributed or parallel set-up in script?: N
### Who can help?
Tagging @Narsil for pipelines and @SaulLu for tokenization. Let me know if I should tag anyone for specific models, but it's not really a model issue, except in terms of tokenization.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I noticed this issue with a DeBERTa model, but it also affects some others. The high-level issue is that some tokenizers include leading spaces in the offset indices, some exclude them, and some are configurable with `trim_offsets`. When offsets include leading spaces (equivalent to `trim_offsets==False`), the pipeline [word heuristic](https://github.com/huggingface/transformers/blob/afe5d42d8d1d80af911ed980c2936bfe887078f6/src/transformers/pipelines/token_classification.py#L294) doesn't work. The result is that all tokens in the sequence are aggregated into a single label. Simple example:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name = "brandon25/deberta-base-finetuned-ner"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
ner_aggregate = pipeline("ner", model=model, tokenizer=tokenizer, ignore_labels=[], aggregation_strategy="max")
ner_aggregate("We're from New York")
```
Result:
```
[{'entity_group': 'O', 'score': 0.9999778, 'word': " We're from New York", 'start': 0, 'end': 19}]
```
### Expected behavior
Expected result, something like:
```
[{'entity_group': 'O', 'score': 0.9999778, 'word': " We're from", 'start': 0, 'end': 10}, {'entity_group': 'O', 'score': 0.9xxx, 'word': "New York", 'start': 11, 'end': 19}]
```
If you'd like to see actual output, here's a [colab notebook with relevant models](https://colab.research.google.com/drive/1bcWotnqSPNIuAaRNkELKmKiLQheudHu1?usp=sharing) for comparison.
This affects at least these:
- DeBERTa V1
- DeBERTa V2/3
- GPT2 (tested because `DebertaTokenizerFast` is a subclass of `GPT2TokenizerFast`)
- Depending on config, Roberta (and any other tokenizer that honors `trim_offsets==False`)
The easiest solution would be to update the heuristic. [Here is a change](https://github.com/davidbenton/transformers/commit/5c43c63d401f80818d95e9cafb627607680f4dff) that works for a preceding space in the sequence (like the current heuristic) _or_ a leading space in the token. I can turn it into a PR if desired.
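For illustration, here is a minimal sketch of that adjusted check. The variable names only approximate the pipeline internals and are not the exact diff:

```python
sentence = "We're from New York"
start_ind, end_ind = 10, 14  # hypothetical *untrimmed* offsets for the token " New"
word_ref = sentence[start_ind:end_ind]  # slice of the original text for this token

# Current-style check: a token starts a new word when the character just
# before its span is a space.
starts_word = start_ind == 0 or sentence[start_ind - 1] == " "

# Extension: with untrimmed offsets (trim_offsets == False) the leading space
# sits *inside* the span, so a span starting with a space also starts a word.
starts_word = starts_word or word_ref.startswith(" ")
is_subword = not starts_word
```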
I know a lot of the default configuration matches reference implementations or published research, so I'm not sure where inconsistencies between tokenizers are desired behavior. I did notice, for example, that some sentencepiece tokenizers include leading spaces in offset indices (DeBERTa V2/3), and some don't (Albert, XLNet). I looked at the converter config and the rust code (which is pretty opaque to me), but it's not obvious to me why the offsets are different. Do you know, @SaulLu? Is that expected?
I am comparing different architectures to replace a production Bert model and was evaluating models fine-tuned on an internal dataset when I ran into this. I have my manager's blessing to spend some time on this (and already have! 😂), so I'm happy to work on a PR or help out however I can.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18111/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18110
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18110/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18110/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18110/events
|
https://github.com/huggingface/transformers/pull/18110
| 1,302,221,163
|
PR_kwDOCUB6oc47Rcvr
| 18,110
|
TF: `unpack_inputs` decorator independent from `main_input_name`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
MEMBER
| null |
# What does this PR do?
As the title indicates -- the `unpack_inputs` decorator becomes independent from `main_input_name` in this PR.
The old `input_processing` included some checks for `input_ids`, which somewhat transitioned into the `unpack_inputs` decorator (the `input_ids` input was obtained there from the argument under `main_input_name`). However, in practice, it is not needed -- what we want is to support the case where all model arguments come packed in the first input, which happens to be `main_input_name` for most use cases of `unpack_inputs`. Note that Keras often expects this packing behavior with input dictionaries, which `input_processing` maps back to our expected format.
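As a hedged illustration of that packing behavior (the checkpoint name and tensors below are only placeholders):

```python
import tensorflow as tf
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("distilbert-base-uncased")
batch = {"input_ids": tf.constant([[101, 2023, 102]]),
         "attention_mask": tf.constant([[1, 1, 1]])}

# Keras-style call: every argument packed into the first positional input...
packed_out = model(batch)
# ...should be equivalent to passing the same tensors as keyword arguments.
unpacked_out = model(input_ids=batch["input_ids"],
                     attention_mask=batch["attention_mask"])
```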
Fixes #18040
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18110/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18110",
"html_url": "https://github.com/huggingface/transformers/pull/18110",
"diff_url": "https://github.com/huggingface/transformers/pull/18110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18110.patch",
"merged_at": 1657705422000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18109
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18109/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18109/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18109/events
|
https://github.com/huggingface/transformers/issues/18109
| 1,302,174,715
|
I_kwDOCUB6oc5NnZv7
| 18,109
|
"ValueError: initial_value must be specified." error when compiling bert for text classification
|
{
"login": "djellalmohamedaniss",
"id": 24865594,
"node_id": "MDQ6VXNlcjI0ODY1NTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/24865594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djellalmohamedaniss",
"html_url": "https://github.com/djellalmohamedaniss",
"followers_url": "https://api.github.com/users/djellalmohamedaniss/followers",
"following_url": "https://api.github.com/users/djellalmohamedaniss/following{/other_user}",
"gists_url": "https://api.github.com/users/djellalmohamedaniss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/djellalmohamedaniss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/djellalmohamedaniss/subscriptions",
"organizations_url": "https://api.github.com/users/djellalmohamedaniss/orgs",
"repos_url": "https://api.github.com/users/djellalmohamedaniss/repos",
"events_url": "https://api.github.com/users/djellalmohamedaniss/events{/privacy}",
"received_events_url": "https://api.github.com/users/djellalmohamedaniss/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 @gante ",
"Hi @djellalmohamedaniss, your version of TF is quite old - our support for TF 2.3 is very shaky, and we prefer TF >= 2.4, and TF 2.8 or 2.9 are even better!\r\n\r\nCan you check with a more recent version of TF and let us know if the problem still exists?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
I'm following the Hugging Face tutorial for sequence classification, and while trying to fine-tune 'distilbert-base-multilingual-cased' I get the following error when running the model.compile() method.
I'm having the same error when using bert-uncased.
```
ValueError Traceback (most recent call last)
<ipython-input-22-68f5487774e6> in <module>
----> 1 model.compile(optimizer=optimizer)
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in compile(self, optimizer, loss, metrics, loss_weights, weighted_metrics, run_eagerly, steps_per_execution, **kwargs)
1035 run_eagerly=run_eagerly,
1036 experimental_steps_per_execution=steps_per_execution,
-> 1037 **kwargs,
1038 )
1039
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, weighted_metrics, run_eagerly, **kwargs)
547 experimental_steps_per_execution = kwargs.pop(
548 'experimental_steps_per_execution', 1)
--> 549 self._configure_steps_per_execution(experimental_steps_per_execution)
550
551 # Initializes attrs that are reset each time `compile` is called.
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
455 self._self_setattr_tracking = False # pylint: disable=protected-access
456 try:
--> 457 result = method(self, *args, **kwargs)
458 finally:
459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _configure_steps_per_execution(self, steps_per_execution)
581 steps_per_execution,
582 dtype='int64',
--> 583 aggregation=variables.VariableAggregationV2.ONLY_FIRST_REPLICA)
584
585 @property
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
260 return cls._variable_v1_call(*args, **kwargs)
261 elif cls is Variable:
--> 262 return cls._variable_v2_call(*args, **kwargs)
263 else:
264 return super(VariableMetaclass, cls).__call__(*args, **kwargs)
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in _variable_v2_call(cls, initial_value, trainable, validate_shape, caching_device, name, variable_def, dtype, import_scope, constraint, synchronization, aggregation, shape)
254 synchronization=synchronization,
255 aggregation=aggregation,
--> 256 shape=shape)
257
258 def __call__(cls, *args, **kwargs):
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in getter(**kwargs)
65
66 def getter(**kwargs):
---> 67 return captured_getter(captured_previous, **kwargs)
68
69 return getter
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py in creator(next_creator, **kwargs)
2855 def creator(next_creator, **kwargs):
2856 _require_strategy_scope_strategy(strategy)
-> 2857 return next_creator(**kwargs)
2858
2859 self._var_creator_scope = variable_scope.variable_creator_scope(creator)
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in <lambda>(**kws)
235 shape=None):
236 """Call on Variable class. Useful to force the signature."""
--> 237 previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
238 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access
239 previous_getter = _make_getter(getter, previous_getter)
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py in default_variable_creator_v2(next_creator, **kwargs)
2644 synchronization=synchronization,
2645 aggregation=aggregation,
-> 2646 shape=shape)
2647
2648
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
262 return cls._variable_v2_call(*args, **kwargs)
263 else:
--> 264 return super(VariableMetaclass, cls).__call__(*args, **kwargs)
265
266
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape)
1516 aggregation=aggregation,
1517 shape=shape,
-> 1518 distribute_strategy=distribute_strategy)
1519
1520 def _init_from_args(self,
/tf/Raph/jupyter/tf2-37/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape)
1594 synchronization, aggregation, trainable, name))
1595 if initial_value is None:
-> 1596 raise ValueError("initial_value must be specified.")
1597 init_from_fn = callable(initial_value)
1598
ValueError: initial_value must be specified.
```
tensorflow version: 2.3.0
transformers version: 4.20.1
Python version: 3.7
Any ideas? Thanks in advance.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, create_optimizer

model_name = 'distilbert-base-multilingual-cased'
model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
batch_size = 16
num_epochs = 5
batches_per_epoch = len(train) // batch_size
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
```
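As a hedged aside, based on the maintainers' reply in the comments (TF 2.3 support is shaky; TF >= 2.4 is preferred), a quick version sanity check before compiling might look like this, reusing `model`, `optimizer`, and `loss` from the snippet above:

```python
import tensorflow as tf

# The traceback originates in TF 2.3 internals; upgrading TensorFlow
# (e.g. `pip install --upgrade "tensorflow>=2.4"`) is the suggested fix.
major, minor = (int(v) for v in tf.__version__.split(".")[:2])
assert (major, minor) >= (2, 4), f"TF {tf.__version__} is too old for this workflow"
model.compile(optimizer=optimizer, loss=loss)
```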
### Expected behavior
a compilation without an error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18109/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18108
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18108/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18108/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18108/events
|
https://github.com/huggingface/transformers/pull/18108
| 1,301,903,228
|
PR_kwDOCUB6oc47QX9J
| 18,108
|
CLI: reenable `pt_to_tf` test
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
MEMBER
| null |
# What does this PR do?
Reenables the `pt_to_tf` CLI test that was disabled a few days ago.
⚠️ Before this PR is merged, [this](https://huggingface.co/hf-internal-testing/tiny-random-gptj/discussions/1) hub PR must be merged, and CI must be rerun. This test model is also used in `tests/deepspeed/test_model_zoo.py`; I'm not sure whether the change will have implications there.
The problem was not due to code, but rather to problems in the config file of the test model. When `config.rotary_dim` is larger than `self.head_dim` (which is `config.hidden_size` divided by `config.num_attention_heads`), then we try to slice out of bounds in some tensors, causing downstream dimension-related exceptions -- e.g. [slicing here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gptj/modeling_tf_gptj.py#L229).
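Concretely, the constraint the broken test config violated can be sketched as follows (the model name matches the test repo referenced above; the assertion is illustrative):

```python
from transformers import GPTJConfig

config = GPTJConfig.from_pretrained("hf-internal-testing/tiny-random-gptj")

# Slices like tensor[..., : config.rotary_dim] only stay in bounds when
# rotary_dim fits inside each attention head's dimension.
head_dim = config.hidden_size // config.num_attention_heads
assert config.rotary_dim <= head_dim, "rotary_dim must not exceed head_dim"
```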
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18108/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18108",
"html_url": "https://github.com/huggingface/transformers/pull/18108",
"diff_url": "https://github.com/huggingface/transformers/pull/18108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18108.patch",
"merged_at": 1657629486000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18107
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18107/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18107/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18107/events
|
https://github.com/huggingface/transformers/issues/18107
| 1,301,764,556
|
I_kwDOCUB6oc5Nl1nM
| 18,107
|
Seeking help: error when loading a BERT model
|
{
"login": "aaronzhangTechGeek",
"id": 109060333,
"node_id": "U_kgDOBoAg7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/109060333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronzhangTechGeek",
"html_url": "https://github.com/aaronzhangTechGeek",
"followers_url": "https://api.github.com/users/aaronzhangTechGeek/followers",
"following_url": "https://api.github.com/users/aaronzhangTechGeek/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronzhangTechGeek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronzhangTechGeek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronzhangTechGeek/subscriptions",
"organizations_url": "https://api.github.com/users/aaronzhangTechGeek/orgs",
"repos_url": "https://api.github.com/users/aaronzhangTechGeek/repos",
"events_url": "https://api.github.com/users/aaronzhangTechGeek/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronzhangTechGeek/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"please help guys! thanks so much in advance!",
"what is your transformers version?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### System Info
OS: Windows-10-10.0.19041-SP0
Python: 3.8.3
PyTorch: 1.12.0+cpu
TensorFlow: 2.6.0
HanLP: 2.1.0-beta.36
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I tried the 2 options:
1. recog = hanlp.load('MSRA_NER_BERT_BASE_ZH')
OR
2. recog = hanlp.load(hanlp.pretrained.ner.MSRA_NER_BERT_BASE_ZH)
Error Log:
>>> import hanlp
>>> recognizer = hanlp.load(hanlp.pretrained.ner.MSRA_NER_BERT_BASE_ZH)
2022-07-12 16:32:32.871385: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-07-12 16:32:32.871563: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-07-12 16:32:44.634072: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2022-07-12 16:32:44.634254: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-07-12 16:32:44.654910: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: SZH-C-000XF
2022-07-12 16:32:44.655263: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: SZH-C-000XF
2022-07-12 16:32:51.161833: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Failed to load https://file.hankcs.com/hanlp/ner/ner_bert_base_msra_20211227_114712.zip.
If the problem still persists, please submit an issue to https://github.com/hankcs/HanLP/issues
When reporting an issue, make sure to paste the FULL ERROR LOG below.
================================ERROR LOG BEGINS================================
OS: Windows-10-10.0.19041-SP0
Python: 3.8.3
PyTorch: 1.12.0+cpu
TensorFlow: 2.6.0
HanLP: 2.1.0-beta.36
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\__init__.py", line 43, in load
return load_from_meta_file(save_dir, 'meta.json', verbose=verbose, **kwargs)
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\utils\component_util.py", line 175, in load_from_meta_file
raise e from None
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\utils\component_util.py", line 99, in load_from_meta_file
obj.load(save_dir, verbose=verbose, **kwargs)
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\common\keras_component.py", line 214, in load
self.build(**merge_dict(self.config, training=False, logger=logger, **kwargs, overwrite=True, inplace=True))
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\common\keras_component.py", line 224, in build
self.model = self.build_model(**merge_dict(self.config, training=kwargs.get('training', None),
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\components\taggers\transformers\transformer_tagger_tf.py", line 34, in build_model
model, tokenizer = build_transformer(transformer, max_seq_length, len(self.transform.tag_vocab), tagging=True)
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\layers\transformers\loader_tf.py", line 11, in build_transformer
tokenizer = AutoTokenizer_.from_pretrained(transformer)
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\hanlp\layers\transformers\pt_imports.py", line 68, in from_pretrained
tokenizer = cls.from_pretrained(get_tokenizer_mirror(transformer), use_fast=use_fast,
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\transformers\models\auto\tokenization_auto.py", line 535, in from_pretrained
config = AutoConfig.from_pretrained(
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\transformers\models\auto\configuration_auto.py", line 705, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\transformers\configuration_utils.py", line 553, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\AAH2SZH\AppData\Roaming\Python\Python38\site-packages\transformers\configuration_utils.py", line 641, in _get_config_dict
raise EnvironmentError(
OSError: Can't load config for 'bert-base-chinese'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-chinese' is the correct path to a directory containing a config.json file
### Expected behavior
fix the error
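A hedged first debugging step (assuming the machine can reach the Hugging Face Hub) is to check whether the underlying config resolves at all outside HanLP:

```python
from transformers import AutoConfig

# If this raises the same OSError, the cause is connectivity, a proxy, or a
# local directory shadowing the model name, rather than HanLP itself.
config = AutoConfig.from_pretrained("bert-base-chinese")
print(config.model_type)
```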
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18107/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18106
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18106/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18106/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18106/events
|
https://github.com/huggingface/transformers/pull/18106
| 1,301,484,986
|
PR_kwDOCUB6oc47O_kk
| 18,106
|
speed up Nezha model tests
|
{
"login": "sijunhe",
"id": 11987277,
"node_id": "MDQ6VXNlcjExOTg3Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11987277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sijunhe",
"html_url": "https://github.com/sijunhe",
"followers_url": "https://api.github.com/users/sijunhe/followers",
"following_url": "https://api.github.com/users/sijunhe/following{/other_user}",
"gists_url": "https://api.github.com/users/sijunhe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sijunhe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sijunhe/subscriptions",
"organizations_url": "https://api.github.com/users/sijunhe/orgs",
"repos_url": "https://api.github.com/users/sijunhe/repos",
"events_url": "https://api.github.com/users/sijunhe/events{/privacy}",
"received_events_url": "https://api.github.com/users/sijunhe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @sijunhe, I can confirm that this does significantly speed up the tests:\r\n\r\n```\r\nslowest durations\r\n5.00s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_torch_fx_output_loss\r\n4.70s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_torch_fx\r\n3.61s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_outputs_equivalence\r\n1.71s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_attention_outputs\r\n1.69s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_fast_init_to_base\r\n1.68s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_fast_init_from_base\r\n1.06s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_training\r\n1.03s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_integration\r\n0.95s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load\r\n0.90s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_save_load_from_pretrained\r\n0.83s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_feed_forward_chunking\r\n0.77s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_correct_missing_keys\r\n0.74s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_hidden_states_output\r\n0.56s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_load_with_mismatched_shapes\r\n0.53s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_training_gradient_checkpointing\r\n0.51s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_resize_tokens_embeddings\r\n0.47s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_headmasking\r\n0.46s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_tie_model_weights\r\n0.45s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_save_load_from_config_init\r\n0.45s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_forward_signature\r\n0.43s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning\r\n0.43s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_keys_to_ignore_on_save\r\n0.42s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_determinism\r\n0.41s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_gradient_checkpointing_backward_compatibility\r\n0.39s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_inputs_embeds\r\n0.39s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_resize_embeddings_untied\r\n0.38s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_gradient_checkpointing_enable_disable\r\n0.36s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_initialization\r\n0.35s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_common_attributes\r\n0.18s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_problem_types\r\n0.11s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_as_decoder\r\n0.10s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_as_decoder_with_default_input_mask\r\n0.06s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model\r\n0.06s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_for_masked_lm\r\n0.05s call 
tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_for_sequence_classification\r\n0.05s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_for_multiple_choice\r\n0.05s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_retain_grad_hidden_states_attentions\r\n```\r\n\r\n",
"Thanks for fixing them!"
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR speeds up Nezha tests, which was pointed out by @sgugger to be slow. On my machine, this change speeds up the test by about 80% (~160s -> ~20s). I think we should merge this instead of #18103.
<!-- Remove if not applicable -->
Fixes #18103
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18106/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18106",
"html_url": "https://github.com/huggingface/transformers/pull/18106",
"diff_url": "https://github.com/huggingface/transformers/pull/18106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18106.patch",
"merged_at": 1657614509000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18105
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18105/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18105/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18105/events
|
https://github.com/huggingface/transformers/pull/18105
| 1,301,382,482
|
PR_kwDOCUB6oc47OqXX
| 18,105
|
Add support for Sagemaker Model Parallel >= 1.10 new checkpoint API
|
{
"login": "viclzhu",
"id": 20961977,
"node_id": "MDQ6VXNlcjIwOTYxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/20961977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viclzhu",
"html_url": "https://github.com/viclzhu",
"followers_url": "https://api.github.com/users/viclzhu/followers",
"following_url": "https://api.github.com/users/viclzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/viclzhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/viclzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viclzhu/subscriptions",
"organizations_url": "https://api.github.com/users/viclzhu/orgs",
"repos_url": "https://api.github.com/users/viclzhu/repos",
"events_url": "https://api.github.com/users/viclzhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/viclzhu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for iterating on this PR! It looks like a lot of CI failures are fixed on the mian branch. Could you do a quick rebase so we can make sure this PR does not break anything?",
"It looks like GitHub did not like this rebase as it know shows 290 files changed. Could you open a clean new PR after fixing the merge conflicts?",
"> It looks like GitHub did not like this rebase as it know shows 290 files changed. Could you open a clean new PR after fixing the merge conflicts?\r\n\r\nYeah, will do!",
"New clean PR at #18221!"
] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds support for Sagemaker Model Parallel >= 1.10's new checkpoint API as well as keeping SMP < 1.10 functionality.
* Support loading checkpoints saved with SMP < 1.10 in SMP < 1.10 and SMP >= 1.10
* Support loading checkpoints saved with SMP >= 1.10 in SMP >= 1.10
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18105/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18105",
"html_url": "https://github.com/huggingface/transformers/pull/18105",
"diff_url": "https://github.com/huggingface/transformers/pull/18105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18105.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18104
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18104/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18104/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18104/events
|
https://github.com/huggingface/transformers/issues/18104
| 1,301,377,146
|
I_kwDOCUB6oc5NkXB6
| 18,104
|
gpt2 results with past_key_values not the same as when computed from scratch
|
{
"login": "IanMagnusson",
"id": 40903802,
"node_id": "MDQ6VXNlcjQwOTAzODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/40903802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IanMagnusson",
"html_url": "https://github.com/IanMagnusson",
"followers_url": "https://api.github.com/users/IanMagnusson/followers",
"following_url": "https://api.github.com/users/IanMagnusson/following{/other_user}",
"gists_url": "https://api.github.com/users/IanMagnusson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IanMagnusson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IanMagnusson/subscriptions",
"organizations_url": "https://api.github.com/users/IanMagnusson/orgs",
"repos_url": "https://api.github.com/users/IanMagnusson/repos",
"events_url": "https://api.github.com/users/IanMagnusson/events{/privacy}",
"received_events_url": "https://api.github.com/users/IanMagnusson/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"On further inspection, I believe the source of the difference is the `position_ids`. When the batched and padded `past_key_values` are used, the default `position_ids` are computed by [this code](https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/gpt2/modeling_gpt2.py#L791):\r\n\r\n``` python\r\n if past_key_values is None:\r\n past_length = 0\r\n past_key_values = tuple([None] * len(self.h))\r\n else:\r\n past_length = past_key_values[0][0].size(-2)\r\n if position_ids is None:\r\n position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)\r\n position_ids = position_ids.unsqueeze(0).view(-1, input_shape[-1])\r\n```\r\n\r\nBecause the past_length includes the padded parts of past_key_values, this will cause the `position_ids` for the new tokens to be different than if everything is computed from scratch.\r\n\r\nI tested and if you modify my minimal example in the original post with `position_ids = torch.tensor([[3],[4]],dtype=torch.int64)` and pass that to the model forward pass, both asserts now pass. So just manually specifying the `position_ids` solves this problem.",
"I won't have time to look into this I'm afraid. @ArthurZucker could you give it a try? ",
"Yep, I will have a look asap ",
"So! Sorry for the late reply. My first answer would be that the `attention_mask` and the inputs are different. \r\n- In the first case, you are feeding `[ 64, 275, 269, 50256]` and then `[288]` with the combined attention mask : `[1, 1, 1, 0, 1]`.\r\n- In the second case, you are feeding `[ 64, 275, 269, 288, 50256]` with attention mask `[1, 1, 1, 1, 0]`. \r\n\r\n\r\nI thought that using `padding_side='left'` would fix it, let me investigate!\r\n",
"Okay this fixes it for me : \r\n```python \r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\nmodel = AutoModelForCausalLM.from_pretrained('gpt2')\r\ntokenizer = AutoTokenizer.from_pretrained('gpt2', padding_side=\"left\")\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\ns = [\"a b c\", \"l m n o p q r s t u v w x y\"]\r\ninputs1 = tokenizer(s, return_tensors='pt', padding=True)\r\n# First sequence is indeed padded : [ 64, 275, 269, 50256]\r\noutputs1 = model(**inputs1)\r\n\r\ns = [\" d\", \" z\"]\r\ninputs2 = tokenizer(s, return_tensors='pt', padding=True)\r\nattention_mask = torch.cat((inputs1['attention_mask'], inputs2['attention_mask']), dim=-1)\r\n# outputs1.past_key_values[0][0].shape\r\n# torch.Size([2, 12, 4, 64])\r\noutputs2 = model(input_ids=inputs2['input_ids'], attention_mask=attention_mask, past_key_values=outputs1.past_key_values)\r\n\r\ns = [\"a b c d\", \"l m n o p q r s t u v w x y z\"]\r\ninputs_full = tokenizer(s, return_tensors='pt', padding=True)\r\noutputs_full = model(**inputs_full)\r\n\r\n\r\nassert torch.allclose(outputs2.logits[1,0],outputs_full.logits[1,-1]) # are second example last token logits the same? -> passes\r\nassert torch.allclose(outputs2.logits[0,0], outputs_full.logits[0,-1]) # are first example last token logits the same? -> fails\r\n```\r\n",
"@ArthurZucker thanks for looking into this! Yes using `padding_side=\"left\"` seems like a great solution to this issue!\r\n\r\nI'm curious what is the intended path for users to figure out this usage? I can see how the most common use case for `past_key_values` is sequential decoding, in which case batched generation will already mandate left padding. However there may be some other users like myself that are using past_key_values to compute likelihoods of a set of reference texts that all have some shared prefix that can be cached with past_key_values. In that case, the necessity of left padding wont emerge until one considers what will happen to the `position_ids` as we have here.\r\n\r\nI wonder if the [documentation](https://github.com/huggingface/transformers/blob/31d452c68b34c2567b62924ee0df40a83cbc52d5/src/transformers/models/gpt2/modeling_gpt2.py#L558) for the `past_key_values` and `attention_mask` parameters of `forward` could mention that left padding will preserve the `position_ids`. Below is a possibility with changes in bold. It's just a thought, in case it might be helpful. Thank you for your consideration!\r\n\r\n> past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. **However, the attention_mask of given past input_ids does need to be provided (see attention_mask).**\r\n> \r\n> attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:\r\n> 1 for tokens that are not masked,\r\n> 0 for tokens that are masked.\r\n> If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids). **For batching with past_key_values, left padding is required to make uninterrupted attention_masks that preserve position_ids.**",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ArthurZucker what if the suffixes (`[\" d\", \" z\"]` in the example) have a different number of tokens? I changed the suffixes to `[\" d e\", \" z\"]` and don't get the expected result\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained('gpt2')\r\ntokenizer = AutoTokenizer.from_pretrained('gpt2', padding_side=\"left\")\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\ns = [\"a b c\", \"l m n o p q r s t u v w x y\"]\r\ninputs1 = tokenizer(s, return_tensors='pt', padding=True)\r\noutputs1 = model(**inputs1)\r\n\r\ns = [\" d e\", # add e\r\n \" z\"]\r\ninputs2 = tokenizer(s, return_tensors='pt', padding=True)\r\nattention_mask = torch.cat((inputs1['attention_mask'],\r\n inputs2['attention_mask']),\r\n dim=-1)\r\noutputs2 = model(input_ids=inputs2['input_ids'],\r\n attention_mask=attention_mask,\r\n past_key_values=outputs1.past_key_values)\r\n\r\ns = [\"a b c d e\", # add e\r\n \"l m n o p q r s t u v w x y z\"]\r\ninputs_full = tokenizer(s, return_tensors='pt', padding=True)\r\noutputs_full = model(**inputs_full)\r\n\r\nassert torch.allclose(outputs2.logits[0,-1], outputs_full.logits[0,-1])\r\n# are first example last token logits the same? -> fails\r\n\r\nassert torch.allclose(outputs2.logits[1,-1], outputs_full.logits[1,-1])\r\n# are second example last token logits the same? -> fails\r\n```\r\n\r\nEdit: I think I have a general solution. Will add another comment",
"A general solution (general meaning: prefixes can have a different number of tokens, and suffixes can have a different number of tokens) is to create and supply `position_ids` as @IanMagnusson found [above](https://github.com/huggingface/transformers/issues/18104#issuecomment-1182489082). I also think right-padding is the more correct solution b/c prefix position ids are the same as they were if there was no padding.\r\n\r\nDemo\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained('gpt2')\r\ntokenizer = AutoTokenizer.from_pretrained('gpt2')\r\ntokenizer.pad_token = tokenizer.eos_token # allow batching\r\nif not tokenizer.padding_side == 'right':\r\n raise ValueError('Gotta use right padding to ensure position IDs are '\r\n 'correct.')\r\n\r\n\r\nprefixes = ['a b c',\r\n 'l m n o p q r s t u v w x y']\r\n# Make sure to start each suffix w/ a whitespace\r\nsuffixes = [' d e',\r\n ' z']\r\n\r\n\r\n# Batch inference prefixes\r\nprefixes_encoding = tokenizer(prefixes, return_tensors='pt', padding=True)\r\nwith torch.no_grad():\r\n prefixes_out = model(**prefixes_encoding)\r\n# Need offsets so that position_ids for future tokens are set correctly\r\noffsets = prefixes_encoding.attention_mask.sum(dim=1)\r\n\r\n\r\n# Batch inference suffixes\r\nsuffixes_encoding = tokenizer(suffixes, return_tensors='pt',\r\n padding=True)\r\nnum_completion_tokens = suffixes_encoding.input_ids.shape[1]\r\n\r\n# Set position_ids to what they were had we fed each prefix + suffix\r\n# together w/ right-padding (right-padding b/c GPT-2 uses absolute position ids)\r\nsuffixes_position_ids = (torch.arange(0, num_completion_tokens) +\r\n offsets[:, None]) # broadcast\r\n\r\n# Need attention_mask to include the prefixes since it could have padding\r\nattention_mask = torch.cat((prefixes_encoding.attention_mask,\r\n suffixes_encoding.attention_mask),\r\n dim=1)\r\n\r\n\r\n# Everything should now be aligned 🤞 🙏\r\nwith torch.no_grad():\r\n suffixes_out = model(input_ids=suffixes_encoding.input_ids,\r\n attention_mask=attention_mask,\r\n past_key_values=prefixes_out.past_key_values,\r\n position_ids=suffixes_position_ids)\r\n```\r\n\r\nTests\r\n\r\n```python\r\n\r\n# Expected output\r\nfull = [prefix + suffix for prefix, suffix in zip(prefixes, suffixes)]\r\nfull_encoding = tokenizer(full, return_tensors='pt', padding=True)\r\nwith torch.no_grad():\r\n full_out = model(**full_encoding)\r\n\r\n\r\n# Test shape\r\nassert suffixes_out.logits.shape[0] == full_out.logits.shape[0]\r\nassert suffixes_out.logits.shape[-1] == full_out.logits.shape[-1]\r\n\r\n\r\n# Test that every non-pad token's logits are close.\r\n# (in the comments, the token in parentheses is the one whose logits we're\r\n# acessing)\r\nassert torch.allclose(suffixes_out.logits[0, 0], # (d), e\r\n full_out.logits[0, 3]) # a, b, c, (d), e, rest are <PAD>\r\n\r\nassert torch.allclose(suffixes_out.logits[0, 1], # d, (e)\r\n full_out.logits[0, 4]) # a, b, c, d, (e), rest are <PAD>\r\n\r\nassert torch.allclose(suffixes_out.logits[1, 0], # (z), <PAD>\r\n full_out.logits[1, -1]) # l m n o p q r s t u v w x y (z)\r\n```",
"Hey! Yes as mentioned before, the positional IDS in GPT2 are not created on the fly contrary to other of our models. A fix is in the makinf, see #21853, which should prevent you from having to pass the positional ids. "
] | 1,657
| 1,678
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-89-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj @patrickvonplaten @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Below is a minimal example that reproduces this unexpected behavior I encountered while tinkering with past_key_values. Essentially when I cache keys and values from a padded batch and then use past_key_values to run forward on an additional token for each example in the batch, I get somewhat different results than if I just compute the whole inputs from scratch and look at the last tokens.
It seems that something is going wrong when past_key_values involves some padding, however I believe I am using attention_mask correctly by including the masking strategy that was used for past_key_values as specified in the docs.
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained('gpt2')
tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token
s = ["a b c", "l m n o"]
inputs1 = tokenizer(s, return_tensors='pt', padding=True)
outputs1 = model(**inputs1)
s = [" d", " p"]
inputs2 = tokenizer(s, return_tensors='pt', padding=True)
attention_mask = torch.cat((inputs1['attention_mask'], inputs2['attention_mask']), dim=1)
outputs2 = model(input_ids=inputs2['input_ids'], attention_mask=attention_mask, past_key_values=outputs1.past_key_values)
s = ["a b c d", "l m n o p"]
inputs_full = tokenizer(s, return_tensors='pt', padding=True)
outputs_full = model(**inputs_full)
assert torch.allclose(outputs2.logits[1,0],outputs_full.logits[1,-1]) # are second example last token logits the same? -> passes
assert torch.allclose(outputs2.logits[0,0], outputs_full.logits[0,-2]) # are first example last token logits the same? -> fails
```
### Expected behavior
The expected behavior would be for the logits of given tokens to be the same regardless of whether past_key_values is used for preceding tokens or if the full inputs are computed from scratch.
Thanks so much for all your hard work on this great library!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18104/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18104/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18103
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18103/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18103/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18103/events
|
https://github.com/huggingface/transformers/pull/18103
| 1,300,966,618
|
PR_kwDOCUB6oc47NQVQ
| 18,103
|
Make Nezha tests slow
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18103). All of your documentation changes will be reflected on that endpoint.",
"Hi @sgugger. I am not sure why the Nezha tests are slow. The tests that you listed here are not the integration test with the full model, but the model tests with test config (which is be pretty small and the same size as the regular bert test). I can try to decrease the test config size to see if it helps.",
"Found the issue and here is the fix #18106"
] | 1,657
| 1,657
| 1,657
|
COLLABORATOR
| null |
# What does this PR do?
The Nezha model tests are fairly slow (see below) so this PR marks them as such. @sijunhe if you have an idea on how to make them faster, it's more than welcome!
Current times on main:
```
46.62s call tests/models/longt5/test_modeling_longt5.py::LongT5TGlobalModelTest::test_export_to_onnx
38.86s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_attention_outputs
35.41s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_fast_init_to_base
35.37s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load_fast_init_from_base
28.36s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_integration
26.86s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_hidden_states_output
26.62s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_feed_forward_chunking
26.54s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_save_load_from_pretrained
26.08s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_save_load
25.02s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_correct_missing_keys
22.04s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_attention_outputs
20.56s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_save_load_fast_init_from_base
19.49s call tests/models/longt5/test_modeling_longt5.py::LongT5ModelTest::test_export_to_onnx
18.63s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_save_load_fast_init_to_base
18.27s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_torch_fx_output_loss
18.20s call tests/models/flava/test_modeling_flava.py::FlavaImageCodebookTest::test_save_load
17.45s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_torch_fx
17.15s call tests/models/flava/test_modeling_flava.py::FlavaImageCodebookTest::test_feed_forward_chunking
16.16s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_tie_model_weights
16.05s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_save_load
15.34s call tests/models/flava/test_modeling_flava.py::FlavaImageCodebookTest::test_determinism
14.88s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_feed_forward_chunking
14.87s call tests/models/mobilevit/test_modeling_mobilevit.py::MobileViTModelTest::test_save_load_fast_init_from_base
14.62s call tests/models/detr/test_modeling_detr.py::DetrModelTest::test_hidden_states_output
14.45s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_headmasking
14.20s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_model_outputs_equivalence
14.03s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_initialization
13.81s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_inputs_embeds
13.80s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning
13.58s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_determinism
13.36s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_head_pruning_save_load_from_config_init
13.16s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_forward_signature
13.05s call tests/models/data2vec/test_modeling_data2vec_audio.py::Data2VecAudioModelTest::test_mask_time_prob_ctc
13.00s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_gradient_checkpointing_backward_compatibility
12.99s call tests/models/nezha/test_modeling_nezha.py::NezhaModelTest::test_gradient_checkpointing_enable_disable
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18103/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18103",
"html_url": "https://github.com/huggingface/transformers/pull/18103",
"diff_url": "https://github.com/huggingface/transformers/pull/18103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18103.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18102
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18102/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18102/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18102/events
|
https://github.com/huggingface/transformers/pull/18102
| 1,300,959,850
|
PR_kwDOCUB6oc47NO60
| 18,102
|
TF: remove graph mode distinction when processing boolean options
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"LGTM! All code of this type should be removed imo - it's an artifact of TF 1.x. Modern TF code is compiled by tracing and recompiled if Python flags are changed and can handle all kinds of weird Python flow control as a result."
] | 1,657
| 1,657
| 1,657
|
MEMBER
| null |
# What does this PR do?
Removes a very old `if` branch related to boolean options, as TF graph mode can handle both branches with no issues -- it passes core tests for the models I tried.
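For illustration, here is a minimal sketch (not the library code) of why this is safe: `tf.function` treats Python booleans as trace-time constants and compiles a separate concrete function per value.
```python
import tensorflow as tf

@tf.function
def layer(x, output_hidden_states=False):
    # The Python boolean is a trace-time constant: tf.function compiles one
    # concrete function per value, so a plain `if` works in graph mode.
    if output_hidden_states:
        return x, tf.square(x)
    return x

x = tf.constant([1.0, 2.0])
print(layer(x))                             # traced with the flag off
print(layer(x, output_hidden_states=True))  # retraced with the flag on
```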
It also unblocks @sayakpaul in the demo he's building for the TF SegFormer (#17910), which requires setting `output_hidden_states` in graph mode.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18102/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18102/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18102",
"html_url": "https://github.com/huggingface/transformers/pull/18102",
"diff_url": "https://github.com/huggingface/transformers/pull/18102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18102.patch",
"merged_at": 1657649131000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18101
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18101/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18101/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18101/events
|
https://github.com/huggingface/transformers/issues/18101
| 1,300,958,183
|
I_kwDOCUB6oc5Niwvn
| 18,101
|
OOM error when training with trainer
|
{
"login": "zunlongzhou",
"id": 42513377,
"node_id": "MDQ6VXNlcjQyNTEzMzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/42513377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zunlongzhou",
"html_url": "https://github.com/zunlongzhou",
"followers_url": "https://api.github.com/users/zunlongzhou/followers",
"following_url": "https://api.github.com/users/zunlongzhou/following{/other_user}",
"gists_url": "https://api.github.com/users/zunlongzhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zunlongzhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zunlongzhou/subscriptions",
"organizations_url": "https://api.github.com/users/zunlongzhou/orgs",
"repos_url": "https://api.github.com/users/zunlongzhou/repos",
"events_url": "https://api.github.com/users/zunlongzhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/zunlongzhou/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,660
| 1,660
|
NONE
| null |
### System Info
transformers: 4.19.2
When I use the Trainer to do MLM training on deberta-v3-large, there is an out-of-memory problem.
The GPU memory usage continues to grow over time until it eventually runs out of memory, and the OOM error is thrown at the same position every time.
After searching forums and issues, I tried modifying the Trainer source code, including:
1. In the Hugging Face Transformers Trainer code (the `create_optimizer` function), adding `force_broadcast_object=True`
2. Rewriting the Trainer saving function to skip saving the optimizer weights
3. Disabling the optimizer saving by commenting out `consolidate_state_dict` as well as the optimizer saving part
4. Removing useless intermediate variables and calling `empty_cache` (see the sketch below)
but none of these worked.
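For reference, a minimal sketch of the cache-clearing idea from item 4 as a `TrainerCallback` (an illustration, not the code from this report):
```python
import gc

import torch
from transformers import TrainerCallback

class EmptyCacheCallback(TrainerCallback):
    """Drop unreferenced tensors and release cached CUDA blocks after each step."""

    def on_step_end(self, args, state, control, **kwargs):
        gc.collect()
        torch.cuda.empty_cache()

# Usage: Trainer(..., callbacks=[EmptyCacheCallback()])
```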
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
training_args = TrainingArguments(
    output_dir=args.output_dir,
    evaluation_strategy="no",
    learning_rate=args.lr,
    weight_decay=0.01,
    save_strategy='steps',
    per_device_train_batch_size=args.batch_size,
    num_train_epochs=args.num_train_epochs,
    # report_to="wandb",
    run_name=f'output-mlm-{args.exp_num}',
    # logging_dir='./logs',
    lr_scheduler_type='cosine',
    warmup_ratio=0.2,
    fp16=True,
    logging_steps=500,
    gradient_accumulation_steps=args.gradient_accumulation_steps,
    save_steps=5000,
    prediction_loss_only=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    # eval_dataset=tokenized_datasets['valid'],
    data_collator=data_collator,
    # optimizers=(optimizer, scheduler)
)
```
### Expected behavior
Hope it works without changing batch_size
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18101/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18100
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18100/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18100/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18100/events
|
https://github.com/huggingface/transformers/pull/18100
| 1,300,943,663
|
PR_kwDOCUB6oc47NLcR
| 18,100
|
Fix image segmentation and object detection pipeline tests
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
COLLABORATOR
| null |
# What does this PR do?
The recent release of timm has broken two pipeline tests, this PR fixes them.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18100/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18100",
"html_url": "https://github.com/huggingface/transformers/pull/18100",
"diff_url": "https://github.com/huggingface/transformers/pull/18100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18100.patch",
"merged_at": 1657557717000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18099
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18099/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18099/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18099/events
|
https://github.com/huggingface/transformers/pull/18099
| 1,300,928,479
|
PR_kwDOCUB6oc47NIKF
| 18,099
|
Add filename to info displayed when downloading things in from_pretrained
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you so much, Sylvain!"
] | 1,657
| 1,657
| 1,657
|
COLLABORATOR
| null |
# What does this PR do?
The progress bar used in `http_get` has a description saying "Downloading". When we download multiple files (for instance for a sharded checkpoint) that's not necessarily super informative, so this PR adds the name of the file being downloaded to the description.
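For illustration, a self-contained sketch of the idea (hypothetical names, not the actual `http_get` code):
```python
from tqdm import tqdm

file_name = "pytorch_model-00001-of-00002.bin"  # e.g. one shard of a checkpoint
total_bytes = 1024 * 1024

# Putting the file name in the description disambiguates multi-file downloads.
with tqdm(total=total_bytes, unit="B", unit_scale=True,
          desc=f"Downloading {file_name}") as bar:
    for _ in range(1024):
        bar.update(1024)
```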
cc @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18099/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18099/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18099",
"html_url": "https://github.com/huggingface/transformers/pull/18099",
"diff_url": "https://github.com/huggingface/transformers/pull/18099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18099.patch",
"merged_at": 1657557906000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18098
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18098/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18098/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18098/events
|
https://github.com/huggingface/transformers/pull/18098
| 1,300,912,117
|
PR_kwDOCUB6oc47NEoW
| 18,098
|
[ create_a_model.mdx ] translate to pt
|
{
"login": "Fellip15",
"id": 81062614,
"node_id": "MDQ6VXNlcjgxMDYyNjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/81062614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Fellip15",
"html_url": "https://github.com/Fellip15",
"followers_url": "https://api.github.com/users/Fellip15/followers",
"following_url": "https://api.github.com/users/Fellip15/following{/other_user}",
"gists_url": "https://api.github.com/users/Fellip15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Fellip15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fellip15/subscriptions",
"organizations_url": "https://api.github.com/users/Fellip15/orgs",
"repos_url": "https://api.github.com/users/Fellip15/repos",
"events_url": "https://api.github.com/users/Fellip15/events{/privacy}",
"received_events_url": "https://api.github.com/users/Fellip15/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Obrigado @Fellip15! Thank you for your translation, and sorry for my late review. \r\n\r\nThe text looks good to me, but the tests seem to have a weird loop. I can not find the reason, WDYT @sgugger?"
] | 1,657
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
Creates a new file called create_a_model.mdx in docs/source/pt
Translates all the content of the base create_a_model to pt-br
Fixes issue #16824
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18098/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18098",
"html_url": "https://github.com/huggingface/transformers/pull/18098",
"diff_url": "https://github.com/huggingface/transformers/pull/18098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18098.patch",
"merged_at": 1658836868000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18097
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18097/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18097/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18097/events
|
https://github.com/huggingface/transformers/pull/18097
| 1,300,876,365
|
PR_kwDOCUB6oc47M88P
| 18,097
|
TF: use the correct config with `(...)EncoderDecoder` models
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I believe they are -- going to give it a go afterwards if @ydshieh also agrees :)",
"I have limited connection at this moment in the mountain, so feel free to merge if you prefer. Regarding the common mixin, good for me. I see there are a few little things to address, like the input names (input_ids, pixel values etc).\r\nWould be nice if you do this refactorization after merging my PR about the PT/TF equivalence tests, or incoperate the change in it 🙏 \r\n\r\nThank you for the fix, @gante",
"@ydshieh can I have a review plz 🙏 ",
"@ydshieh rebased with main and reran tests -- all working 👍 "
] | 1,657
| 1,658
| 1,658
|
MEMBER
| null |
# What does this PR do?
Fixes #18071
Modifies `unpack_inputs` to ignore the config file for `(...)EncoderDecoder` models, mimicking the behavior in PT. If we don't ignore it, then unset options will get set with the config's default (`False` for most of them), causing the inner models to ignore their own config files.
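Schematically, the boolean-option resolution looks like the sketch below (an illustration only, not the actual `unpack_inputs` code; `is_encoder_decoder_wrapper` is a hypothetical flag):
```python
def resolve_option(value, config_default, is_encoder_decoder_wrapper):
    # For (...)EncoderDecoder wrappers, leave unset options as None so the
    # inner encoder/decoder models fall back to their own configs, as in PT.
    if is_encoder_decoder_wrapper:
        return value
    # For regular models, fill unset options from the model's config.
    return value if value is not None else config_default
```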
⚠️ I've added a corresponding test for the `EncoderDecoder` models. I then noticed that other `(...)EncoderDecoder` tests have copy/pasted their own `EncoderDecoderMixin`, so I've left the other classes for a follow-up PR with the following question: should a common `EncoderDecoderMixin` be defined and shared across `(...)EncoderDecoder` tests, or should I add a similar test to all other classes individually?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18097/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18097",
"html_url": "https://github.com/huggingface/transformers/pull/18097",
"diff_url": "https://github.com/huggingface/transformers/pull/18097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18097.patch",
"merged_at": 1658493106000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18096
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18096/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18096/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18096/events
|
https://github.com/huggingface/transformers/issues/18096
| 1,300,860,058
|
I_kwDOCUB6oc5NiYya
| 18,096
|
TFWav2Vec2ForCTC breaks when not run eagerly
|
{
"login": "Sreyan88",
"id": 36225987,
"node_id": "MDQ6VXNlcjM2MjI1OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/36225987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sreyan88",
"html_url": "https://github.com/Sreyan88",
"followers_url": "https://api.github.com/users/Sreyan88/followers",
"following_url": "https://api.github.com/users/Sreyan88/following{/other_user}",
"gists_url": "https://api.github.com/users/Sreyan88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sreyan88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sreyan88/subscriptions",
"organizations_url": "https://api.github.com/users/Sreyan88/orgs",
"repos_url": "https://api.github.com/users/Sreyan88/repos",
"events_url": "https://api.github.com/users/Sreyan88/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sreyan88/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @Sreyan88 👋 \r\n\r\nThat is a big reproduction script 😅 To ensure we can provide quality support, a short reproduction script goes a long way.\r\n\r\nI haven't run the code, but my suspicion goes to the line where the model is defined (`model = TFWav2Vec2ForCTC.from_pretrained(MODEL_CHECKPOINT,vocab_size=len(processor.tokenizer), pad_token_id=processor.tokenizer.pad_token_id,apply_spec_augment=False, from_pt = True)`). Try removing the `pad_token_id` keyword argument here -- the data pipeline knows which token is the padding token from the tokenizer. It might be creating a new token with the argument, which causes the `vocab_size` error.\r\n\r\nLet me know if it works! If it does, it probably means that our `pad_token_id` is not working properly -- I've been seeing similar errors lately.",
"Hi @gante ,\r\n\r\nThe error persists even after removing it!\r\n\r\nI would please request you to run the script twice by toggling run_eagerly boolean in this line:\r\n\r\nmodel.compile(optimizer=optimizer ,run_eagerly = False, metrics =[compute_wer])\r\n\r\nwhich is in the Building and Compiling the Model section. When `run_eagerly = True`, the training does not throw any error!\r\n\r\nApologies for the script but I am about to push it to Keras examples soon so it's indeed a detailed one and I thought of explaining every step because I was not sure about the error. The script takes about 2 mins to run on colab!",
"Hi @Sreyan88 it is not about the run time, but about being able to pin the issue. We don't have the bandwidth to help the community with all requests and bugs, so we request some help from the community to create short scripts for bug reproducibility.\r\n\r\nWithout it, I'm afraid this issue will not jump high on my priority list :)",
"No problem! I have modified the script and deleted comments/explanations. Currently, all code blocks are just the ones absolutely necessary. Hope you find some time to look at it! and please also suggest if I should clean more!\r\n\r\nThank You!",
"Hi @gante ,\r\n\r\nGood day! Any leads to this? I tried but couldn't figure out the exact issue. :( ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @gante ,\r\n\r\nGood day! Could you please re-open this as it wasn't solved? Thank You!"
] | 1,657
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
### System Info
`transformers` version: 4.21.0.dev0
- Platform: Linux-5.13.0-48-generic-x86_64-with-glibc2.31
- Python version: 3.7.13
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Rocketknight1 @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Colab link to reproduce: https://colab.research.google.com/drive/1GxQtnDLaDFooG8t2uAIR-2GnS2RFod_j?usp=sharing
When the model is compiled with:
```python
model.compile(optimizer=optimizer, run_eagerly=False, metrics=[compute_wer])
```
the code errors out with:
```
ValueError: Label values must be <= vocab_size: 30
```
However, it works perfectly when `run_eagerly = True`.
### Expected behavior
Training should happen fine even when not run eagerly.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18096/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18095
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18095/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18095/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18095/events
|
https://github.com/huggingface/transformers/pull/18095
| 1,300,621,996
|
PR_kwDOCUB6oc47MGC3
| 18,095
|
Report value for a step instead of epoch.
|
{
"login": "zhawe01",
"id": 15643982,
"node_id": "MDQ6VXNlcjE1NjQzOTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/15643982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhawe01",
"html_url": "https://github.com/zhawe01",
"followers_url": "https://api.github.com/users/zhawe01/followers",
"following_url": "https://api.github.com/users/zhawe01/following{/other_user}",
"gists_url": "https://api.github.com/users/zhawe01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhawe01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhawe01/subscriptions",
"organizations_url": "https://api.github.com/users/zhawe01/orgs",
"repos_url": "https://api.github.com/users/zhawe01/repos",
"events_url": "https://api.github.com/users/zhawe01/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhawe01/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Looks good, thanks for fixing!\r\n> \r\n> Can you just run `make style` on your branch to make sure the formatting check passes?\r\n\r\nDone. @sgugger ",
"Thanks a lot! (test failure is already fixed on main, so merging)"
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
# What does this PR do?
Report the objective function value to Optuna per step instead of per epoch.
## I made this modification for the following reason:
If "eval_steps" is less than steps per epoch, there maybe warnings: `optuna/trial/_trial.py:592: UserWarning: The reported value is ignored because this ‘step’ 0 is already reported.`. This is because the epoch granularity is too coarse. So "step" are more appropriate than "epoch" here.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@sgugger @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18095/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18095",
"html_url": "https://github.com/huggingface/transformers/pull/18095",
"diff_url": "https://github.com/huggingface/transformers/pull/18095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18095.patch",
"merged_at": 1657628315000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18094
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18094/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18094/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18094/events
|
https://github.com/huggingface/transformers/pull/18094
| 1,300,481,439
|
PR_kwDOCUB6oc47Lnfv
| 18,094
|
Good difficult issue override for the stalebot
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,658
| 1,658
|
MEMBER
| null |
Ignores issues with the `Good difficult issue` label in the stalebot, as otherwise these get closed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18094/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18094",
"html_url": "https://github.com/huggingface/transformers/pull/18094",
"diff_url": "https://github.com/huggingface/transformers/pull/18094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18094.patch",
"merged_at": 1658821154000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18093
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18093/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18093/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18093/events
|
https://github.com/huggingface/transformers/issues/18093
| 1,300,480,859
|
I_kwDOCUB6oc5Ng8Nb
| 18,093
|
[logging] Turn off loss logging, while keeping progress bar and logging to third-party application
|
{
"login": "antonioloison",
"id": 48316195,
"node_id": "MDQ6VXNlcjQ4MzE2MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/48316195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antonioloison",
"html_url": "https://github.com/antonioloison",
"followers_url": "https://api.github.com/users/antonioloison/followers",
"following_url": "https://api.github.com/users/antonioloison/following{/other_user}",
"gists_url": "https://api.github.com/users/antonioloison/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antonioloison/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antonioloison/subscriptions",
"organizations_url": "https://api.github.com/users/antonioloison/orgs",
"repos_url": "https://api.github.com/users/antonioloison/repos",
"events_url": "https://api.github.com/users/antonioloison/events{/privacy}",
"received_events_url": "https://api.github.com/users/antonioloison/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should be able to disable logging while keeping progress bars active. Have you tried setting the log level manually?\r\n\r\n```python\r\nfrom transformers.utils.logging import set_verbosity_error\r\n\r\nset_verbosity_error()\r\n```",
"Yes, I've tried that, but this makes my progress bar disappear and keeps the loss logging. I want to do the opposite, keep my progress bar and remove the loss logging.",
"Ah, understood. Then in that case providing the training argument `logging_strategy` should do what you want: `logging_strategy=\"no\"` will not output these logs. Did that solve your problem?\r\n",
"Thank you for your answer @LysandreJik. I also tried that, but I want to keep the logs in my third-party application (Weights and Biases here). If I use `logging_strategy=\"no\"`, no more logs are reported to Weights and Biases.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Not sure if it ever solved in the recent releases but for people who stuck in the older versions. As it is [this](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_callback.py#L516C20-L516C20) line that prints the dict log. While progress bar is handled somewhere else. This is an ugly way to stop `ProgressCallback` from printing the dict without modifying the source (at least in older version).\r\n\r\n```\r\nfrom transformers.trainer_callback import ProgressCallback\r\ndef on_log(self, args, state, control, logs=None, **kwargs):\r\n if state.is_local_process_zero and self.training_bar is not None:\r\n _ = logs.pop(\"total_flos\", None)\r\nProgressCallback.on_log = on_log\r\n```",
"I know this issue is old, but as far as I know, there's no way to achieve what OP intends unless he uses the solution posted above by @DableUTeeF (worked for me!). If there is, I didn't find it in the documentation.\r\n\r\nThis is very important for people that have their own logging solution. It would be cool if there were a clean way to keep logging the loss every `logging_steps`, since this should still work if the user has Neptune/wandb, but disable printing to the terminal."
] | 1,657
| 1,703
| 1,660
|
NONE
| null |
### Feature request
I would like to add a training argument to the `TrainingArguments` class to turn off the loss logging to stdout while keeping the progress bar and logging to a third-party application like Weights and Biases.
### Motivation
I am working on a project that trains a model with the Trainer class. I need to log the losses at every epoch to Weights and Biases. Here is my code:
```python
training_arguments = TrainingArguments(
    output_dir="./logging_dir",
    num_train_epochs=epochs,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    report_to="wandb",
    logging_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_arguments,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
```
This code is contained in a python file `train.py` and is launched in the terminal via the command `python train.py`.
However, I don’t want to have the loss printed because it breaks my progress bar, as you can see in the image below, and I need the progress bar to give some feedback on the training time.
<img width="1155" alt="cut_progress_bar" src="https://user-images.githubusercontent.com/48316195/178239258-ad61d97f-d851-4c31-a4da-c653aa17db3b.png">
### Your contribution
I think that I could add a `disable_on_log` argument to `TrainingArguments`. Then, in the `on_log` method of the `ProgressCallback`, a condition should be added like this:
```python
def on_log(self, args, state, control, logs=None, **kwargs):
    # Only write the logs to stdout when the proposed flag has not disabled it
    if state.is_local_process_zero and self.training_bar is not None and not args.disable_on_log:
        _ = logs.pop("total_flos", None)
        self.training_bar.write(str(logs))
```
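With such a flag, usage would look like this (the `disable_on_log` argument is hypothetical, part of this proposal):
```python
training_arguments = TrainingArguments(
    output_dir="./logging_dir",
    report_to="wandb",        # losses still reach Weights and Biases
    logging_strategy="epoch",
    disable_on_log=True,      # hypothetical flag: skip printing to stdout
)
```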
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18093/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18093/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18092
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18092/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18092/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18092/events
|
https://github.com/huggingface/transformers/issues/18092
| 1,300,456,571
|
I_kwDOCUB6oc5Ng2R7
| 18,092
|
Can't convert Flax T5 model to PyTorch
|
{
"login": "Beau-xu",
"id": 78057213,
"node_id": "MDQ6VXNlcjc4MDU3MjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/78057213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Beau-xu",
"html_url": "https://github.com/Beau-xu",
"followers_url": "https://api.github.com/users/Beau-xu/followers",
"following_url": "https://api.github.com/users/Beau-xu/following{/other_user}",
"gists_url": "https://api.github.com/users/Beau-xu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Beau-xu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Beau-xu/subscriptions",
"organizations_url": "https://api.github.com/users/Beau-xu/orgs",
"repos_url": "https://api.github.com/users/Beau-xu/repos",
"events_url": "https://api.github.com/users/Beau-xu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Beau-xu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Sorry, I made a mistake. \r\nActually the missing weights of above the above PyTorch model converted from Flax can be correctly initialized by accepting the Flax model's weight, `fx_model.params['shared']['embedding']`."
] | 1,657
| 1,657
| 1,657
|
NONE
| null |
### System Info
- `transformers` version: 4.18.0
- Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.6.13
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.2 (False)
- Tensorflow version (GPU?): 2.6.2 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.5 (cpu)
- Jax version: 0.2.17
- JaxLib version: 0.1.69
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help?
@patrickvonplaten @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Failed to convert a T5 model from Flax to PyTorch
```python
import tempfile
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration, T5ForConditionalGeneration
tmp = tempfile.mkdtemp()
flax_model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
flax_model.save_pretrained(tmp)
pt_model = T5ForConditionalGeneration.from_pretrained(tmp, from_flax=True)
```
### Expected behavior
Some weights of T5ForConditionalGeneration were not initialized from the Flax model: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']
```
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
/data/home/db72687/miniconda3/envs/decipher/lib/python3.6/site-packages/transformers/modeling_flax_pytorch_utils.py:240: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /opt/conda/conda-bld/pytorch_1640811805959/work/torch/csrc/utils/tensor_numpy.cpp:189.)
pt_model_dict[flax_key] = torch.from_numpy(flax_tensor)
All Flax model weights were used when initializing T5ForConditionalGeneration.
Some weights of T5ForConditionalGeneration were not initialized from the Flax model and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
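Per the follow-up comment above, the missing weights can be restored from the Flax parameters. A hedged sketch continuing the reproduction snippet (not verified here; it relies on T5's tied embeddings):
```python
import numpy as np
import torch

# Copy the shared embedding from the Flax params into the PyTorch model.
# encoder.embed_tokens, decoder.embed_tokens and lm_head are tied to
# `shared` in T5, so updating it in place covers all reported weights.
embedding = torch.from_numpy(np.asarray(flax_model.params["shared"]["embedding"]))
pt_model.shared.weight.data.copy_(embedding)
```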
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18092/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18091/events
|
https://github.com/huggingface/transformers/issues/18091
| 1,300,417,665
|
I_kwDOCUB6oc5NgsyB
| 18,091
|
LayoutLMv2ForRelationExtraction is missing in transformers
|
{
"login": "binkjakub",
"id": 24194342,
"node_id": "MDQ6VXNlcjI0MTk0MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24194342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/binkjakub",
"html_url": "https://github.com/binkjakub",
"followers_url": "https://api.github.com/users/binkjakub/followers",
"following_url": "https://api.github.com/users/binkjakub/following{/other_user}",
"gists_url": "https://api.github.com/users/binkjakub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/binkjakub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/binkjakub/subscriptions",
"organizations_url": "https://api.github.com/users/binkjakub/orgs",
"repos_url": "https://api.github.com/users/binkjakub/repos",
"events_url": "https://api.github.com/users/binkjakub/events{/privacy}",
"received_events_url": "https://api.github.com/users/binkjakub/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @NielsRogge ",
"+1 to this",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,660
| 1,660
|
NONE
| null |
### Feature request
Microsoft's [unilm repository](https://github.com/microsoft/unilm/tree/db1095a693aa0d6d15bb9312cccb7f8af42b0aeb/layoutlmft/layoutlmft), which originally implements all `LayoutLM` models, contains an implementation of the model for relation extraction with a biaffine attention classifier, namely [`LayoutLMv2ForRelationExtraction`](https://github.com/microsoft/unilm/blob/db1095a693aa0d6d15bb9312cccb7f8af42b0aeb/layoutlmft/layoutlmft/models/layoutlmv2/modeling_layoutlmv2.py#L895). However, this class wasn't included in `transformers`, for unknown reasons. Therefore, the implementation for relation extraction, `LayoutLMv2ForRelationExtraction`, should be included to extend the current LayoutLMv2 and LayoutXLM.
### Motivation
This repository should implement all tasks included in the papers ([LayoutLMv2](https://arxiv.org/pdf/2012.14740.pdf), [LayoutXLM](https://arxiv.org/pdf/2104.08836.pdf)) and the [unilm repository](https://github.com/microsoft/unilm/tree/db1095a693aa0d6d15bb9312cccb7f8af42b0aeb/layoutlmft/layoutlmft), so this missing part should be added to `transformers`. It would enable users to fully reproduce the papers as well as conveniently use relation extraction in their downstream applications.
### Your contribution
If there are no obstacles unknown to me, I could try to port the implementation from unilm to transformers.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18091/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18091/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18090/events
|
https://github.com/huggingface/transformers/issues/18090
| 1,300,415,357
|
I_kwDOCUB6oc5NgsN9
| 18,090
|
TypeError: to_json_file() got an unexpected keyword argument 'use_diff'
|
{
"login": "ADaBenxiong",
"id": 37175235,
"node_id": "MDQ6VXNlcjM3MTc1MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/37175235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ADaBenxiong",
"html_url": "https://github.com/ADaBenxiong",
"followers_url": "https://api.github.com/users/ADaBenxiong/followers",
"following_url": "https://api.github.com/users/ADaBenxiong/following{/other_user}",
"gists_url": "https://api.github.com/users/ADaBenxiong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ADaBenxiong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ADaBenxiong/subscriptions",
"organizations_url": "https://api.github.com/users/ADaBenxiong/orgs",
"repos_url": "https://api.github.com/users/ADaBenxiong/repos",
"events_url": "https://api.github.com/users/ADaBenxiong/events{/privacy}",
"received_events_url": "https://api.github.com/users/ADaBenxiong/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Maybe @muellerzr or @sgugger have an idea?",
"Very strange. Can you tell us a bit more about your system? Specifically the python version you are using and the accelerate version? (Might not be relevant, but so we can know everything)",
"This is not linked to Accelerate at all, just the internals of `save_pretrained`. Without the whole traceback and the code executed, we can't really though.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I get the same error when I run `run_summarization_no_trainer.py` script from [DQ-Bart](https://github.com/amazon-science/dq-bart) repo:\r\n\r\n```\r\npython3 run_summarization_no_trainer.py \\\r\n --model_name_or_path google/flan-t5-base \\\r\n --dataset_name samsum \\\r\n --pred_distill \\\r\n --num_train_epochs 1 \\\r\n --weight_bits 8 \\\r\n --do_train \\\r\n --do_test \\\r\n --distill_encoder 6 \\\r\n --distill_decoder 6 \\\r\n --learning_rate 5e-5 \\\r\n --source_prefix summarize: \\\r\n --seed 7 \\\r\n```\r\n\r\nThe error happens in [line 822](https://github.com/amazon-science/dq-bart/blob/main/run_summarization_no_trainer.py#L822) when trying to save the trained student model. \r\n\r\nThis is the whole traceback:\r\n\r\n```\r\nFile \"run_summarization_no_trainer.py\", line 896, in <module>\r\n main()\r\nFile \"run_summarization_no_trainer.py\", line 822, in main\r\n unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save)\r\nFile \"/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 1068, in save_pretrained\r\n model_to_save.config.save_pretrained(save_directory)\r\nFile \"/opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py\", line 438, in save_pretrained\r\n self.to_json_file(output_config_file, use_diff=True)\r\nTypeError: to_json_file() got an unexpected keyword argument 'use_diff'\r\n```\r\n\r\nThese are the dependencies:\r\n\r\n```\r\ntransformers==4.17.0\r\ndatasets==1.18.4\r\nsacrebleu==2.0\r\nwandb\r\nnltk\r\naccelerate==0.5.1\r\ntensorboard\r\nsetuptools<50\r\nrouge_score\r\npy7zr\r\n```\r\n\r\nI have tried to remove `use_diff=True` from `self.to_json_file(output_config_file, use_diff=True)` but I still get the same error. \r\n\r\nAny help is appreciated @sgugger @muellerzr\r\n\r\n\r\n\r\n\r\n",
"That error should be raised on that repo @jmdu99 as they use a custom configuration for their model that doesn't implement the same APIs as the Transformers configurations.",
"Just out of curiosity, what was the problem in your case @ADaBenxiong? Did you find a solution? "
] | 1,657
| 1,675
| 1,660
|
NONE
| null |
### System Info
```
transformers/modeling_utils.py
    model_to_save.config.save_pretrained(save_directory)
transformers/configuration_utils.py
    self.to_json_file(output_config_file, use_diff=True)
TypeError: to_json_file() got an unexpected keyword argument 'use_diff'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
unwrapped_model = accelerator.unwrap_model(student_model)
unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save)
```
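For reference, a minimal sketch of why this matters (the class name is hypothetical): `save_pretrained` calls `config.to_json_file(path, use_diff=True)`, so a custom configuration needs that signature — inheriting from `transformers.PretrainedConfig` provides it:
```
from transformers import PretrainedConfig

class MyConfig(PretrainedConfig):  # hypothetical custom configuration
    model_type = "my-model"

# PretrainedConfig already defines to_json_file(json_file_path, use_diff=True),
# so model.save_pretrained(...) no longer hits the TypeError above.
```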
### Expected behavior
Transformers 4.17.0
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18090/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18089/events
|
https://github.com/huggingface/transformers/pull/18089
| 1,300,299,064
|
PR_kwDOCUB6oc47LAyR
| 18,089
|
support no gpt_j_residual for gpt-neox
|
{
"login": "TopIdiot",
"id": 19198645,
"node_id": "MDQ6VXNlcjE5MTk4NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/19198645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TopIdiot",
"html_url": "https://github.com/TopIdiot",
"followers_url": "https://api.github.com/users/TopIdiot/followers",
"following_url": "https://api.github.com/users/TopIdiot/following{/other_user}",
"gists_url": "https://api.github.com/users/TopIdiot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TopIdiot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TopIdiot/subscriptions",
"organizations_url": "https://api.github.com/users/TopIdiot/orgs",
"repos_url": "https://api.github.com/users/TopIdiot/repos",
"events_url": "https://api.github.com/users/TopIdiot/events{/privacy}",
"received_events_url": "https://api.github.com/users/TopIdiot/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18089). All of your documentation changes will be reflected on that endpoint.",
"@TopIdiot, would existing checkpoints (i.e., https://huggingface.co/EleutherAI/gpt-neox-20b) be usable with this configuration option, or would they output gibberish?",
"> @TopIdiot, would existing checkpoints (i.e., https://huggingface.co/EleutherAI/gpt-neox-20b) be usable with this configuration option, or would they output gibberish?\r\n\r\n@LysandreJik Yes, it can be used. The default value for gpt_j_residual is \"True\", so the exsiting checkpoints would enter the \"if self.gpt_j_residual\" branch. btw, I tested gpt-neox-20b, the result looks good for me.",
"The question is whether the model would give sensible results if you set it to `False`. Transformers is not a modular toolbox, we don't add options to models that make no sense with the pretrained checkpoints of that model, we add new models instead.",
"> The question is whether the model would give sensible results if you set it to `False`. Transformers is not a modular toolbox, we don't add options to models that make no sense with the pretrained checkpoints of that model, we add new models instead.\r\n\r\n@sgugger gpt_j_residual is an option which already existed in gpt-neox (i.e. https://github.com/EleutherAI/gpt-neox/blob/main/configs/20B.yml#L29)\r\n\r\nRecently, we trained EleutherAI/gpt-neox with option gpt_j_residual == False and successfully convert the checkpoint to torch. However, we found that the current version of huggingface's gpt-neox doesn't implement this part. We added it by ourself and got expected result. ",
"You can share the custom code of your model using the [code in the Hub](https://huggingface.co/docs/transformers/custom_models) API. This is typically the kind of changes we don't accept in existing model files (and the reason there is one for GPT-2, GPT-J, GPT-Neo and GPT-Neo-X which are all very similar).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,660
| 1,660
|
NONE
| null |
# What does this PR do?
Supports `gpt_j_residual == False` for GPT-NeoX, implementing the "else" branch in https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/transformer.py#L627
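For context, a simplified sketch of the two residual layouts (illustrative only, not the actual modeling code; `attn`, `mlp`, `ln1`, `ln2` stand in for the block's submodules):
```
def neox_block(x, attn, mlp, ln1, ln2, gpt_j_residual):
    """Simplified residual wiring of one transformer block."""
    if gpt_j_residual:
        # Parallel residual (GPT-J style): attention and MLP read the same input.
        return x + attn(ln1(x)) + mlp(ln2(x))
    # Sequential residual: the MLP consumes the attention output.
    h = x + attn(ln1(x))
    return h + mlp(ln2(h))
```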
## Who can review?
Anyone in the community is free to review the PR.
@sgugger @patrickvonplaten
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18089/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18089",
"html_url": "https://github.com/huggingface/transformers/pull/18089",
"diff_url": "https://github.com/huggingface/transformers/pull/18089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18089.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18088/events
|
https://github.com/huggingface/transformers/issues/18088
| 1,300,076,310
|
I_kwDOCUB6oc5NfZcW
| 18,088
|
RuntimeError - invalid multinomial distribution (with replacement=False, not enough non-negative category to sample)
|
{
"login": "zeke-john",
"id": 67245013,
"node_id": "MDQ6VXNlcjY3MjQ1MDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/67245013?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zeke-john",
"html_url": "https://github.com/zeke-john",
"followers_url": "https://api.github.com/users/zeke-john/followers",
"following_url": "https://api.github.com/users/zeke-john/following{/other_user}",
"gists_url": "https://api.github.com/users/zeke-john/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zeke-john/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zeke-john/subscriptions",
"organizations_url": "https://api.github.com/users/zeke-john/orgs",
"repos_url": "https://api.github.com/users/zeke-john/repos",
"events_url": "https://api.github.com/users/zeke-john/events{/privacy}",
"received_events_url": "https://api.github.com/users/zeke-john/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"is also get this warning: `UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\r\n next_indices = next_tokens // vocab_size`",
"@zeke-john, thanks for your issue! Please use tags responsibly, tagging everyone involved with GitHub won't guarantee you an answer.\r\n\r\nWe try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? We'll be unable to help you without you providing a complete (ideally small) reproducer, so without the long text in question it will be tough to find the issue for you. \r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,660
| 1,660
|
NONE
| null |
### System Info
Whenever I set `do_sample=True`, I get the error `RuntimeError('invalid multinomial distribution (with replacement=False, not enough non-negative category to sample)')`. I don't know why it is happening, but I want to use `do_sample=True` because it gives more relevant results. I'm using Bart-Large-CNN for text summarization, with Hugging Face transformers version 4.16.2; any help would be greatly appreciated.
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_path = "facebook/bart-large-cnn"  # assumed: the issue says Bart-Large-CNN is used
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

max_chunk = 1024
current_chunk = 0
chunks = []

fulltext = ''' long text here'''
# Mark sentence boundaries so the text can be split into chunks
fulltext = fulltext.replace('.', '.<eos>')
fulltext = fulltext.replace('?', '?<eos>')
fulltext = fulltext.replace('!', '!<eos>')
sentences = fulltext.split('<eos>')

# Greedily pack sentences into chunks of at most max_chunk words
for sentence in sentences:
    if len(chunks) == current_chunk + 1:
        if len(chunks[current_chunk]) + len(sentence.split(' ')) <= max_chunk:
            chunks[current_chunk].extend(sentence.split(' '))
        else:
            current_chunk += 1
            chunks.append(sentence.split(' '))
    else:
        chunks.append(sentence.split(' '))

for chunk_id in range(len(chunks)):
    chunks[chunk_id] = ' '.join(chunks[chunk_id])

chunk_summaries = []
for chunk in chunks:
    inputs = tokenizer(str(chunk), return_tensors="pt", truncation=True)
    outputs = model.generate(inputs["input_ids"], do_sample=True)
    chunk_summary = tokenizer.decode(outputs[0])
    # Strip the leading "</s><s>" and trailing "</s>" special tokens
    chunk_summary = str(chunk_summary)[7:-4]
    chunk_summaries.append(chunk_summary)

summary = ' '.join(chunk_summaries)
print(summary)
```
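A minimal sketch of one likely failure mode (an assumption — the thread does not confirm the root cause): `torch.multinomial` raises this exact error when the sampling weights contain no positive entries, which can happen if a chunk is empty or the processed scores degenerate:
```
import torch

# With no positive category to draw from, sampling without replacement fails:
probs = torch.zeros(3)
torch.multinomial(probs, num_samples=1, replacement=False)
# RuntimeError: invalid multinomial distribution (with replacement=False,
# not enough non-negative category to sample)
```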
### Expected behavior
no errors and for it to run normally.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18088/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18087/events
|
https://github.com/huggingface/transformers/pull/18087
| 1,299,955,749
|
PR_kwDOCUB6oc47J5hs
| 18,087
|
[bloom] fix alibi device placement
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for the fix !",
"btw, we also discussed to change the alibi creation logic.\r\n\r\nit should create it once in init with a largish length (say 1k) and not change it again unless the input is longer. \r\n\r\nthen the device and dtype will be automatically handled correctly.\r\n\r\nthis is the logic that all `transformers` positional embeddings use.",
"Yes agreed !\nWe are already addressing these issues in this PR together with refactoring the whole attention block which looks too complicated : https://github.com/huggingface/transformers/pull/17866 \nHere alibi (including the shifting) and the attention mask is created only once "
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
This PR fixes the alibi device placement - currently alibi is created on the default device, which breaks things at times - it has to be placed explicitly on the correct device.
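A rough sketch of the shape of the fix (names approximate, not the exact diff):
```
# Build alibi explicitly on the same device (and dtype) as the activations,
# instead of relying on the framework's default device.
alibi = build_alibi_tensor(attention_mask, num_heads, hidden_states.dtype)
alibi = alibi.to(hidden_states.device)
```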
The problem emerged when trying to get DeepSpeed-Inference working.
Kudos to @RezaYazdaniAminabadi for discovering the problem.
cc: @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18087/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18087/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18087",
"html_url": "https://github.com/huggingface/transformers/pull/18087",
"diff_url": "https://github.com/huggingface/transformers/pull/18087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18087.patch",
"merged_at": 1657469507000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18086/events
|
https://github.com/huggingface/transformers/issues/18086
| 1,299,913,126
|
I_kwDOCUB6oc5Nexmm
| 18,086
|
AttributeError: 'TrainingArguments' object has no attribute 'generation_max_length'
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You need to use `Seq2SeqTrainingArguments` to go with `Seq2SeqTrainer`.",
"> You need to use `Seq2SeqTrainingArguments` to go with `Seq2SeqTrainer`.\r\nWell the thing is when I try to use `Seq2SeqTrainingArguments`, and the dataset value is tensor, it can train. However when I tried to use `TrainingArguments`, and the dataset value is not tensor, it can train.\r\n\r\nBut whatever way I code, the training loss is missed until the last epoch. \r\n\r\nWhat is the tricky here. Thanks! @sgugger \r\n\r\n```\r\n### dataset change\r\nimport torch\r\nclass Dataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings, labels=None):\r\n self.encodings = encodings\r\n self.labels = labels\r\n\r\n def __getitem__(self, idx):\r\n item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\r\n if self.labels:\r\n item[\"labels\"] = torch.tensor(self.labels[idx]-1)\r\n return item\r\n\r\n def __len__(self):\r\n return len(self.encodings[\"input_ids\"])\r\n\r\nimport torch\r\nclass Dataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings, labels=None):\r\n self.encodings = encodings\r\n self.labels = labels\r\n\r\n def __getitem__(self, idx):\r\n item = {key: val[idx] for key, val in self.encodings.items()}\r\n if self.labels:\r\n item[\"labels\"] = self.labels[idx]-1\r\n return item\r\n\r\n def __len__(self):\r\n return len(self.encodings[\"input_ids\"])\r\n\r\ntrain_dataset = Dataset(X_train_tokenized, y_train)\r\nval_dataset = Dataset(X_val_tokenized, y_val)\r\n\r\n\r\nimport numpy as np\r\nfrom sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score\r\ndef compute_metrics(pred):\r\n print(pred)\r\n predict_res = torch.Tensor(pred.predictions[0]) # size:[验证集样本量, label的token长度, vocab大小]\r\n\r\n pred_ids = predict_res.argmax(dim=2)\r\n \r\n ## 2.处理 pred.label_ids\r\n labels_actual = torch.LongTensor(pred.label_ids)\r\n \r\n ## 3.计算accuracy\r\n total_num = labels_actual.shape[0]\r\n acc = torch.sum(torch.all(torch.eq(pred_ids, labels_actual), dim=1))/total_num\r\n return {'accuracy': acc}\r\n\r\n\r\n# Define Trainer\r\nargs = Seq2SeqTrainingArguments(\r\n output_dir=\"output\",\r\n evaluation_strategy=\"steps\",\r\n eval_steps=25,\r\n per_device_train_batch_size=8,\r\n per_device_eval_batch_size=8,\r\n num_train_epochs=100,\r\n seed=0,\r\n load_best_model_at_end=True,\r\n)\r\ntrainer = Trainer(\r\n# model=delta_model3,\r\n model=model,\r\n args=args,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n compute_metrics=compute_metrics,\r\n callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],\r\n)\r\n\r\n# Train pre-trained model\r\ntrainer.train()\r\n```\r\n\r\n> The trainning log is as bellow\r\n<img width=\"450\" alt=\"image\" src=\"https://user-images.githubusercontent.com/84232793/178263827-c967b616-6279-484a-a265-5353b7241687.png\">\r\n",
"Please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep issues for bugs and feature requests only. You didn't indicate you want to log the training loss every 25 steps in your training arguments, so it uses the default of 500.",
"> Please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep issues for bugs and feature requests only. You didn't indicate you want to log the training loss every 25 steps in your training arguments, so it uses the default of 500.\r\n\r\nSorry about that! I will use the forums in the future! Thanks",
"i have the same problem here how did you resolve this issue\r\n"
] | 1,657
| 1,674
| 1,657
|
NONE
| null |
### System Info
```shell
transformers: 4.20.1
platform: Colab
python: 3.7
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
"ade_corpus_v2" in Huggingface RAFT
### Reproduction
```
def compute_metrics(p):
    pred, labels = p
    pred = np.argmax(pred, axis=1)
    accuracy = accuracy_score(y_true=labels, y_pred=pred)
    recall = recall_score(y_true=labels, y_pred=pred)
    precision = precision_score(y_true=labels, y_pred=pred)
    f1 = f1_score(y_true=labels, y_pred=pred)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Define Trainer
args = TrainingArguments(
    output_dir="output",
    evaluation_strategy="steps",
    eval_steps=500,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=100,
    seed=0,
    load_best_model_at_end=True,
)

from transformers import Seq2SeqTrainer

trainer = Seq2SeqTrainer(
    # model=delta_model3,
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)

# Train pre-trained model
trainer.train()
```
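For reference, a minimal sketch of the pairing the traceback points at — `Seq2SeqTrainer` reads `generation_max_length` from its arguments, which only `Seq2SeqTrainingArguments` defines (values illustrative):
```
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="output",
    evaluation_strategy="steps",
    eval_steps=500,
    predict_with_generate=True,  # evaluation then goes through generate()
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    compute_metrics=compute_metrics,
)
```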
### Expected behavior
```shell
It went wrong.
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
FutureWarning,
***** Running training *****
Num examples = 40
Num Epochs = 100
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 500
[500/500 06:18, Epoch 100/100]
Step Training Loss Validation Loss
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-52043b3bb24a> in <module>()
33
34 # Train pre-trained model
---> 35 trainer.train()
3 frames
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1411 resume_from_checkpoint=resume_from_checkpoint,
1412 trial=trial,
-> 1413 ignore_keys_for_eval=ignore_keys_for_eval,
1414 )
1415
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1726 self.control = self.callback_handler.on_step_end(args, self.state, self.control)
1727
-> 1728 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
1729 else:
1730 self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)
1910 metrics = None
1911 if self.control.should_evaluate:
-> 1912 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
1913 self._report_to_hp_search(trial, epoch, metrics)
1914
/usr/local/lib/python3.7/dist-packages/transformers/trainer_seq2seq.py in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix, max_length, num_beams)
66 dictionary also contains the epoch number which comes from the training state.
67 """
---> 68 self._max_length = max_length if max_length is not None else self.args.generation_max_length
69 self._num_beams = num_beams if num_beams is not None else self.args.generation_num_beams
70 return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
AttributeError: 'TrainingArguments' object has no attribute 'generation_max_length'
```
@sgugger
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18086/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18085/events
|
https://github.com/huggingface/transformers/issues/18085
| 1,299,901,174
|
I_kwDOCUB6oc5Neur2
| 18,085
|
Adding TF Implementation of BEiT
|
{
"login": "MadElf1337",
"id": 34575523,
"node_id": "MDQ6VXNlcjM0NTc1NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/34575523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MadElf1337",
"html_url": "https://github.com/MadElf1337",
"followers_url": "https://api.github.com/users/MadElf1337/followers",
"following_url": "https://api.github.com/users/MadElf1337/following{/other_user}",
"gists_url": "https://api.github.com/users/MadElf1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MadElf1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MadElf1337/subscriptions",
"organizations_url": "https://api.github.com/users/MadElf1337/orgs",
"repos_url": "https://api.github.com/users/MadElf1337/repos",
"events_url": "https://api.github.com/users/MadElf1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/MadElf1337/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"cc @NielsRogge @amyeroberts ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @MadElf1337 do you have any updates? Are you still planning on contributing this model? ",
"Yep I’m still working on the model, had to keep it aside for a bit due to my uni exam schedule, but will start again the day my exams are over\r\n\r\n\r\nRegarding the updates, I am done with the architecture, have to write the functions for specific purposes(like segmentation) and the tests",
"Great - glad to hear you're still interested :) \r\n\r\nAs @NielsRogge pointed out, data2vec vision is an extension of BEiT. This means the porting should be a lot simpler! In our [pytorch BEiT implementation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L310), you can see this from the `#Copied from` statements. Ideally the TF implementation would reflect this and be the same as our pytorch implementation, however TF data2vec vision is already implemented. So, we need to move the data2vec code to beit, and then add the necessary `#Copied from` statement in data2vec. Does this make sense? \r\n\r\nCould you open a draft PR for the model please so that the code is visible? \r\n\r\nGood luck with the last of your exams!",
"Yes I’ll open a draft PR to show the code that’s been done till date\r\n\r\nAnd thanks!"
] | 1,657
| 1,662
| null |
NONE
| null |
### Feature request
Addition of TF implementation of BEiT
### Motivation
I have always noticed a discrepancy between the models available for PyTorch and those available for TensorFlow, and I want the models to be usable with both backends.
### Your contribution
I will add the implementation of BEiT in TF :)
cc - @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18085/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18085/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18084/events
|
https://github.com/huggingface/transformers/pull/18084
| 1,299,715,424
|
PR_kwDOCUB6oc47JMJf
| 18,084
|
Making Roformer models compatible with pre-trained Roformer v2 models
|
{
"login": "sijunhe",
"id": 11987277,
"node_id": "MDQ6VXNlcjExOTg3Mjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11987277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sijunhe",
"html_url": "https://github.com/sijunhe",
"followers_url": "https://api.github.com/users/sijunhe/followers",
"following_url": "https://api.github.com/users/sijunhe/following{/other_user}",
"gists_url": "https://api.github.com/users/sijunhe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sijunhe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sijunhe/subscriptions",
"organizations_url": "https://api.github.com/users/sijunhe/orgs",
"repos_url": "https://api.github.com/users/sijunhe/repos",
"events_url": "https://api.github.com/users/sijunhe/events{/privacy}",
"received_events_url": "https://api.github.com/users/sijunhe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18084). All of your documentation changes will be reflected on that endpoint.",
"Thanks a lot for your PR, @sijunhe!\r\n\r\nIn this situation, we would favor adding a new model architecture instead of editing the existing one to add support for a newer model. \r\n\r\nThe gist of it is that:\r\n- If the changes are not part of the original codebase, original paper, or original pretrained weights\r\n- If the initial checkpoints cannot be loaded in the architecture that will be enabled\r\n\r\nthen a new model architecture is warranted. You can read about this aspect of our philosophy [here](https://huggingface.co/blog/transformers-design-philosophy).\r\n\r\nWe have a tool to allow you to generate a model exactly as RoFormer to add your contribution here: [add-new-model-like](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command). In this situation, the arguments that you add here likely don't need to exist: I suppose the rotary operations will always be there in v2, and the `rms_norm` will always be used as well. Is that correct?\r\n\r\nThanks!",
"Hi @LysandreJik. Thanks for sending over HF's philosophy. It was a good read!\r\n\r\nI'd agree with you. But in this case, the difference between RoFormer and RoFormer V2 is so small and I am not sure if it's worth a whole new model and tests. Technically, it's possible to load RoFormer V2 weights with the current RoFormer class and we would just have some redundant weights that would be randomly initialized. \r\n\r\nI think a counter example here is the BERT model and its 3 different kinds of position embeddings (absolute, relative_key and relative_key_query). And in this case, the architectural difference between RoFormer and RoFormer V2 is much smaller than BERT.",
"Good point regarding BERT, but that's actually a mistake from our part when the philosophy was still evolving :sweat_smile:. Same with GPT-2 and some arguments for it to scale better.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
# What does this PR do?
RoFormer v2 is a more recent and lightweight version of RoFormer. The differences between the two models are:
- RoFormer v2 removed the bias term from all the attention modules
- RoFormer v2 used a simple RMS norm instead of LayerNorm
Currently, loading a pre-trained RoFormer v2 model such as [this one](https://huggingface.co/junnyu/roformer_v2_chinese_char_base) with RoFormer raises a lot of "newly initialized but not found in checkpoint" warnings. This PR ensures that the redundant weights are not created when a RoFormer v2 config is provided.
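For context, a minimal sketch of an RMS norm layer (a standard formulation, not copied from the RoFormer v2 code):
```
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x):
        # Normalize by the root mean square instead of mean/variance; no bias term.
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * x * rms
```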
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@JunnYu who contributed RoFormer to HF
@patrickvonplaten who reviewed RoFormer before
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18084/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18084",
"html_url": "https://github.com/huggingface/transformers/pull/18084",
"diff_url": "https://github.com/huggingface/transformers/pull/18084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18084.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18083/events
|
https://github.com/huggingface/transformers/issues/18083
| 1,299,678,282
|
I_kwDOCUB6oc5Nd4RK
| 18,083
|
dictionary update sequence element #5 has length 1; 2 is required
|
{
"login": "skye95git",
"id": 41561936,
"node_id": "MDQ6VXNlcjQxNTYxOTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/41561936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skye95git",
"html_url": "https://github.com/skye95git",
"followers_url": "https://api.github.com/users/skye95git/followers",
"following_url": "https://api.github.com/users/skye95git/following{/other_user}",
"gists_url": "https://api.github.com/users/skye95git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skye95git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skye95git/subscriptions",
"organizations_url": "https://api.github.com/users/skye95git/orgs",
"repos_url": "https://api.github.com/users/skye95git/repos",
"events_url": "https://api.github.com/users/skye95git/events{/privacy}",
"received_events_url": "https://api.github.com/users/skye95git/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Are you sure you pasted the exact command you ran? I have no error when trying it on my side and the config is successfully updated. To use distributed training, just use the pytorch launcher instead of `python` to run your script, see [here](https://huggingface.co/docs/transformers/run_scripts#distributed-training-and-mixed-precision).",
"> Are you sure you pasted the exact command you ran? I have no error when trying it on my side and the config is successfully updated. To use distributed training, just use the pytorch launcher instead of `python` to run your script, see [here](https://huggingface.co/docs/transformers/run_scripts#distributed-training-and-mixed-precision).\r\n\r\nYes. I'm sure. Maybe I should change `--config_overrides vocab_size=52_000,max_position_embeddings=514,num_attention_heads=12,num_hidden_layers=12,type_vocab_size=1` to `--config_overrides \"vocab_size=52_000,max_position_embeddings=514,num_attention_heads=12,num_hidden_layers=12,type_vocab_size=1\"`? In other words, should quotes be added to the config_overrides parameter?",
"> Are you sure you pasted the exact command you ran? I have no error when trying it on my side and the config is successfully updated. To use distributed training, just use the pytorch launcher instead of `python` to run your script, see [here](https://huggingface.co/docs/transformers/run_scripts#distributed-training-and-mixed-precision).\r\n\r\nThanks. I successfully ran distributed training when continue pre-train, but `--per_device_train_batch_size` can only be set to a maximum of 8, increasing to 16 will report an error `CUDA out of memory`. But I use the LineByLineTextDataset to write the following script:\r\n```\r\ntokenizer = RobertaTokenizerFast.from_pretrained(\"roberta-base\")\r\nmodel = RobertaForMaskedLM.from_pretrained(\"roberta-base\")\r\nprint(model.num_parameters())\r\n\r\ntrain_dataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path=f\"{data_dir}/train_codes.txt\",\r\n block_size=128,\r\n)\r\n\r\ntest_dataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path=f\"{data_dir}/valid_codes.txt\",\r\n block_size=128,\r\n)\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=model_dir,\r\n overwrite_output_dir=True,\r\n num_train_epochs=50,\r\n per_gpu_train_batch_size=64,\r\n save_steps=5000,\r\n do_eval=True,\r\n logging_dir=log_dir,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=train_dataset,\r\n eval_dataset = test_dataset\r\n)\r\n\r\ntrainer.train()\r\ntrainer.save_model(model_dir)\r\ntokenizer.save_pretrained(tokenizer_dir)\r\n```\r\nUsing the same training data, my script can handle up to 64 batches per GPU, while RUN_mlm.py can handle only 8 batches per GPU. Why? \r\nCan pyTorch Launcher be used to run distributed training using `LineByLineTextDataset`?",
"\"Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.\"\r\n`per_device_train_batch_size` specifies the batch size to be processed by each GPU, right?",
"@sgugger I used the 'LineByLineTextDataset' script as above to continue pre-train Roberta on multiple cards in a single machine. It seemed to be an unbalanced load.\r\n\r\nIs the single-machine multi-card of LineByLineTextDataset implemented with `DataParallel`? Is there an implementation of `DistributedDataParallel`?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,660
| 1,660
|
NONE
| null |
### System Info
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.4.0-66-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
transformers/examples/pytorch/language-modeling/run_mlm.py @LysandreJik @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I want to pre-train RoBERTa from scratch on my own dataset using `transformers/examples/pytorch/language-modeling/run_mlm.py`.
1. I run the command:
```
python run_mlm.py \
--model_type roberta \
--tokenizer_name /CodeSearchNet/code_txt/tokenizer \
--config_overrides vocab_size=52_000,max_position_embeddings=514,num_attention_heads=12,num_hidden_layers=12,type_vocab_size=1 \
--train_file /data_for_train_tokenizer/CodeSearchNet/train_codes.txt \
--validation_file /data_for_train_tokenizer/CodeSearchNet/valid_codes.txt \
--per_device_train_batch_size 64 \
--per_device_eval_batch_size 64 \
--num_train_epochs 100 \
--overwrite_output_dir \
--line_by_line \
--save_steps 5000 \
--do_train \
--do_eval \
--output_dir /CodeSearchNet/code_txt/model/pretrain_Roberta_from_scratch/CSN/single_file \
--logging_dir /CodeSearchNet/code_txt/log/pretrain_Roberta_from_scratch_CSN_single_file
```
There is an error:
```
07/09/2022 02:00:22 - WARNING - __main__ - You are instantiating a new config instance from scratch.
07/09/2022 02:00:22 - INFO - __main__ - Overriding config: vocab_size=52_000,max_position_embeddings=514,num_attention_heads=12,num_hidden_layers=12,type_vocab_size=1,
Traceback (most recent call last):
File "/transformers/examples/pytorch/language-modeling/run_mlm.py", line 612, in <module>
main()
File "/transformers/examples/pytorch/language-modeling/run_mlm.py", line 359, in main
config.update_from_string(model_args.config_overrides)
File "/transformers/src/transformers/configuration_utils.py", line 850, in update_from_string
d = dict(x.split("=") for x in update_str.split(","))
ValueError: dictionary update sequence element #5 has length 1; 2 is required
```
**How should `config_overrides` be set in `run_mlm.py`?**
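For what it's worth, the parser shown in the traceback splits the override string on `,` and each piece on `=`, so a trailing comma (visible in the log above) produces an empty piece with no `=`. A minimal reproduction:
```
update_str = "num_hidden_layers=12,type_vocab_size=1,"  # note the trailing comma
d = dict(x.split("=") for x in update_str.split(","))
# ValueError: dictionary update sequence element #2 has length 1; 2 is required
```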
2. When I set `per_device_eval_batch_size 64`, there is an error:
```
RuntimeError: CUDA out of memory. Tried to allocate 21.48 GiB (GPU 0; 39.59 GiB total capacity; 26.26 GiB already allocated; 11.40 GiB free; 26.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
0%| | 0/175900 [00:27<?, ?it/s]
```
This looks like load imbalance caused by naive data parallelism. **How do I set up distributed data parallel (DDP) training with the trainer?**
### Expected behavior
Be able to train RoBERTa from scratch in DDP mode with a large batch size.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18083/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18082/events
|
https://github.com/huggingface/transformers/issues/18082
| 1,299,638,463
|
I_kwDOCUB6oc5Ndui_
| 18,082
|
Export bert to onnx failed
|
{
"login": "nonstopfor",
"id": 47969037,
"node_id": "MDQ6VXNlcjQ3OTY5MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/47969037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nonstopfor",
"html_url": "https://github.com/nonstopfor",
"followers_url": "https://api.github.com/users/nonstopfor/followers",
"following_url": "https://api.github.com/users/nonstopfor/following{/other_user}",
"gists_url": "https://api.github.com/users/nonstopfor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nonstopfor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nonstopfor/subscriptions",
"organizations_url": "https://api.github.com/users/nonstopfor/orgs",
"repos_url": "https://api.github.com/users/nonstopfor/repos",
"events_url": "https://api.github.com/users/nonstopfor/events{/privacy}",
"received_events_url": "https://api.github.com/users/nonstopfor/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@[nonstopfor](https://github.com/nonstopfor), you can change dynamic_axes to fixed shape with onnx python API like the following:\r\n```\r\n import onnx\r\n model = onnx.load(\"input.onnx\")\r\n for tensor in model.graph.input:\r\n for dim_proto in tensor.type.tensor_type.shape.dim:\r\n if dim_proto.HasField(\"dim_param\"): # and dim_proto.dim_param == 'batch_size':\r\n dim_proto.Clear()\r\n dim_proto.dim_value = 32 # fixed batch size\r\n for tensor in model.graph.output:\r\n for dim_proto in tensor.type.tensor_type.shape.dim:\r\n if dim_proto.HasField(\"dim_param\"):\r\n dim_proto.Clear()\r\n dim_proto.dim_value = 32 # fixed batch size\r\n\r\n onnx.save(model, \"output.onnx\")\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am facing the same issue converting a custom implementation of DETR (transformer).\r\n@nonstopfor were you able to fix this?\r\n"
] | 1,657
| 1,667
| 1,660
|
NONE
| null |
### System Info
- `transformers` version: 4.17.0
- Platform: Linux-4.15.0-167-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- Onnx version: 1.12.0
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <True>
- Using distributed or parallel set-up in script?: <No>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Code:
```
import torch
from transformers import AutoModel
device = torch.device('cuda')
model = AutoModel.from_pretrained('bert-base-chinese')
model.to(device)
model.eval()
batch_size = 32
size = (batch_size, 256)
export_onnx_file = 'save/bert.onnx'
input_ids = torch.zeros(size=size, device=device, dtype=torch.long)
attention_mask = torch.ones(size=size, device=device, dtype=torch.float)
token_type_ids = torch.zeros(size=size, device=device, dtype=torch.long)
inputs = (input_ids, attention_mask, token_type_ids)
torch.onnx.export(
    model=model,
    args=inputs,
    f=export_onnx_file,
    verbose=False,
    opset_version=12,
    do_constant_folding=True,
    output_names=["last_hidden_state", "pooler_output"],
    input_names=["input_ids", "attention_mask", "token_type_ids"],
)
```
### Expected behavior
Error info:
```
/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/symbolic_helper.py:325: UserWarning: Type cannot be inferred, which might cause exported graph to produce incorrect results.
warnings.warn("Type cannot be inferred, which might cause exported graph to produce incorrect results.")
[W shape_type_inference.cpp:434] Warning: Constant folding in symbolic shape inference fails: Index is supposed to be an empty tensor or a vector
Exception raised from index_select_out_cuda_impl at /pytorch/aten/src/ATen/native/cuda/Indexing.cu:742 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7ff9245c7d62 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x5f (0x7ff9245c475f in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #2: void at::native::(anonymous namespace)::index_select_out_cuda_impl<float>(at::Tensor&, at::Tensor const&, long, at::Tensor const&) + 0x190d (0x7ff7a4e601bd in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #3: at::native::index_select_out_cuda(at::Tensor const&, long, at::Tensor const&, at::Tensor&) + 0x3d3 (0x7ff7a4dce0e3 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #4: at::native::index_select_cuda(at::Tensor const&, long, at::Tensor const&) + 0xd0 (0x7ff7a4dce610 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #5: <unknown function> + 0x25756d6 (0x7ff7a5d296d6 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #6: <unknown function> + 0x2575722 (0x7ff7a5d29722 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cuda_cu.so)
frame #7: at::_ops::index_select::redispatch(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) + 0xb9 (0x7ff7f5617649 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x3253be3 (0x7ff7f6f95be3 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x3254215 (0x7ff7f6f96215 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #10: at::_ops::index_select::call(at::Tensor const&, long, at::Tensor const&) + 0x166 (0x7ff7f5697296 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #11: torch::jit::onnx_constant_fold::runTorchBackendForOnnx(torch::jit::Node const*, std::vector<at::Tensor, std::allocator<at::Tensor> >&, int) + 0x1b5f (0x7ff8d8cf023f in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0xbcea6a (0x7ff8d8d37a6a in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #13: torch::jit::ONNXShapeTypeInference(torch::jit::Node*, std::map<std::string, c10::IValue, std::less<std::string>, std::allocator<std::pair<std::string const, c10::IValue> > > const&, int) + 0xa8e (0x7ff8d8d3d30e in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #14: <unknown function> + 0xbd5e12 (0x7ff8d8d3ee12 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #15: <unknown function> + 0xb414c0 (0x7ff8d8caa4c0 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #16: <unknown function> + 0x2a5aa8 (0x7ff8d840eaa8 in /home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #45: __libc_start_main + 0xe7 (0x7ff93fb25c87 in /lib/x86_64-linux-gnu/libc.so.6)
(function ComputeConstantFolding)
Traceback (most recent call last):
File "onnx_tensorrt.py", line 425, in <module>
test_bert()
File "onnx_tensorrt.py", line 316, in test_bert
input_names=["input_ids", "attention_mask", "token_type_ids"])
File "/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/__init__.py", line 320, in export
custom_opsets, enable_onnx_checker, use_external_data_format)
File "/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 111, in export
custom_opsets=custom_opsets, use_external_data_format=use_external_data_format)
File "/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 729, in _export
dynamic_axes=dynamic_axes)
File "/home/zhangzhexin/anaconda3/lib/python3.7/site-packages/torch/onnx/utils.py", line 545, in _model_to_graph
_export_onnx_opset_version)
RuntimeError: Index is supposed to be an empty tensor or a vector
```
However, if I set `dynamic_axes`, there is no problem:
```
torch.onnx.export(
    model=model,
    args=inputs,
    f=export_onnx_file,
    verbose=False,
    opset_version=12,
    do_constant_folding=True,
    output_names=["last_hidden_state", "pooler_output"],
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    dynamic_axes={
        "input_ids": {0: "batch_size"},
        "attention_mask": {0: "batch_size"},
        "token_type_ids": {0: "batch_size"},
    },
)
```
Because I need to further convert the ONNX model to TensorRT, and my TensorRT version only supports fixed input shapes, I don't want to set `dynamic_axes`. How can I fix this problem without setting `dynamic_axes`?
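One workaround that may be worth trying (an assumption, not verified on this model): the crash happens inside constant folding (`ComputeConstantFolding` in the trace above), so exporting with `do_constant_folding=False` could avoid it while keeping fixed input shapes:
```
torch.onnx.export(
    model=model,
    args=inputs,
    f=export_onnx_file,
    opset_version=12,
    do_constant_folding=False,  # skip the folding step that fails above
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    output_names=["last_hidden_state", "pooler_output"],
)
```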
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18082/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18081/events
|
https://github.com/huggingface/transformers/pull/18081
| 1,299,629,775
|
PR_kwDOCUB6oc47I7O6
| 18,081
|
Added the timeout variable in training args to avoid socket timeouts in DDP calls
|
{
"login": "dvlshah",
"id": 16095226,
"node_id": "MDQ6VXNlcjE2MDk1MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/16095226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvlshah",
"html_url": "https://github.com/dvlshah",
"followers_url": "https://api.github.com/users/dvlshah/followers",
"following_url": "https://api.github.com/users/dvlshah/following{/other_user}",
"gists_url": "https://api.github.com/users/dvlshah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dvlshah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvlshah/subscriptions",
"organizations_url": "https://api.github.com/users/dvlshah/orgs",
"repos_url": "https://api.github.com/users/dvlshah/repos",
"events_url": "https://api.github.com/users/dvlshah/events{/privacy}",
"received_events_url": "https://api.github.com/users/dvlshah/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18081). All of your documentation changes will be reflected on that endpoint.",
"Hey @dvlshah, thank you for your PR! Would this parameter need to be used on https://github.com/huggingface/transformers/blob/ac98a88fbc6377f93e8b7fbd244b0c3331bb82a0/src/transformers/training_args.py#L1310?\r\n\r\ncc @sgugger ",
"> Hey @dvlshah, thank you for your PR! Would this parameter need to be used on\r\n> \r\n> https://github.com/huggingface/transformers/blob/ac98a88fbc6377f93e8b7fbd244b0c3331bb82a0/src/transformers/training_args.py#L1310\r\n> \r\n> ?\r\n> cc @sgugger\r\n\r\nThe idea is provide the option to use this parameter if they want in **torch.distributed.init_process_group(backend=self.xpu_backend, rank=rank, world_size=size)** fn call.",
"Thanks, but I'm not sure I follow the PR: why add a new `TrainingArguments` if it's not used anywhere?",
"@sgugger I need to add the timeout var in the torch.distributed.init_process_group call. Forgot the push the change in the PR.",
"Hey @dvlshah, did you push the changes?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
Overrides the default timeout defined by PyTorch in **torch.distributed.init_process_group** calls by introducing a timeout argument, preventing socket timeouts.
Added a custom timeout argument in **src/transformers/training_args.py**. It can be used to override the timeout argument in the **init_process_group** function call, to avoid socket timeouts when mapping or tokenizing huge datasets takes a long time.
The timeout variable is an **int** with a default value of **1800** seconds, which is the default in torch.distributed.init_process_group.
torch.distributed.init_process_group expects this timeout as a **datetime.timedelta** object, so the integer is converted before being passed in.
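A minimal sketch of the intended wiring (the variable names and surrounding setup here are illustrative, not the final implementation):
```
import datetime
import torch.distributed as dist

# Illustrative values; in the PR the timeout would come from TrainingArguments.
ddp_timeout = 1800  # seconds, matching PyTorch's default
rank, world_size = 0, 1  # assumes MASTER_ADDR/MASTER_PORT are set in the env

# init_process_group expects a datetime.timedelta, so the int is converted.
dist.init_process_group(
    backend="gloo",
    rank=rank,
    world_size=world_size,
    timeout=datetime.timedelta(seconds=ddp_timeout),
)
```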
# What does this PR do?
Fixes #18054 #17106
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18081/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18081",
"html_url": "https://github.com/huggingface/transformers/pull/18081",
"diff_url": "https://github.com/huggingface/transformers/pull/18081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18081.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18080/events
|
https://github.com/huggingface/transformers/issues/18080
| 1,299,580,180
|
I_kwDOCUB6oc5NdgUU
| 18,080
|
attention_mask bug when training Wav2Vec2ForCTC with DeepSpeed
|
{
"login": "ddobokki",
"id": 44228269,
"node_id": "MDQ6VXNlcjQ0MjI4MjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/44228269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddobokki",
"html_url": "https://github.com/ddobokki",
"followers_url": "https://api.github.com/users/ddobokki/followers",
"following_url": "https://api.github.com/users/ddobokki/following{/other_user}",
"gists_url": "https://api.github.com/users/ddobokki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddobokki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddobokki/subscriptions",
"organizations_url": "https://api.github.com/users/ddobokki/orgs",
"repos_url": "https://api.github.com/users/ddobokki/repos",
"events_url": "https://api.github.com/users/ddobokki/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddobokki/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Sorry for being so slow / late here @ddobokki ! \r\n\r\nI think your solution sounds reasonable:\r\n```\r\nbatch['attention_mask'] = batch['attention_mask'].to(torch.long)\r\n```\r\n\r\n=> `attention_mask` should be in `long` so this is a welcome change. Do you mind opening a PR for this? \r\n\r\nBTW, we do the same (casting to `long`) for similar inputs for pre-training: https://github.com/huggingface/transformers/blob/6268694e27f1fc0192ba24e4bec181061b4a9bf8/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L335",
"@patrickvonplaten Thank you for the comments!\r\nIt's a small change but i glad for contribution!\r\nI'll opening a PR."
] | 1,657
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.19.2
- Platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.27
- Python version: 3.8.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I experienced a problem when training Wav2Vec2ForCTC.
If I preprocess the data to create an attention_mask, its dtype is int32.
Here is a simple example:
```
import torch
from transformers import Wav2Vec2FeatureExtractor
feature_extractor = Wav2Vec2FeatureExtractor(return_attention_mask=True)
data = [{'input_values':[0.1,0.1,0.1]},{'input_values':[0.2,0.2,0.2,0.2,0.2]}]
attn_mask = feature_extractor.pad(data,padding = "longest",return_tensors="pt")['attention_mask']
print(attn_mask.dtype)
-> torch.int32
```
This causes a problem when training Wav2Vec2ForCTC with DeepSpeed.
The _prepare_input method in trainer.py changes int32 to float16 (when training in fp16):
```
def _prepare_input(self, data: Union[torch.Tensor, Any]) -> Union[torch.Tensor, Any]:
"""
Prepares one `data` before feeding it to the model, be it a tensor or a nested list/dictionary of tensors.
"""
if isinstance(data, Mapping):
return type(data)({k: self._prepare_input(v) for k, v in data.items()})
elif isinstance(data, (tuple, list)):
return type(data)(self._prepare_input(v) for v in data)
elif isinstance(data, torch.Tensor):
kwargs = dict(device=self.args.device)
if self.deepspeed and data.dtype != torch.int64:
# NLP models inputs are int64 and those get adjusted to the right dtype of the
# embedding. Other models such as wav2vec2's inputs are already float and thus
# may need special handling to match the dtypes of the model
kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype()))
return data.to(**kwargs)
return data
```
and the forward pass of Wav2Vec2ForCTC uses the sum of the attention_mask values:
```
loss = None
if labels is not None:
if labels.max() >= self.config.vocab_size:
raise ValueError(f"Label values must be <= vocab_size: {self.config.vocab_size}")
# retrieve loss input_lengths from attention_mask
attention_mask = (
attention_mask if attention_mask is not None else torch.ones_like(input_values, dtype=torch.long)
)
input_lengths = self._get_feat_extract_output_lengths(attention_mask.sum(-1)).to(torch.long) # Here!
```
Because the attention_mask's dtype is now float16 (under DeepSpeed) and the audio length vectors are long, attention_mask.sum(-1) produces many 'inf' values, which breaks training.
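A standalone repro of the overflow (illustration only, not the training code itself):
```
import torch

# float16 cannot represent values above 65504, so summing a long
# attention mask in half precision overflows to inf.
mask = torch.ones(100_000)
print(mask.to(torch.float16).sum(-1))  # tensor(inf, dtype=torch.float16)
print(mask.to(torch.long).sum(-1))     # tensor(100000)
```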
Is this a known bug?
I solved this problem by editing DataCollatorCTCWithPadding in the [example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L265) like this:
```
batch['attention_mask'] = batch['attention_mask'].to(torch.long)
```
but I would like to know if there is a better solution.
### Expected behavior
Maybe change the attention_mask dtype produced by the FeatureExtractor, or adjust the _prepare_input method's logic.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18080/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18079/events
|
https://github.com/huggingface/transformers/pull/18079
| 1,299,461,889
|
PR_kwDOCUB6oc47IZDw
| 18,079
|
Custom pipeline
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger what is your opinion on using attrs for validation with the representation of the pipeline task? I think it would be nice if it has a validation step here. (this means introducing `attrs` as a transformers' dependency).",
"Yes, the doc clearly states that you need to save your custom pipeline in a module and import it from there. I could add support from writing something from `__main__.py` but maybe in a followup PR that also deals with custom models/tokenziers/configs etc?",
"[Line 655](https://github.com/huggingface/transformers/blob/f4e172716b91b477ce3cddc9a253094b7121a4b8/src/transformers/pipelines/__init__.py#L655) in\r\nhttps://github.com/huggingface/transformers/blob/f4e172716b91b477ce3cddc9a253094b7121a4b8/src/transformers/pipelines/__init__.py#L649-L657\r\nwill call\r\nhttps://github.com/huggingface/transformers/blob/f4e172716b91b477ce3cddc9a253094b7121a4b8/src/transformers/pipelines/base.py#L257\r\n\r\nIn the current version, we have `model_class` not being an `Auto` class for `(TF) ImageClassificationPipelineTests`, and we get test failure `TypeError: ('Keyword argument not understood:', 'trust_remote_code')`\r\n\r\nhttps://github.com/huggingface/transformers/runs/7421505300?check_suite_focus=true\r\n\r\nAdding `TFAutoModelForImageClassification` in `src/transformers/pipelines/__init__.py` will fix the issue.\r\n"
] | 1,657
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds the ability to support custom pipelines on the Hub and share them with everyone else. Like the code-on-the-Hub feature for models, tokenizers, etc., the user has to add `trust_remote_code=True` when they want to use one. Apart from this, the best way to get familiar with the feature is to look at the [added documentation](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18079/en/add_new_pipeline#adding-it-to-the-list-of-supported-tasks).
Note: this PR changes the newly added `PIPELINE_REGISTRY.register_pipeline` API to accept all the arguments one by one instead of inside a big dictionary. This makes the API easier to use in my opinion.
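For illustration, registration with the argument-by-argument API looks roughly like this (the task name and pipeline class below are made up for the example; see the linked documentation for the real walkthrough):
```
from transformers import AutoModelForSequenceClassification
from transformers.pipelines import PIPELINE_REGISTRY, Pipeline

# A made-up custom pipeline; a real one would do meaningful work in
# each of the three steps below.
class PairClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        return {}, {}, {}

    def preprocess(self, inputs):
        return self.tokenizer(inputs, return_tensors="pt")

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        return model_outputs.logits.argmax(-1).item()

# Arguments are passed one by one instead of inside a big dictionary.
PIPELINE_REGISTRY.register_pipeline(
    "pair-classification",
    pipeline_class=PairClassificationPipeline,
    pt_model=AutoModelForSequenceClassification,
)
```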
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18079/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18079/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18079",
"html_url": "https://github.com/huggingface/transformers/pull/18079",
"diff_url": "https://github.com/huggingface/transformers/pull/18079.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18079.patch",
"merged_at": 1658224956000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18078/events
|
https://github.com/huggingface/transformers/pull/18078
| 1,299,400,063
|
PR_kwDOCUB6oc47IL0r
| 18,078
|
Make predict() close progress bars after finishing (#17952)
|
{
"login": "neverix",
"id": 46641404,
"node_id": "MDQ6VXNlcjQ2NjQxNDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/46641404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neverix",
"html_url": "https://github.com/neverix",
"followers_url": "https://api.github.com/users/neverix/followers",
"following_url": "https://api.github.com/users/neverix/following{/other_user}",
"gists_url": "https://api.github.com/users/neverix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neverix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neverix/subscriptions",
"organizations_url": "https://api.github.com/users/neverix/orgs",
"repos_url": "https://api.github.com/users/neverix/repos",
"events_url": "https://api.github.com/users/neverix/events{/privacy}",
"received_events_url": "https://api.github.com/users/neverix/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Not sure about the tests, should they be there for notebooks too? I'll go with this for now",
"I just triggered all them by pushing a copy of your branch in the main fork of the repo, circleCI is very finnicky. Let's check everything is green!",
"Failure is flaky, so this is good to merge, thanks again!"
] | 1,657
| 1,657
| 1,657
|
CONTRIBUTOR
| null |
Fixes #17952 by adding an `on_predict` callback.
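For reference, a user callback can hook the new event like this (a minimal sketch; the `metrics` keyword mirrors the other evaluation events):
```
from transformers import TrainerCallback

class PredictLogger(TrainerCallback):
    # Called once predict() has finished, so progress bars and logs
    # can be cleaned up or inspected here.
    def on_predict(self, args, state, control, metrics=None, **kwargs):
        print("predict() finished with metrics:", metrics)
```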
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18078/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18078",
"html_url": "https://github.com/huggingface/transformers/pull/18078",
"diff_url": "https://github.com/huggingface/transformers/pull/18078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18078.patch",
"merged_at": 1657313065000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18077/events
|
https://github.com/huggingface/transformers/pull/18077
| 1,299,098,903
|
PR_kwDOCUB6oc47HK4P
| 18,077
|
Fix slow CI by pinning resampy
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,657
| 1,657
|
COLLABORATOR
| null |
# What does this PR do?
The recent release of resampy (0.3.1) seems to have suddenly made a lot of things (even unrelated to speech) very slow, and the CI has several jobs timing out. This PR fixes that by pinning resampy to an earlier version.
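The pin itself amounts to a one-line version constraint (shown here generically; the exact dependency list it lives in is an implementation detail):
```
# In the package's dependency list:
deps = [
    "resampy<0.3.1",  # 0.3.1 made several CI jobs time out
]
```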
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18077/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18077/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18077",
"html_url": "https://github.com/huggingface/transformers/pull/18077",
"diff_url": "https://github.com/huggingface/transformers/pull/18077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18077.patch",
"merged_at": 1657291886000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18076/events
|
https://github.com/huggingface/transformers/pull/18076
| 1,299,050,021
|
PR_kwDOCUB6oc47HARe
| 18,076
|
[Do not merge] debug Circleci
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657
| 1,662
| 1,661
|
COLLABORATOR
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18076/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18076",
"html_url": "https://github.com/huggingface/transformers/pull/18076",
"diff_url": "https://github.com/huggingface/transformers/pull/18076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18076.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18075/events
|
https://github.com/huggingface/transformers/issues/18075
| 1,298,913,499
|
I_kwDOCUB6oc5Na9jb
| 18,075
|
Have 'Random Crop' option for truncation_side for Tokenizer
|
{
"login": "SantoshGuptaML",
"id": 57730245,
"node_id": "MDQ6VXNlcjU3NzMwMjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/57730245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SantoshGuptaML",
"html_url": "https://github.com/SantoshGuptaML",
"followers_url": "https://api.github.com/users/SantoshGuptaML/followers",
"following_url": "https://api.github.com/users/SantoshGuptaML/following{/other_user}",
"gists_url": "https://api.github.com/users/SantoshGuptaML/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SantoshGuptaML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SantoshGuptaML/subscriptions",
"organizations_url": "https://api.github.com/users/SantoshGuptaML/orgs",
"repos_url": "https://api.github.com/users/SantoshGuptaML/repos",
"events_url": "https://api.github.com/users/SantoshGuptaML/events{/privacy}",
"received_events_url": "https://api.github.com/users/SantoshGuptaML/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@SantoshGuptaML Where would you want this to happen?\r\n\r\nOne option you have is to write a custom slow (non-fast) tokenizer. You can follow this example where the tokenizer is able to make random variations in output tokens: https://github.com/huggingface/transformers/pull/11149/files\r\nafaik rust is used for fast tokenizers, but the slow tokenizers are written in python.\r\n\r\nAnother option is you can subclass the DataCollator. This allows you to augment samples during training using the Trainer API instead of creating an augmented dataset ahead of time. Here's a working (albeit unoptimized) example of a custom data collator that does what you want.\r\n```\r\nfrom transformers import DefaultDataCollator\r\nfrom dataclasses import dataclass\r\nimport random\r\n\r\n\r\n@dataclass\r\nclass RandomCropDataCollator(DataCollatorWithPadding):\r\n random_truncation_token_length = 10\r\n\r\n def __call__(self, features):\r\n for f in features:\r\n original_token_length = len(f['input_ids'])\r\n start_truncation = random.randint(0, original_token_length-self.random_truncation_token_length)\r\n f['input_ids'] = f['input_ids'][:start_truncation] + f['input_ids'][start_truncation+self.random_truncation_token_length:]\r\n f['attention_mask'] = f['attention_mask'][:start_truncation] + f['attention_mask'][start_truncation+self.random_truncation_token_length:]\r\n end_shape = len(f['input_ids'])\r\n #print(original_token_length, \"-------->\", end_shape)\r\n return super().__call__(features)\r\n\r\n```\r\n`data_collator = RandomCropDataCollator(tokenizer)`\r\n[colab](https://colab.research.google.com/drive/1HRTDjuKw1TRTRlIT08MhZOXCbViGlpht?usp=sharing)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,657
| 1,661
| 1,661
|
NONE
| null |
### Feature request
Currently the tokenizer has only two options for truncation_side: 'right' and 'left'.
I would like an option for 'random_crop' so that it takes a sequence of max length anywhere from the sequence.
### Motivation
As a form of data augmentation, some people might want a random crop rather than consistently cropping from one side.
This will vary the inputs into the model from the same data source.
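A minimal sketch of the requested behaviour on a plain list of token ids (illustrative only, not an actual tokenizer integration):
```
import random

def random_crop(input_ids, max_length):
    # Keep a contiguous window of max_length tokens starting at a
    # random offset, instead of always cropping left or right.
    if len(input_ids) <= max_length:
        return input_ids
    start = random.randint(0, len(input_ids) - max_length)
    return input_ids[start:start + max_length]

print(random_crop(list(range(20)), max_length=8))
```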
### Your contribution
I'm not quite sure. I don't know Rust, which I believe is what the fast tokenizers are based on.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18075/timeline
|
completed
| null | null |