Dataset columns (name, dtype, observed stats):

| column | dtype | stats |
| --- | --- | --- |
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/18574
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18574/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18574/comments
https://api.github.com/repos/huggingface/transformers/issues/18574/events
https://github.com/huggingface/transformers/pull/18574
1,335,688,523
PR_kwDOCUB6oc49A0jb
18,574
add opt "flush_denormal" in training_args.
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@yao-matrix @sgugger please help review", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18574). All of your documentation changes will be reflected on that endpoint.", "Thanks for your PR but I don't completely understand why this should leave in the Trainer? Users can just add `torch.set_flush_denormal(True)` to their script.", "Hi, I think not all the data scientists are familiar with the option, if we add it to the training arg. at least they could see it from --help and get the point. @sgugger\r\n\r\n> Thanks for your PR but I don't completely understand why this should leave in the Trainer? Users can just add `torch.set_flush_denormal(True)` to their script.\r\n\r\n", "Maybe it could be documented somewhere in this case, but I doubt adding yet another training argument is going to make it any more visible. There are currently 90 of them.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,666
1,663
CONTRIBUTOR
null
To solve the low performance issue caused by denormal numbers, users can enable this option. Signed-off-by: Wang, Yi A <yi.a.wang@intel.com> # What does this PR do? [Denormal numbers](https://en.wikipedia.org/wiki/Denormal_number) store extremely small values close to 0. Computations with denormal numbers are remarkably slower than with normalized numbers. To solve the low performance issue they cause, users can enable the new "flush_denormal" option. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger, please help review it
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18574/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18574", "html_url": "https://github.com/huggingface/transformers/pull/18574", "diff_url": "https://github.com/huggingface/transformers/pull/18574.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18574.patch", "merged_at": null }
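The "denormal number" behavior motivating the PR above can be illustrated with the standard library alone (the PR itself wires `torch.set_flush_denormal(True)` into the Trainer; this stdlib-only sketch just shows what a denormal value is):

```python
import sys

# The smallest *normal* positive double, and a *denormal* (subnormal)
# value below it. Denormals remain representable but lose precision and,
# on many CPUs, trigger slow microcoded arithmetic -- which is why
# flushing them to zero (torch.set_flush_denormal(True)) can speed
# up training dominated by near-zero gradients.
smallest_normal = sys.float_info.min        # ~2.2250738585072014e-308
denormal = smallest_normal / 2**10          # well inside the subnormal range

print(denormal > 0.0)               # still nonzero: representable as a subnormal
print(denormal < smallest_normal)   # below the normal range
```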
https://api.github.com/repos/huggingface/transformers/issues/18573
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18573/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18573/comments
https://api.github.com/repos/huggingface/transformers/issues/18573/events
https://github.com/huggingface/transformers/pull/18573
1,335,680,203
PR_kwDOCUB6oc49Ayxv
18,573
Fix resizing bug in OWL-ViT
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks for fixing! However, all \"owlvit\" checkpoints on the hub have `\"do_center_crop\": true` in their preprocessor_config.json.\r\n> \r\n> Should we update them after this PR is merged?\r\n\r\nGood point, yes, we should update them after the merge" ]
1,660
1,665
1,660
CONTRIBUTOR
null
# What does this PR do? Fixes a resizing issue in `OwlViTFeatureExtractor` that led to images being resized along only one dimension and then cropped along the other dimension later in the preprocessing pipeline. The issue was due to defining the size as a single value instead of a tuple (768 instead of (768, 768)). The configuration files are updated and the `OwlViTProcessor` can now resize input images correctly. This PR changes the default target `size` and sets the default `do_center_crop` argument to False. Fixes #18553 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? @NielsRogge could you take a look?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18573/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18573", "html_url": "https://github.com/huggingface/transformers/pull/18573", "diff_url": "https://github.com/huggingface/transformers/pull/18573.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18573.patch", "merged_at": 1660221863000 }
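The int-vs-tuple mismatch behind the OWL-ViT fix above (768 instead of (768, 768)) can be sketched with a small hypothetical helper; this is not the actual `OwlViTFeatureExtractor` code, just the kind of normalization that prevents the one-dimension resize bug:

```python
def normalize_size(size):
    """Interpret a `size` config value as an explicit (height, width) pair.

    Hypothetical helper: a bare int like 768 is expanded to (768, 768);
    feeding the raw int to a resize call that expects a pair is exactly
    the kind of mismatch that resizes along only one dimension.
    """
    if isinstance(size, int):
        return (size, size)
    return tuple(size)

print(normalize_size(768))          # (768, 768)
print(normalize_size([480, 640]))   # (480, 640)
```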
https://api.github.com/repos/huggingface/transformers/issues/18572
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18572/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18572/comments
https://api.github.com/repos/huggingface/transformers/issues/18572/events
https://github.com/huggingface/transformers/pull/18572
1,335,622,453
PR_kwDOCUB6oc49AmVR
18,572
Segformer TF: fix output size in documentation
{ "login": "joihn", "id": 11663917, "node_id": "MDQ6VXNlcjExNjYzOTE3", "avatar_url": "https://avatars.githubusercontent.com/u/11663917?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joihn", "html_url": "https://github.com/joihn", "followers_url": "https://api.github.com/users/joihn/followers", "following_url": "https://api.github.com/users/joihn/following{/other_user}", "gists_url": "https://api.github.com/users/joihn/gists{/gist_id}", "starred_url": "https://api.github.com/users/joihn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joihn/subscriptions", "organizations_url": "https://api.github.com/users/joihn/orgs", "repos_url": "https://api.github.com/users/joihn/repos", "events_url": "https://api.github.com/users/joihn/events{/privacy}", "received_events_url": "https://api.github.com/users/joihn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi,\r\n\r\nthanks for your PR. Could you also fix this in `modeling_segformer.py` for the PyTorch implementation?\r\n\r\nThanks!" ]
1,660
1,660
1,660
CONTRIBUTOR
null
Fixes #18557: https://github.com/huggingface/transformers/issues/18557 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sayakpaul
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18572/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18572", "html_url": "https://github.com/huggingface/transformers/pull/18572", "diff_url": "https://github.com/huggingface/transformers/pull/18572.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18572.patch", "merged_at": 1660208377000 }
https://api.github.com/repos/huggingface/transformers/issues/18571
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18571/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18571/comments
https://api.github.com/repos/huggingface/transformers/issues/18571/events
https://github.com/huggingface/transformers/pull/18571
1,335,500,343
PR_kwDOCUB6oc49AMrf
18,571
Add type hints for ViLT models
{ "login": "donelianc", "id": 7807897, "node_id": "MDQ6VXNlcjc4MDc4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donelianc", "html_url": "https://github.com/donelianc", "followers_url": "https://api.github.com/users/donelianc/followers", "following_url": "https://api.github.com/users/donelianc/following{/other_user}", "gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donelianc/subscriptions", "organizations_url": "https://api.github.com/users/donelianc/orgs", "repos_url": "https://api.github.com/users/donelianc/repos", "events_url": "https://api.github.com/users/donelianc/events{/privacy}", "received_events_url": "https://api.github.com/users/donelianc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,660
1,660
1,660
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adding type hints for ` ViLT` model (PyTorch). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? _Ran `make fixup` before last commit._ ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18571/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18571", "html_url": "https://github.com/huggingface/transformers/pull/18571", "diff_url": "https://github.com/huggingface/transformers/pull/18571.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18571.patch", "merged_at": null }
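The ViLT PR above adds type hints to the model's forward passes. As an illustrative sketch only (names mirror the usual transformers pattern, not the exact ViLT source; the tensor types are kept as strings so the sketch runs without torch installed):

```python
from typing import Optional, Tuple, Union

# A trimmed forward() signature showing the kind of annotations such a
# type-hints PR adds: every argument gets an Optional[...] hint with a
# default of None, and the return type covers both tuple and ModelOutput.
def forward(
    input_ids: Optional["torch.LongTensor"] = None,
    attention_mask: Optional["torch.FloatTensor"] = None,
    output_attentions: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, "BaseModelOutput"]:
    ...

print(forward.__annotations__["return_dict"])  # typing.Optional[bool]
```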
https://api.github.com/repos/huggingface/transformers/issues/18570
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18570/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18570/comments
https://api.github.com/repos/huggingface/transformers/issues/18570/events
https://github.com/huggingface/transformers/issues/18570
1,335,418,228
I_kwDOCUB6oc5PmN10
18,570
How to load a fine-tuned model and inference after running run_clip.py?
{ "login": "gongshaojie12", "id": 6407116, "node_id": "MDQ6VXNlcjY0MDcxMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/6407116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gongshaojie12", "html_url": "https://github.com/gongshaojie12", "followers_url": "https://api.github.com/users/gongshaojie12/followers", "following_url": "https://api.github.com/users/gongshaojie12/following{/other_user}", "gists_url": "https://api.github.com/users/gongshaojie12/gists{/gist_id}", "starred_url": "https://api.github.com/users/gongshaojie12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gongshaojie12/subscriptions", "organizations_url": "https://api.github.com/users/gongshaojie12/orgs", "repos_url": "https://api.github.com/users/gongshaojie12/repos", "events_url": "https://api.github.com/users/gongshaojie12/events{/privacy}", "received_events_url": "https://api.github.com/users/gongshaojie12/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "If I understand correctly, you have a (previously created) `clip-roberta` which is used to launch training.\r\nThe processor is not saved after the model is finetuned, just the model is saved.\r\nYou can copy the processor file(s) from `clip-roberta` to `clip-roberta-finetuned`.\r\n\r\nOtherwise, you can simply change\r\n\r\n```\r\nprocessor = AutoProcessor.from_pretrained(\"clip-roberta-finetuned\")\r\n```\r\nto\r\n```\r\nprocessor = AutoProcessor.from_pretrained(\"clip-roberta\")\r\n```", "Hi, @ydshieh thank you for your reply.\r\n\r\n1, When I copy the processor file(s) from clip-roberta to clip-roberta-finetuned, and the inference code remains the same. \r\nThe following error occurs:\r\n\r\n```\r\nD:\\software\\anaconda\\envs\\transformers\\python.exe D:/NLU/transformers/examples/pytorch/contrastive-image-text/predict.py\r\nTraceback (most recent call last):\r\n File \"D:\\NLU\\transformers\\examples\\pytorch\\contrastive-image-text\\predict.py\", line 6, in <module>\r\n processor = AutoProcessor.from_pretrained(\"clip-roberta-finetuned\")\r\n File \"D:\\software\\anaconda\\envs\\transformers\\lib\\site-packages\\transformers\\models\\auto\\processing_auto.py\", line 243, in from_pretrained\r\n return processor_class.from_pretrained(\r\n File \"D:\\software\\anaconda\\envs\\transformers\\lib\\site-packages\\transformers\\processing_utils.py\", line 182, in from_pretrained\r\n args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)\r\n File \"D:\\software\\anaconda\\envs\\transformers\\lib\\site-packages\\transformers\\processing_utils.py\", line 226, in _get_arguments_from_pretrained\r\n args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))\r\n File \"D:\\software\\anaconda\\envs\\transformers\\lib\\site-packages\\transformers\\models\\auto\\tokenization_auto.py\", line 607, in from_pretrained\r\n tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]\r\n File 
\"D:\\software\\anaconda\\envs\\transformers\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 573, in __getitem__\r\n raise KeyError(key)\r\nKeyError: <class 'transformers.models.vision_text_dual_encoder.configuration_vision_text_dual_encoder.VisionTextDualEncoderConfig'>\r\n```\r\n\r\n2,When I did not copy any files from `clip-roberta` to `clip-roberta-finetuned`, and changed the `processor` from `processor = AutoProcessor.from_pretrained(\"clip-roberta-finetuned\")` to `processor = AutoProcessor.from_pretrained(\"clip-roberta\")` as you asked. It works fine.", "There might be something wrong, I will take a look. But great to know it works in some way.\r\n", "@gongshaojie12 \r\n\r\nCould you check if you have copied all these files from `clip-roberta` to `clip-roberta-finetuned`:\r\n```\r\nconfig.json\r\nmerges.txt\r\npreprocessor_config.json\r\nspecial_tokens_map.json\r\ntokenizer.json\r\ntokenizer_config.json\r\nvocab.json\r\n```\r\n\r\nI don't have any issue when running `AutoProcessor.from_pretrained(\"clip-roberta-finetuned\")` when I copied all fiiles (of course, ignore the non-finetuned model file)", "Hi, @ydshieh thanks. It works fine." ]
1,660
1,660
1,660
NONE
null
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Hi, @ydshieh after I run run_clip.py, how do I load the fine-tuned model and do inference? My inference code is as follows: ``` import requests from PIL import Image from transformers import AutoModel, AutoProcessor model = AutoModel.from_pretrained("clip-roberta-finetuned") processor = AutoProcessor.from_pretrained("clip-roberta-finetuned") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) print("auto model probs:", probs) ``` The following error occurred: ``` D:\software\anaconda\envs\transformers\python.exe D:/NLU/transformers/examples/pytorch/contrastive-image-text/predict.py Traceback (most recent call last): File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\feature_extraction_utils.py", line 402, in get_feature_extractor_dict resolved_feature_extractor_file = cached_path( File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\utils\hub.py", line 300, in cached_path raise EnvironmentError(f"file {url_or_filename} not found") OSError: file clip-roberta-finetuned\preprocessor_config.json not found During handling of the above exception, another exception occurred: Traceback (most recent call last): File 
"D:\NLU\transformers\examples\pytorch\contrastive-image-text\predict.py", line 6, in <module> processor = AutoProcessor.from_pretrained("clip-roberta-finetuned") File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\processing_auto.py", line 249, in from_pretrained return PROCESSOR_MAPPING[type(config)].from_pretrained(pretrained_model_name_or_path, **kwargs) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\processing_utils.py", line 182, in from_pretrained args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\processing_utils.py", line 226, in _get_arguments_from_pretrained args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs)) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\models\auto\feature_extraction_auto.py", line 289, in from_pretrained config_dict, _ = FeatureExtractionMixin.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs) File "D:\software\anaconda\envs\transformers\lib\site-packages\transformers\feature_extraction_utils.py", line 443, in get_feature_extractor_dict raise EnvironmentError( OSError: Can't load feature extractor for 'clip-roberta-finetuned'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'clip-roberta-finetuned' is the correct path to a directory containing a preprocessor_config.json file Process finished with exit code 1 ``` ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction OSError: file clip-roberta-finetuned\preprocessor_config.json not found ### Expected behavior load and inference success
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18570/timeline
completed
null
null
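The fix the maintainer suggests in the thread above (copy the processor files from `clip-roberta` to `clip-roberta-finetuned`) can be scripted. This is a hypothetical helper, not part of `run_clip.py`; the file list is the one from the maintainer's comment, and missing files are skipped so tokenizer variants without e.g. `merges.txt` still work:

```python
import shutil
from pathlib import Path

# Processor files listed in the maintainer's comment above.
PROCESSOR_FILES = [
    "preprocessor_config.json",
    "tokenizer.json",
    "tokenizer_config.json",
    "special_tokens_map.json",
    "vocab.json",
    "merges.txt",
]

def copy_processor_files(src_dir, dst_dir):
    """Copy processor files from the base model dir (e.g. clip-roberta)
    to the finetuned dir (e.g. clip-roberta-finetuned).

    Returns the list of files actually copied; files absent in the
    source directory are silently skipped.
    """
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in PROCESSOR_FILES:
        if (src / name).is_file():
            shutil.copy2(src / name, dst / name)
            copied.append(name)
    return copied
```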
https://api.github.com/repos/huggingface/transformers/issues/18569
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18569/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18569/comments
https://api.github.com/repos/huggingface/transformers/issues/18569/events
https://github.com/huggingface/transformers/pull/18569
1,335,390,854
PR_kwDOCUB6oc48_1xM
18,569
The backticks in the example of transformers.BigBirdPegasusConfig documentation were not in the right spot…
{ "login": "brandonbiggs", "id": 34954680, "node_id": "MDQ6VXNlcjM0OTU0Njgw", "avatar_url": "https://avatars.githubusercontent.com/u/34954680?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brandonbiggs", "html_url": "https://github.com/brandonbiggs", "followers_url": "https://api.github.com/users/brandonbiggs/followers", "following_url": "https://api.github.com/users/brandonbiggs/following{/other_user}", "gists_url": "https://api.github.com/users/brandonbiggs/gists{/gist_id}", "starred_url": "https://api.github.com/users/brandonbiggs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brandonbiggs/subscriptions", "organizations_url": "https://api.github.com/users/brandonbiggs/orgs", "repos_url": "https://api.github.com/users/brandonbiggs/repos", "events_url": "https://api.github.com/users/brandonbiggs/events{/privacy}", "received_events_url": "https://api.github.com/users/brandonbiggs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18569). All of your documentation changes will be reflected on that endpoint.", "I can make that change, but when I run `make style` it changes a couple hundred files. Is that supposed to happen?", "No, it probably means you do not have the right black version (22.3.0).", "Oh yeah, I had a slightly newer version apparently. I had issues on my mac with the virtual environment. I'll get that fixed." ]
1,660
1,661
1,661
NONE
null
…, causing the documentation to be displayed in a weird way. I moved the backticks a few lines down in the documentation to ensure the documentation is formatted correctly. # What does this PR do? Fixes the python documentation example in transformers.BigBirdPegasusConfig <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Documentation: @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18569/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18569", "html_url": "https://github.com/huggingface/transformers/pull/18569", "diff_url": "https://github.com/huggingface/transformers/pull/18569.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18569.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18568
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18568/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18568/comments
https://api.github.com/repos/huggingface/transformers/issues/18568/events
https://github.com/huggingface/transformers/pull/18568
1,335,228,882
PR_kwDOCUB6oc48_UOC
18,568
Fix broken pipeline string
{ "login": "mrwyattii", "id": 18311180, "node_id": "MDQ6VXNlcjE4MzExMTgw", "avatar_url": "https://avatars.githubusercontent.com/u/18311180?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrwyattii", "html_url": "https://github.com/mrwyattii", "followers_url": "https://api.github.com/users/mrwyattii/followers", "following_url": "https://api.github.com/users/mrwyattii/following{/other_user}", "gists_url": "https://api.github.com/users/mrwyattii/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrwyattii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrwyattii/subscriptions", "organizations_url": "https://api.github.com/users/mrwyattii/orgs", "repos_url": "https://api.github.com/users/mrwyattii/repos", "events_url": "https://api.github.com/users/mrwyattii/events{/privacy}", "received_events_url": "https://api.github.com/users/mrwyattii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for fixing!" ]
1,660
1,660
1,660
CONTRIBUTOR
null
# What does this PR do? #18494 introduced a bug in a cuda device string: `RuntimeError: Invalid device string: 'cuda:{device}'` This PR adds the missing `f` before the string <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @julien-c @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18568/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18568/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18568", "html_url": "https://github.com/huggingface/transformers/pull/18568", "diff_url": "https://github.com/huggingface/transformers/pull/18568.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18568.patch", "merged_at": 1660170499000 }
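The missing-`f` bug in the PR above is a classic Python pitfall: without the `f` prefix, `'cuda:{device}'` is a literal string — the braces are never interpolated, so a device constructor sees the text `cuda:{device}` and rejects it. A minimal framework-free sketch (plain strings only; `device` is an illustrative variable, not the actual pipeline code):

```python
# Without the f prefix, braces are literal text -- no interpolation happens.
device = 0
broken = "cuda:{device}"   # stays exactly "cuda:{device}" -> invalid device string
fixed = f"cuda:{device}"   # interpolates to "cuda:0"

assert broken == "cuda:{device}"
assert fixed == "cuda:0"
```

Passing `fixed` to a device constructor succeeds, while `broken` raises the `Invalid device string` error quoted in the PR description.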
https://api.github.com/repos/huggingface/transformers/issues/18567
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18567/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18567/comments
https://api.github.com/repos/huggingface/transformers/issues/18567/events
https://github.com/huggingface/transformers/issues/18567
1,335,194,807
I_kwDOCUB6oc5PlXS3
18,567
Trainer Bug
{ "login": "rich-junwang", "id": 17483734, "node_id": "MDQ6VXNlcjE3NDgzNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/17483734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rich-junwang", "html_url": "https://github.com/rich-junwang", "followers_url": "https://api.github.com/users/rich-junwang/followers", "following_url": "https://api.github.com/users/rich-junwang/following{/other_user}", "gists_url": "https://api.github.com/users/rich-junwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/rich-junwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rich-junwang/subscriptions", "organizations_url": "https://api.github.com/users/rich-junwang/orgs", "repos_url": "https://api.github.com/users/rich-junwang/repos", "events_url": "https://api.github.com/users/rich-junwang/events{/privacy}", "received_events_url": "https://api.github.com/users/rich-junwang/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,663
1,663
NONE
null
### System Info I might be wrong, but looks this line here: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py#L176 introduces a bug. _gen_kwargs may not be an attribute for an instance. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction If we have a trainer that directly calls the function `prediction_step`, we'll have this error ### Expected behavior no error is thrown when we call `prediction_step` function
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18567/timeline
completed
null
null
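The `_gen_kwargs` issue above boils down to reading an instance attribute that is only assigned on some code paths: calling `prediction_step` directly, without the method that sets `_gen_kwargs`, raises `AttributeError`. One defensive pattern (an illustrative sketch with a hypothetical `DummyTrainer`, not the actual `Seq2SeqTrainer` code) is to read the attribute through `getattr` with a default:

```python
class DummyTrainer:
    """Illustrative stand-in: _gen_kwargs may or may not have been set."""

    def prediction_step(self):
        # getattr with a default avoids AttributeError when the attribute
        # was never assigned (e.g. prediction_step is called directly,
        # without going through the method that normally sets it).
        gen_kwargs = getattr(self, "_gen_kwargs", {})
        return gen_kwargs


t = DummyTrainer()
print(t.prediction_step())   # {} -- no AttributeError

t._gen_kwargs = {"max_length": 128}
print(t.prediction_step())   # {'max_length': 128}
```

The same call works whether or not the attribute exists, which is the behavior the issue's "Expected behavior" section asks for.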
https://api.github.com/repos/huggingface/transformers/issues/18566
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18566/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18566/comments
https://api.github.com/repos/huggingface/transformers/issues/18566/events
https://github.com/huggingface/transformers/pull/18566
1,335,126,690
PR_kwDOCUB6oc48--ld
18,566
Bump nbconvert from 6.0.1 to 6.3.0 in /examples/research_projects/visual_bert
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,660
1,660
1,660
CONTRIBUTOR
null
[//]: # (dependabot-start) ⚠️ **Dependabot is rebasing this PR** ⚠️ Rebasing might not happen immediately, so don't worry if this takes some time. Note: if you make any changes to this PR yourself, they will take precedence over the rebase. --- [//]: # (dependabot-end) Bumps [nbconvert](https://github.com/jupyter/nbconvert) from 6.0.1 to 6.3.0. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/jupyter/nbconvert/commit/cefe0bfe303e5e9e194c393cb9280c64a77b8219"><code>cefe0bf</code></a> Release 6.3.0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/a534fb901ff83e0b0c0c082ff47f3de01dc651b1"><code>a534fb9</code></a> Release 6.3.0b0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/87920c5a47c8ae99600be6c9b9b909ba440adce9"><code>87920c5</code></a> Add changelog for 6.3.0 (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1669">#1669</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/dd6d9c7d36d0a09db647a8fc993f7330388a1e48"><code>dd6d9c7</code></a> add slide numbering (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1654">#1654</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/5d2c5e2b79534c11678b73e707feb74d7827a557"><code>5d2c5e2</code></a> Update state filter (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1664">#1664</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/11ea5931f71fdaaaad8958f634132f45476bf006"><code>11ea593</code></a> fix: avoid closing the script tag early by escaping a forward slash (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1665">#1665</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/968c5fbabaf99f83d64720a1a6e90969052e978c"><code>968c5fb</code></a> Fix HTML templates mentioned in help docs (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1653">#1653</a>)</li> <li><a 
href="https://github.com/jupyter/nbconvert/commit/35c4d07eb7060b505412c0ad83886176fe8409fe"><code>35c4d07</code></a> Add a new output filter that excludes widgets if there is no state (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1643">#1643</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/c663c75339709c0e1c051d684dba0cf10fa9083e"><code>c663c75</code></a> 6.2.0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/fd1dd15b63bfd898c21c90b78165c4c00c448896"><code>fd1dd15</code></a> 6.2.0rc2</li> <li>Additional commits viewable in <a href="https://github.com/jupyter/nbconvert/compare/6.0.1...6.3.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=nbconvert&package-manager=pip&previous-version=6.0.1&new-version=6.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18566/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18566", "html_url": "https://github.com/huggingface/transformers/pull/18566", "diff_url": "https://github.com/huggingface/transformers/pull/18566.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18566.patch", "merged_at": 1660229251000 }
https://api.github.com/repos/huggingface/transformers/issues/18565
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18565/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18565/comments
https://api.github.com/repos/huggingface/transformers/issues/18565/events
https://github.com/huggingface/transformers/pull/18565
1,335,113,261
PR_kwDOCUB6oc48-7o1
18,565
Bump nbconvert from 6.0.1 to 6.3.0 in /examples/research_projects/lxmert
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,660
1,660
1,660
CONTRIBUTOR
null
Bumps [nbconvert](https://github.com/jupyter/nbconvert) from 6.0.1 to 6.3.0. <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/jupyter/nbconvert/commit/cefe0bfe303e5e9e194c393cb9280c64a77b8219"><code>cefe0bf</code></a> Release 6.3.0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/a534fb901ff83e0b0c0c082ff47f3de01dc651b1"><code>a534fb9</code></a> Release 6.3.0b0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/87920c5a47c8ae99600be6c9b9b909ba440adce9"><code>87920c5</code></a> Add changelog for 6.3.0 (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1669">#1669</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/dd6d9c7d36d0a09db647a8fc993f7330388a1e48"><code>dd6d9c7</code></a> add slide numbering (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1654">#1654</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/5d2c5e2b79534c11678b73e707feb74d7827a557"><code>5d2c5e2</code></a> Update state filter (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1664">#1664</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/11ea5931f71fdaaaad8958f634132f45476bf006"><code>11ea593</code></a> fix: avoid closing the script tag early by escaping a forward slash (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1665">#1665</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/968c5fbabaf99f83d64720a1a6e90969052e978c"><code>968c5fb</code></a> Fix HTML templates mentioned in help docs (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1653">#1653</a>)</li> <li><a href="https://github.com/jupyter/nbconvert/commit/35c4d07eb7060b505412c0ad83886176fe8409fe"><code>35c4d07</code></a> Add a new output filter that excludes widgets if there is no state (<a href="https://github-redirect.dependabot.com/jupyter/nbconvert/issues/1643">#1643</a>)</li> <li><a 
href="https://github.com/jupyter/nbconvert/commit/c663c75339709c0e1c051d684dba0cf10fa9083e"><code>c663c75</code></a> 6.2.0</li> <li><a href="https://github.com/jupyter/nbconvert/commit/fd1dd15b63bfd898c21c90b78165c4c00c448896"><code>fd1dd15</code></a> 6.2.0rc2</li> <li>Additional commits viewable in <a href="https://github.com/jupyter/nbconvert/compare/6.0.1...6.3.0">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=nbconvert&package-manager=pip&previous-version=6.0.1&new-version=6.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18565/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18565", "html_url": "https://github.com/huggingface/transformers/pull/18565", "diff_url": "https://github.com/huggingface/transformers/pull/18565.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18565.patch", "merged_at": 1660229239000 }
https://api.github.com/repos/huggingface/transformers/issues/18564
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18564/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18564/comments
https://api.github.com/repos/huggingface/transformers/issues/18564/events
https://github.com/huggingface/transformers/issues/18564
1,335,080,869
I_kwDOCUB6oc5Pk7el
18,564
Transformers Documentation translation to German (de)
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false } ]
[ "installation is WIP and will be done tomorrow in the open PR\r\nother sections will follow in future if its still to do then", "Great! Thank you @flozi00 🚀.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Will try to find time for more progress after sleeping a bit @ github-actions", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,665
1,665
CONTRIBUTOR
null
Hi! Let's bring the documentation to all the German-speaking community :) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know here if you'd like to translate any, and we'll add your name to the list. Some notes: - Please translate using a formal tone; "wir" and "sie." If possible, please reformulate the sentences to use the first person plural (wir) unless the sentence describes an action the user has to take. - Please translate in a gender-neutral way. - Add your translations to the `de` folder inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). - Register your translation in [de/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/de/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). - Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. - 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) @flozi00 - [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx). @flozi00 - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). 
## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18564/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18563
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18563/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18563/comments
https://api.github.com/repos/huggingface/transformers/issues/18563/events
https://github.com/huggingface/transformers/pull/18563
1,334,974,275
PR_kwDOCUB6oc48-d0v
18,563
Properly move cache when it is not in default path
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,660
1,660
1,660
COLLABORATOR
null
# What does this PR do? This PR respects the user env variable for the cache move when they set something different from the default path, as it's not working as expected right now (reported internally by @stas00 )
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18563/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18563/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18563", "html_url": "https://github.com/huggingface/transformers/pull/18563", "diff_url": "https://github.com/huggingface/transformers/pull/18563.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18563.patch", "merged_at": 1660160763000 }
https://api.github.com/repos/huggingface/transformers/issues/18562
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18562/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18562/comments
https://api.github.com/repos/huggingface/transformers/issues/18562/events
https://github.com/huggingface/transformers/pull/18562
1,334,964,112
PR_kwDOCUB6oc48-brN
18,562
Adds timeout argument to training_args to avoid socket timeouts in DDP
{ "login": "gugarosa", "id": 4120639, "node_id": "MDQ6VXNlcjQxMjA2Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gugarosa", "html_url": "https://github.com/gugarosa", "followers_url": "https://api.github.com/users/gugarosa/followers", "following_url": "https://api.github.com/users/gugarosa/following{/other_user}", "gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}", "starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions", "organizations_url": "https://api.github.com/users/gugarosa/orgs", "repos_url": "https://api.github.com/users/gugarosa/repos", "events_url": "https://api.github.com/users/gugarosa/events{/privacy}", "received_events_url": "https://api.github.com/users/gugarosa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey @gugarosa, thanks for your PR! I'm asking Sylvain to review it as he's the maintainer of the `Trainer`, but he's on the leave for the next few weeks. He'll review your PR when he's back!\r\n\r\nThanks for your patience :pray: ", "No worries @LysandreJik! Thanks so much for the attention!", "You just need to run `make style` and we should be good!", "> You just need to run `make style` and we should be good!\r\n\r\nMy bad! I always forget to run it. Just squashed the previous commits and added the `make style`. Hopefully, it will pass all tests in a couple minutes!\r\n\r\nThanks for all the attention on this PR!" ]
1,660
1,662
1,662
CONTRIBUTOR
null
# What does this PR do? This PR follows the work done in #18081 and adds a `timeout` argument to `TrainingArgs` to avoid Socket Timeouts when using PyTorch's `torch.distributed.init_process_group`: https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group _`timeout` argument exists since 1.0.0: https://pytorch.org/docs/1.0.0/distributed.html. This prevents any regression._ Fixes #18054 #17106 and finishes the open PR #18081. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18562/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18562", "html_url": "https://github.com/huggingface/transformers/pull/18562", "diff_url": "https://github.com/huggingface/transformers/pull/18562.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18562.patch", "merged_at": 1662042833000 }
https://api.github.com/repos/huggingface/transformers/issues/18561
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18561/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18561/comments
https://api.github.com/repos/huggingface/transformers/issues/18561/events
https://github.com/huggingface/transformers/pull/18561
1,334,875,319
PR_kwDOCUB6oc48-IjB
18,561
Fixed the error so the default matches with the comment
{ "login": "Soham-West", "id": 110839578, "node_id": "U_kgDOBptHGg", "avatar_url": "https://avatars.githubusercontent.com/u/110839578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Soham-West", "html_url": "https://github.com/Soham-West", "followers_url": "https://api.github.com/users/Soham-West/followers", "following_url": "https://api.github.com/users/Soham-West/following{/other_user}", "gists_url": "https://api.github.com/users/Soham-West/gists{/gist_id}", "starred_url": "https://api.github.com/users/Soham-West/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Soham-West/subscriptions", "organizations_url": "https://api.github.com/users/Soham-West/orgs", "repos_url": "https://api.github.com/users/Soham-West/repos", "events_url": "https://api.github.com/users/Soham-West/events{/privacy}", "received_events_url": "https://api.github.com/users/Soham-West/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @xvjiarui @NielsRogge ", "LGTM", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18561). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,664
1,663
NONE
null
Made it so that the num_output_group matches the default setting.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18561/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18561", "html_url": "https://github.com/huggingface/transformers/pull/18561", "diff_url": "https://github.com/huggingface/transformers/pull/18561.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18561.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18560
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18560/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18560/comments
https://api.github.com/repos/huggingface/transformers/issues/18560/events
https://github.com/huggingface/transformers/pull/18560
1,334,859,629
PR_kwDOCUB6oc48-FO4
18,560
raise atol for MT5OnnxConfig
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I need to check the `# Copied` issue.", "Hi, @ydshieh. Does this PR fix the two failing tests?\r\n\r\nIf not, I can work on it too.", "The test is still running. I think it should be fine :-) but will let you know once the test run is finished. Thanks." ]
1,660
1,662
1,660
COLLABORATOR
null
# What does this PR do? MT5 is newly added to ONNX tests, but currently failed with ``` AssertionError: mt5, seq2seq-lm -> Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.000148773193359375 AssertionError: mt5, seq2seq-lm-with-past -> Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.00020599365234375 ``` [failed job run](https://github.com/huggingface/transformers/runs/7718274393?check_suite_focus=true)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18560/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18560/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18560", "html_url": "https://github.com/huggingface/transformers/pull/18560", "diff_url": "https://github.com/huggingface/transformers/pull/18560.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18560.patch", "merged_at": 1660164119000 }
https://api.github.com/repos/huggingface/transformers/issues/18559
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18559/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18559/comments
https://api.github.com/repos/huggingface/transformers/issues/18559/events
https://github.com/huggingface/transformers/pull/18559
1,334,837,434
PR_kwDOCUB6oc48-BGW
18,559
[WIP]Add TF BEiT Implementation
{ "login": "MadElf1337", "id": 34575523, "node_id": "MDQ6VXNlcjM0NTc1NTIz", "avatar_url": "https://avatars.githubusercontent.com/u/34575523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MadElf1337", "html_url": "https://github.com/MadElf1337", "followers_url": "https://api.github.com/users/MadElf1337/followers", "following_url": "https://api.github.com/users/MadElf1337/following{/other_user}", "gists_url": "https://api.github.com/users/MadElf1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/MadElf1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MadElf1337/subscriptions", "organizations_url": "https://api.github.com/users/MadElf1337/orgs", "repos_url": "https://api.github.com/users/MadElf1337/repos", "events_url": "https://api.github.com/users/MadElf1337/events{/privacy}", "received_events_url": "https://api.github.com/users/MadElf1337/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@gante @amyeroberts Here's the WIP draft of BEiT!\r\n\r\nPlease tell me if I have done anything wrong, I'll make the changes right away!\r\n\r\nThanks!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18559). All of your documentation changes will be reflected on that endpoint.", "Hi @MadElf1337 - thanks for opening a PR and for adding this model! Outline looks good. \r\n\r\nAs a quick overview, I see two main things that you'll want to add (alongside docs and tests): \r\n* `# Copied from` in the TF data2vec model definition\r\n* `TFBeitForXxx` classes\r\n\r\nLooking forward to seeing the full PR and having this model available for our TF users :) ", "@amyeroberts Sure! I'll make the changes!", "@amyeroberts @gante So I think I'm done with the model, can you just look over it once while I'll finish writing the tests?", "@MadElf1337 From a quick glance, the model code looks fine 👍 As always, the devil is in the details, so you likely come across issues in the tests. Lets us know if you get stuck in a particular test (tip: `breakpoint()` + comparing to PT are your friends)\r\n\r\nWill do an in-depth review when the tests are added.", "@MadElf1337 As discussed on the issue #18085 [here](https://github.com/huggingface/transformers/issues/18085#issuecomment-1210544100) for this model, we want to copy the relevant code in data2vec to `modeling_tf_beit.py`, then add the necessary `#Copied from` statements in `modeling_tf_data2vec.py` i.e. `modeling_tf_beit.py` and modeling_tf_data2vec.py` should have the same structure and equivalent `#Copied from` statements as in `modeling_beit.py` and `modeling_data2vec.py`. Let me know if any of this isn't clear or you need any help. 
", "Yeah yeah it was clear, just wanted to see if the broad architecture was written correctly or not, once I complete the tests(I’m a bit stuck on the attention output test for tf), I’ll do the formatting, add the comments and then ask for a complete review", "If you follow the same structure as the pytorch data2vec vision and beit, including the copied from statements, then almost all of the architecture considerations will be taken care of for you, and it will be easier for us as reviewers. \r\n\r\nIf you need any help with the tests, let us know and we can try and lend a hand. ", "Yeah so as I said, I just am stuck on the seq_len part in the attention output for TF, since that is one thing which is present in data2vec but not in BEIT, So just need to figure out that test", "Hey @MadElf1337 -- we've just released a guide for TF conversions, might come handy to you :D \r\n\r\nhttps://huggingface.co/docs/transformers/main/en/add_tensorflow_model", "Yep thanks!\r\n\r\nMostly done with the tests as well, just a little hiccup that will be solved soon, else I’ll make sure to ask for help!", "@gante @amyeroberts Terribly sorry for the delay, had to deal with some personal stuff that could not be avoided.\r\n\r\nI think I'm done writing the tests and the model, can I get a review to see if I've missed anything/done anything wrong?\r\n\r\nThanks!\r\n\r\n(Also I'll add the comments of #Copied from TFData2vec in the final commit)", "@amyeroberts @gante \r\n\r\nCan I get a review please?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18559). 
All of your documentation changes will be reflected on that endpoint.", "@amyeroberts Thanks for the review!\r\n\r\n1) As suggested I've added the comments of #Copied from...(Sorry that you had to ask twice, I thought they were just comments and did not know that it was a part of the review process)\r\n\r\n2) I've also added the missing code and the torch references have been changed!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18559). All of your documentation changes will be reflected on that endpoint.", "Hi @MadElf1337 - thanks for the updates and iterating so quickly after review. \r\n\r\nThere's still a few files that need to be added for the model to be importable and fully integrated into the library. The guidelines in the document @gante details these. Here's a [recent model PR for reference](https://github.com/huggingface/transformers/pull/17826). As the overall architecture looks good, this is the next step for this PR. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18559). All of your documentation changes will be reflected on that endpoint.", "@amyeroberts @gante So I've done everything as specified in the docs(I think), can I get a review to see if I've missed anything?", "Hey @amyeroberts @gante Can I get a review please?", "@MadElf1337 Thanks for the update! \r\n\r\nThe next stage for this PR is getting all of the tests running - the fun part! The tests aren't running at the moment as the models can't be imported: \r\n\r\n```\r\nE ImportError: cannot import name 'TFBeitForImageClassification' from 'transformers' (/home/circleci/transformers/src/transformers/__init__.py)\r\n```\r\n\r\nOne thing I can see that needs to be added is included the beit models in `import_structure` in the `__init__.py` e.g. [here](https://github.com/huggingface/transformers/blob/7319850902ba9b2a44c36ccddd044f98abd9b597/src/transformers/__init__.py#L205). 
\r\n\r\nSome of the tests that are failing e.g. `check_code_quality` you can fix and/or find the issues by running `make fixup` locally. \r\n\r\nFinally, the ` # Copied from` statements should be added to the data2vec vision model in `modeling_tf_data2vec_vision.py` \r\nand the ones in `modeling_tf_beit.py` removed. \r\n`# Copied from transformers.models.beit.modeling_beit.TFBeitModelOutputWithPooling with Beit->Data2VecVision`\r\n", "@amyeroberts Thanks for the review!\r\n\r\nI can see that the original repo does not have the import structures in __init__.py, however I have added those to the init file in my dev branch, which is why it is showing a conflict for the same file", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey, can I know what to do next to solve the merge conflict?", "Hey @MadElf1337 -- You will have to rebase your PR with main :)\r\n\r\n1. Get the latest main\r\n```\r\ngit checkout main\r\ngit pull\r\n```\r\n\r\n2. Rebase\r\n```\r\ngit checkout your_branch\r\ngit rebase origin/main\r\n```\r\n\r\n3. Handle conflicts manually (i.e. keep the desired changes and remove the unwanted ones in the conflicting files, and follow the instructions that git gives you)\r\n\r\n4. Force-push your changes (force to avoid GitHub showing a diff of 666 files)\r\n```\r\ngit push -u origin your_branch -f\r\n```", "There, I think I've solved the conflict but the test errors are occurring due to errors in data2vecvision", "@MadElf1337 [Some of the failures](https://app.circleci.com/pipelines/github/huggingface/transformers/55390/workflows/2a766a55-113a-4c4e-8a4f-604926bcf9c4/jobs/668146) are because the `# Copied from` statements point to a path that doesn't exist e.g. 
\r\n`# Copied from transformers.models.data2vec.modeling_data2vec_vision.TFData2VecVisionEmbeddings with Data2VecVision->Beit` is copying the object `TFData2VecVisionEmbeddings` but is referring to the pytorch modeling file `transformers.models.data2vec.modeling_data2vec_vision`. \r\n\r\nNote: The copied from statement should be in the `modeling_tf_data2vec_vision.py` file and should copy from the beit model e.g. `# Copied from transformers.models.beit.modeling_tf_beit.TFBeitEmbeddings with Beit->Data2VecVision`. There shouldn't be any `# Copied from` comments in the BEiT modeling file `modeling_tf_beit.py`.\r\n\r\nIf you run `make fixup` locally in the repo, you'll be able to reproduce the `check_copies` and it will make \r\n the `check_code_quality` checks pass. ", "Ah I see, I will fix that right away", "So the tests run locally, but whenever I run the `make fix-copies` command it changes the docstring in data2vec from data2vec to beit, thus throwing the style change errors.\r\n\r\nHow do I go about fixing this?" ]
1,660
1,708
null
NONE
null
Porting BEiT model from PyTorch to TensorFlow backend # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #18085 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
@amyeroberts @gante @LysandreJik @NielsRogge <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18559/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18559/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18559", "html_url": "https://github.com/huggingface/transformers/pull/18559", "diff_url": "https://github.com/huggingface/transformers/pull/18559.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18559.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18558
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18558/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18558/comments
https://api.github.com/repos/huggingface/transformers/issues/18558/events
https://github.com/huggingface/transformers/issues/18558
1,334,786,109
I_kwDOCUB6oc5Pjzg9
18,558
Module 'seqeval' doesn't exist on the Hugging Face Hub either
{ "login": "datquocnguyen", "id": 2412555, "node_id": "MDQ6VXNlcjI0MTI1NTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2412555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/datquocnguyen", "html_url": "https://github.com/datquocnguyen", "followers_url": "https://api.github.com/users/datquocnguyen/followers", "following_url": "https://api.github.com/users/datquocnguyen/following{/other_user}", "gists_url": "https://api.github.com/users/datquocnguyen/gists{/gist_id}", "starred_url": "https://api.github.com/users/datquocnguyen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/datquocnguyen/subscriptions", "organizations_url": "https://api.github.com/users/datquocnguyen/orgs", "repos_url": "https://api.github.com/users/datquocnguyen/repos", "events_url": "https://api.github.com/users/datquocnguyen/events{/privacy}", "received_events_url": "https://api.github.com/users/datquocnguyen/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Now I can make `run_ner.py` running by changing the line 508 from `metric = evaluate.load(\"seqeval\")` to `metric = evaluate.load(\"/absoluate/path/to/seqeval.py\")`. But I think this is still a potential bug as the fine-tuning script had worked well before, without changing the file `run_ner.py`. ", "I meet this problem too(but for squad dataset). Seems that they have not fixed it yet.", "I ran into this problem, too. I think this problem may be caused by the version of `transformers` package.\r\n\r\nHere are my two solutions to work around it:\r\n\r\n- Install the latest version of `transformers` from source, which version is `4.32.0.dev0`, then install the corresponding dependencies through `pip install -r requirements.txt`\r\n- If you have problem install the latest version of `transformers`, then you could replace the evaluation part with the following codes:\r\n\r\n\r\n**BEFORE**\r\n```\r\nmetric = evaluate.load(\"seqeval\")\r\n\r\ndef compute_metrics(p):\r\n predictions, labels = p\r\n predictions = np.argmax(predictions, axis=2)\r\n\r\n # Remove ignored index (special tokens)\r\n true_predictions = [\r\n [label_list[p] for (p, l) in zip(prediction, label) if l != -100]\r\n for prediction, label in zip(predictions, labels)\r\n ]\r\n true_labels = [\r\n [label_list[l] for (p, l) in zip(prediction, label) if l != -100]\r\n for prediction, label in zip(predictions, labels)\r\n ]\r\n\r\n results = metric.compute(predictions=true_predictions, references=true_labels)\r\n if data_args.return_entity_level_metrics:\r\n # Unpack nested dictionaries\r\n final_results = {}\r\n for key, value in results.items():\r\n if isinstance(value, dict):\r\n for n, v in value.items():\r\n final_results[f\"{key}_{n}\"] = v\r\n else:\r\n final_results[key] = value\r\n return final_results\r\n else:\r\n return {\r\n \"precision\": results[\"overall_precision\"],\r\n \"recall\": results[\"overall_recall\"],\r\n \"f1\": results[\"overall_f1\"],\r\n \"accuracy\": 
results[\"overall_accuracy\"],\r\n }\r\n```\r\n\r\n**AFTER**\r\n```\r\nfrom seqeval.metrics import accuracy_score\r\nfrom seqeval.metrics import classification_report\r\nfrom seqeval.metrics import f1_score\r\n\r\ndef compute_metrics(p):\r\n predictions, labels = p\r\n predictions = np.argmax(predictions, axis=2)\r\n \r\n # Remove ignored index (special tokens)\r\n true_predictions = [\r\n\t [label_list[p] for (p, l) in zip(prediction, label) if l != -100]\r\n\t for prediction, label in zip(predictions, labels)\r\n ]\r\n true_labels = [\r\n\t [label_list[l] for (p, l) in zip(prediction, label) if l != -100]\r\n\t for prediction, label in zip(predictions, labels)\r\n ]\r\n results = {\r\n\t 'accuracy': accuracy_score(true_labels, true_predictions),\r\n\t 'f1': f1_score(true_labels, true_predictions),\r\n\t 'classification_report': classification_report(true_labels, true_predictions)\r\n }\r\n return results\r\n```" ]
1,660
1,689
1,660
CONTRIBUTOR
null
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.4.0-77-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.9.1+cu111 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @LysandreJik Please help with this issue. Thank you very much. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` cd /lustre/scratch/client/vinai/users/datnq9/transformers/examples/pytorch/token-classification python3 run_ner.py \ --task_name $TASK_NAME \ --model_name_or_path $BERT_MODEL \ --output_dir $OUTPUT_DIR \ --seed $SEED \ --per_device_train_batch_size $BATCH_SIZE \ --tokenizer_name $BERT_MODEL \ --num_train_epochs $NUM_EPOCHS \ --learning_rate $PEAK_LR \ --warmup_steps $WARMUP \ --train_file $TRAIN_FILE \ --validation_file $DEV_FILE \ --test_file $TEST_FILE \ --do_train \ --do_eval \ --do_predict \ --text_column_name words \ --label_column_name tags \ --evaluation_strategy epoch \ --save_strategy epoch \ --save_total_limit 3 \ --metric_for_best_model $METRIC \ --load_best_model_at_end ``` ### Expected behavior ``` Traceback (most recent call last): File "run_ner.py", line 630, in <module> main() File "run_ner.py", line 508, in main metric = evaluate.load("seqeval") File "/lustre/scratch/client/vinai/users/datnq9/miniconda3/lib/python3.7/site-packages/evaluate/loading.py", line 703, in load path, module_type=module_type, revision=revision, download_config=download_config, download_mode=download_mode File 
"/lustre/scratch/client/vinai/users/datnq9/miniconda3/lib/python3.7/site-packages/evaluate/loading.py", line 655, in evaluation_module_factory ) from None FileNotFoundError: Couldn't find a module script at /lustre/scratch/client/vinai/users/datnq9/transformers/examples/pytorch/token-classification/seqeval/seqeval.py. Module 'seqeval' doesn't exist on the Hugging Face Hub either. ``` I already installed "seqeval" as well as "evaluate" packages. Thus I am not sure why this issue happened.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18558/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18557
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18557/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18557/comments
https://api.github.com/repos/huggingface/transformers/issues/18557/events
https://github.com/huggingface/transformers/issues/18557
1,334,731,062
I_kwDOCUB6oc5PjmE2
18,557
Segformer output size
{ "login": "joihn", "id": 11663917, "node_id": "MDQ6VXNlcjExNjYzOTE3", "avatar_url": "https://avatars.githubusercontent.com/u/11663917?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joihn", "html_url": "https://github.com/joihn", "followers_url": "https://api.github.com/users/joihn/followers", "following_url": "https://api.github.com/users/joihn/following{/other_user}", "gists_url": "https://api.github.com/users/joihn/gists{/gist_id}", "starred_url": "https://api.github.com/users/joihn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joihn/subscriptions", "organizations_url": "https://api.github.com/users/joihn/orgs", "repos_url": "https://api.github.com/users/joihn/repos", "events_url": "https://api.github.com/users/joihn/events{/privacy}", "received_events_url": "https://api.github.com/users/joihn/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "cc @NielsRogge -- is the right logits shape `(batch_size, num_labels, height, width)` or `(batch_size, num_labels, height/4, width/4)`?\r\n\r\n@joihn depending on @NielsRogge answer, would you like to open a PR to fix the documentation? :) The PyTorch model has the same comments, that may need to be fixed.", "Hi,\r\n\r\nYes it should be `(batch_size, num_labels, height/4, width/4)`.", "PR merged" ]
1,660
1,660
1,660
CONTRIBUTOR
null
### System Info thanks for this repo, Segformer `output size` is `input_size/4,` as mentioned here https://github.com/huggingface/transformers/blob/main/src/transformers/models/segformer/modeling_tf_segformer.py#L780 However, this line of documentation is wrong: https://github.com/huggingface/transformers/blob/main/src/transformers/models/segformer/modeling_tf_segformer.py#L850 By the way, what would be the easiest way to augment the output size, adding upsampling layers at the end ? ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction read the code :) there is an inconsistency in the doc :) ### Expected behavior the doc. should be consistent
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18557/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18556
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18556/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18556/comments
https://api.github.com/repos/huggingface/transformers/issues/18556/events
https://github.com/huggingface/transformers/pull/18556
1,334,621,692
PR_kwDOCUB6oc489UUR
18,556
[Title]: Fix the ner example for tensorflow
{ "login": "jack-cx", "id": 6050491, "node_id": "MDQ6VXNlcjYwNTA0OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6050491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jack-cx", "html_url": "https://github.com/jack-cx", "followers_url": "https://api.github.com/users/jack-cx/followers", "following_url": "https://api.github.com/users/jack-cx/following{/other_user}", "gists_url": "https://api.github.com/users/jack-cx/gists{/gist_id}", "starred_url": "https://api.github.com/users/jack-cx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jack-cx/subscriptions", "organizations_url": "https://api.github.com/users/jack-cx/orgs", "repos_url": "https://api.github.com/users/jack-cx/repos", "events_url": "https://api.github.com/users/jack-cx/events{/privacy}", "received_events_url": "https://api.github.com/users/jack-cx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18556). All of your documentation changes will be reflected on that endpoint." ]
1,660
1,660
1,660
NONE
null
[Detail]: The MODEL_MAPPING should change to TF_MODEL_MAPPING in tensorflow platform. [To do]: None # What does this PR do? To fix the problem that ner example in tensorflow dir runs failed. the error message is : (tensorflow) ➜ token-classification git:(main) ✗ python run_ner.py \ --model_name_or_path bert-base-uncased \ --dataset_name conll2003 \ --output_dir /tmp/test-ner Traceback (most recent call last): File "/Users/qcc/OpenSource/transformers/examples/tensorflow/token-classification/run_ner.py", line 57, in <module> MODEL_CONFIG_CLASSES = list(MODEL_MAPPING.keys()) AttributeError: 'NoneType' object has no attribute 'keys' <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [*] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. 
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18556/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18556", "html_url": "https://github.com/huggingface/transformers/pull/18556", "diff_url": "https://github.com/huggingface/transformers/pull/18556.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18556.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18555
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18555/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18555/comments
https://api.github.com/repos/huggingface/transformers/issues/18555/events
https://github.com/huggingface/transformers/pull/18555
1,334,589,057
PR_kwDOCUB6oc489NPE
18,555
TensorFlow MobileViT
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Didn't realize that re-requesting a review from @gante would result in removing @amyeroberts and @sgugger from the reviewer list. Please know that it was completely unintentional. ", "@sayakpaul no worries :)", "Thanks for another great model addition @sayakpaul ! ", "@sayakpaul assuming it is passing the slow tests, it is ready for the TF weights.\r\n\r\nThe super complex instructions to do it are as follows:\r\n1. Make sure you have the latest version of the hub installed (`pip install huggingface_hub -U`) and that you are logged in to HF with a write token (`huggingface-cli login`)\r\n2. Run `transformers-cli pt-to-tf --model-name foo/bar` from this branch :D\r\n3. In the Hub PR, tag `@joaogante, @lysandre` ", "Super simple (complex?) question:\r\n\r\nWhat is the format of `foo/bar`?", "The same as the model name on the hub, e.g. [this model](https://huggingface.co/apple/mobilevit-small/tree/main) would be `apple/mobilevit-small`\r\n\r\nP.S.: I edited the comment above with a 3rd step :D", "The CLI might fail due to the conversion error being above the threshold -- let us know if that happens. There is a PR open with a flag to overwrite the error threshold.", "> The CLI might fail due to the conversion error being above the threshold -- let us know if that happens. There is a PR open with a flag to overwrite the error threshold.\r\n\r\nFailing due to this. Need that flag. ", "@sayakpaul it is now merged (https://github.com/huggingface/transformers/pull/18752). You can use `--max-error` to change the limit.\r\n\r\nThis flag should be used with care. What are the differences you're seeing?", "@gante, I think I have a clue as to why the `5e-5` threshold is being crossed during cross-loading. \r\n\r\nMobileViT model has these two components: unfolding and folding. They interpolate the intermediate feature maps. 
I checked ([Colab Notebook](https://colab.research.google.com/gist/sayakpaul/be24f152d91d0f1cbe95d5cea9ae8b14/scratchpad.ipynb)) the output consistency of `nn.functional.interpolate` and `tf.image.resize` with the same argument values. You'd notice that the outputs assert when `atol` is 1e-5, otherwise (higher `atol`) it fails. \r\n\r\nI suspect this inconsistency has a compounding effect and is the major reason the cross-loading fails with `5e-5`. \r\n\r\nI created PRs for adding the TF weights. Navigable from here: https://huggingface.co/apple. \r\n\r\nCc: @amyeroberts ", "@gante \r\n\r\n@hollance merged my PRs for the TF weights of MobileViT (thanks!). https://github.com/huggingface/transformers/pull/18555/commits/82079a74268c8b633e61064058362a0e6e53294c removes the `from_pt` argument. \r\n\r\nNothing seems to be remaining now. Up to you (or anyone having merging privileges) to take the reigns. ", "> [...] the output consistency of `nn.functional.interpolate` and `tf.image.resize` with the same argument values.\r\n\r\nThis might be due to the `align_corners` option. I once wrote a long blog post about this difference between PyTorch and TF. https://machinethink.net/blog/coreml-upsampling/ Not sure if that's the same issue but it seems likely.", "> > [...] the output consistency of `nn.functional.interpolate` and `tf.image.resize` with the same argument values.\r\n> \r\n> This might be due to the `align_corners` option. I once wrote a long blog post about this difference between PyTorch and TF. https://machinethink.net/blog/coreml-upsampling/ Not sure if that's the same issue but it seems likely.\r\n\r\nVery well! If we need to deal with the inconsistencies between `tf.image.resize` and `nn.functional.interpolate` I suggest we do that in a separate PR 'cause various vision models would benefit from that (ViT for example). 
", "@gante WDYT?", "@sayakpaul regarding the PR, all good on my end, but we still need approval from @sgugger :D\r\n\r\nAs for the `tf.image.resize` -- yeah, it would be nice to standardize for all models. Would you be interested in working on it? In any case, I'd like to ask you to open an issue, so we don't forget to track it! ", "> As for the tf.image.resize -- yeah, it would be nice to standardize for all models. Would you be interested in working on it? In any case, I'd like to ask you to open an issue, so we don't forget to track it!\r\n\r\nOn it, sir!", "@amyeroberts @gante \r\n\r\nPlease take note of the changes in https://github.com/huggingface/transformers/pull/18555/commits/32cfd30cee185a090a80a6604b850c639b04203b. \r\n\r\nInitially, when I tested TFLite conversion it didn't require any spec for [SELECT operations](https://www.tensorflow.org/lite/guide/ops_select) but now they're failing with a specification for the SELECT ops. What is more surprising is that the TFLite interpreter is treating `tf.Conv2D` to be a SELECT op. Hence I have raised https://github.com/tensorflow/tensorflow/issues/57550. ", "(retriggered failing job, seems like a spurious failure)", "Yeah probably nothing related to the PR? ", "The build doc job failure is not spurious. There seems to be a problem with an example bloc introduced by this PR.", "Let me see if removing comments from the example block does the trick. Because when the job wasn't failing the example block didn't have any comments. ", "No, it didn't help :( Any suggestions to try out? ", "> The build doc job failure is not spurious. There seems to be a problem with an example bloc introduced by this PR.\r\n\r\nMy bad :D read the failure bottom to top, so I didn't notice the `mobilevit` errors " ]
1,660
1,662
1,662
MEMBER
null
This PR implements the MobileViT model in TensorFlow. ## Interesting points * The classification and segmentation models provided with MobileViT are fully compatible with TensorFlow Lite. Therefore, I have included sample code in the model documentation showing how to perform the TensorFlow Lite conversion (~4 lines of code). * TFLlite versions of the smallest checkpoints for classification and semantic segmentation are 1MB and 2MBs, respectively. I believe this will be quite beneficial to the TinyML community. ## TODOs - [x] Hosting of the TF checkpoints on the Hub. (Can I do it now? If so, I need resources that show how to do that.) - [x] Remove `from_pt` wherever needed. @amyeroberts @gante @sgugger up for review!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18555/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18555", "html_url": "https://github.com/huggingface/transformers/pull/18555", "diff_url": "https://github.com/huggingface/transformers/pull/18555.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18555.patch", "merged_at": 1662042915000 }
https://api.github.com/repos/huggingface/transformers/issues/18554
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18554/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18554/comments
https://api.github.com/repos/huggingface/transformers/issues/18554/events
https://github.com/huggingface/transformers/issues/18554
1,334,512,527
I_kwDOCUB6oc5PiwuP
18,554
Illegal instruction: 4 error when importing TextClassificationPipeline
{ "login": "ehsong", "id": 5554659, "node_id": "MDQ6VXNlcjU1NTQ2NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5554659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ehsong", "html_url": "https://github.com/ehsong", "followers_url": "https://api.github.com/users/ehsong/followers", "following_url": "https://api.github.com/users/ehsong/following{/other_user}", "gists_url": "https://api.github.com/users/ehsong/gists{/gist_id}", "starred_url": "https://api.github.com/users/ehsong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ehsong/subscriptions", "organizations_url": "https://api.github.com/users/ehsong/orgs", "repos_url": "https://api.github.com/users/ehsong/repos", "events_url": "https://api.github.com/users/ehsong/events{/privacy}", "received_events_url": "https://api.github.com/users/ehsong/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "hi @ehsong ,\r\n\r\nAre you on Mac M1 ? Do you mind running `transformers-cli env` and printing the output here ?\r\n\r\nWhat code are you using the trigger the issue ?\r\n\r\nI googled and found this : https://stackoverflow.com/questions/14268887/what-is-the-illegal-instruction-4-error-and-why-does-mmacosx-version-min-10\r\nI can't tell you exactly what's the issue, but it seems to be the environment you're running in that's causing this.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,663
1,663
NONE
null
### System Info ``` Name: transformers Version: 4.22.0.dev0 Name: tensorflow Version: 2.5.0 Name: torch Version: 1.12.1 Python 3.9.12 ``` I keep getting the error 'Illegal instruction: 4 ' when trying to import TextClassificationPipeline from transformers, does this have anything to do with the versions of the dependencies? It is quite confusing which versions of the packages are compatible with TextClassificationPipeline, I had the same error when with tensorflow version 2.9 ### Who can help? @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction - ### Expected behavior -
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18554/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18553
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18553/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18553/comments
https://api.github.com/repos/huggingface/transformers/issues/18553/events
https://github.com/huggingface/transformers/issues/18553
1,334,300,956
I_kwDOCUB6oc5Ph9Ec
18,553
OWL-ViT outputs are offset for non-square images
{ "login": "segments-tobias", "id": 89590365, "node_id": "MDQ6VXNlcjg5NTkwMzY1", "avatar_url": "https://avatars.githubusercontent.com/u/89590365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/segments-tobias", "html_url": "https://github.com/segments-tobias", "followers_url": "https://api.github.com/users/segments-tobias/followers", "following_url": "https://api.github.com/users/segments-tobias/following{/other_user}", "gists_url": "https://api.github.com/users/segments-tobias/gists{/gist_id}", "starred_url": "https://api.github.com/users/segments-tobias/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/segments-tobias/subscriptions", "organizations_url": "https://api.github.com/users/segments-tobias/orgs", "repos_url": "https://api.github.com/users/segments-tobias/repos", "events_url": "https://api.github.com/users/segments-tobias/events{/privacy}", "received_events_url": "https://api.github.com/users/segments-tobias/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I just saw that @alaradirik acknowledged this issue in the [Community tab of the Spaces demo](https://huggingface.co/spaces/adirik/OWL-ViT/discussions/1), but I'll keep this issue open, so it's easier for others to find.", "I can also confirm that @cceyda 's finding works for me, i.e. doing\r\n```python\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ninput_image = image.resize((768, 768))\r\ninputs = processor(text=texts, images=input_image, return_tensors=\"pt\")\r\n```\r\nwhile all other code is kept the same. It's thus not a bug in the `post_process()` method.\r\n![image](https://user-images.githubusercontent.com/89590365/183862229-10f48f5d-9847-42b8-a6b8-a74d5ef603bd.png)\r\n", "Hi @segments-tobias, thank for opening the PR! @cceyda's PR fixed the demo and I confirmed that the`post_process()` method works fine. The following code prints the boundary boxes correctly:\r\n\r\n```\r\nimport cv2\r\nimport numpy as np\r\nimport torch\r\n\r\nfrom urllib.request import urlopen\r\nfrom transformers import OwlViTProcessor, OwlViTForObjectDetection\r\n\r\nprocessor = OwlViTProcessor.from_pretrained(\"google/owlvit-base-patch32\")\r\nmodel = OwlViTForObjectDetection.from_pretrained(\"google/owlvit-base-patch32\")\r\n\r\n# Download image\r\nurl = \"https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c\"\r\narray = np.asarray(bytearray(urlopen(url).read()), dtype=np.uint8)\r\nimage = cv2.cvtColor(cv2.imdecode(arr, -1), cv2.COLOR_BGR2RGB)\r\n\r\n# Text queries\r\ntexts = [[\"flag\", \"car\", \"person\", \"sidewalk\", \"bicycle\"]]\r\n\r\n# Target image sizes (height, width) to rescale box predictions [batch_size, 2]\r\ntarget_sizes = torch.Tensor([image.shape[:2]])\r\nimg_input = cv2.resize(image, (768, 768), interpolation = cv2.INTER_AREA)\r\ninputs = processor(text=texts, images=img_input, return_tensors=\"pt\")\r\n\r\nwith torch.no_grad():\r\n outputs = model(**inputs)\r\n\r\n# Convert outputs (bounding boxes and class logits) to COCO API\r\nresults 
= processor.post_process(outputs=outputs, target_sizes=target_sizes)\r\n\r\ni = 0 # Retrieve predictions for the first image for the corresponding text queries\r\ntext = texts[i]\r\nboxes, scores, labels = results[i][\"boxes\"], results[i][\"scores\"], results[i][\"labels\"]\r\n\r\nfont = cv2.FONT_HERSHEY_SIMPLEX\r\nscore_threshold = 0.05\r\n\r\nfor box, score, label in zip(boxes, scores, labels):\r\n box = [int(i) for i in box.tolist()]\r\n\r\n if score >= score_threshold:\r\n image = cv2.rectangle(image, box[:2], box[2:], (255,0,0), 5)\r\n if box[3] + 25 > 768:\r\n y = box[3] - 10\r\n else:\r\n y = box[3] + 25\r\n\r\n image = cv2.putText(\r\n image, text[label], (box[0], y), font, 1, (255,0,0), 2, cv2.LINE_AA\r\n )\r\n```\r\n\r\nI think there is an issue in `OwlViTFeatureExtractor` as omitting the manual resizing line causes unexpected outputs. I'll double check this and open a fix PR shortly.", "Great! Yes, would be great to be able to leave out the resizing line", "yes, the `OwlViTFeatureExtractor ` is already supposed to be doing resizing according to this line [here](https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/src/transformers/models/owlvit/feature_extraction_owlvit.py#L197) but it isn't working for some reason I haven't debugged.", "@segments-tobias @cceyda thank you both for your input! The issue was due to defining the size as a single value instead of a tuple (768 instead of (768, 768)) in `OwlViTFeatureExtractor`. This led to the image/s getting resized along only one dimension and getting cropped along the other dimension later on in the preprocessing pipeline.\r\n\r\nThe configuration files are updated and the `OwlViTProcessor` can correctly resize the input images now. I'll open another PR to update the default values in `OwlViTFeatureExtractor` but I'm closing this issue as it is fixed." ]
1,660
1,660
1,660
NONE
null
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.10.43.3-microsoft-standard-WSL2-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: NA ### Who can help? @alaradirik @sgugger @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using the [code snippet](https://huggingface.co/google/owlvit-base-patch32) for OWL-ViT on a large Unsplash image ([https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c](https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c)) gives an incorrect result. The bounding boxes seem offset. When cropping the image, the result is actually correct. 
```python import requests from PIL import Image import torch from transformers import OwlViTProcessor, OwlViTForObjectDetection processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") url = "https://images.unsplash.com/photo-1517448922956-1efc1c6cc09c" image = Image.open(requests.get(url, stream=True).raw) texts = [["flag", "car", "person", "sidewalk", "bicycle"]] inputs = processor(text=texts, images=image, return_tensors="pt") outputs = model(**inputs) # Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to COCO API results = processor.post_process(outputs=outputs, target_sizes=target_sizes) i = 0 # Retrieve predictions for the first image for the corresponding text queries text = texts[i] boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"] ``` This is the result: note that the yellow flag is detected, but the bounding box is offset. ![image](https://user-images.githubusercontent.com/89590365/183857750-74f624b3-d852-46db-a1ba-9da04c02600f.png) ### Expected behavior The `post_process()` method should correctly rescale the bounding boxes to the original image size. See the Spaces demo (which uses cropping), which shows the flag detection at the right position. ![image](https://user-images.githubusercontent.com/89590365/183858925-200d1c58-c851-4577-8518-47c8b79c7d88.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18553/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18552
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18552/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18552/comments
https://api.github.com/repos/huggingface/transformers/issues/18552/events
https://github.com/huggingface/transformers/issues/18552
1,334,165,201
I_kwDOCUB6oc5Phb7R
18,552
wav2vec2 : No MSELoss implementation
{ "login": "LaurenceYozi", "id": 107841252, "node_id": "U_kgDOBm2G5A", "avatar_url": "https://avatars.githubusercontent.com/u/107841252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LaurenceYozi", "html_url": "https://github.com/LaurenceYozi", "followers_url": "https://api.github.com/users/LaurenceYozi/followers", "following_url": "https://api.github.com/users/LaurenceYozi/following{/other_user}", "gists_url": "https://api.github.com/users/LaurenceYozi/gists{/gist_id}", "starred_url": "https://api.github.com/users/LaurenceYozi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LaurenceYozi/subscriptions", "organizations_url": "https://api.github.com/users/LaurenceYozi/orgs", "repos_url": "https://api.github.com/users/LaurenceYozi/repos", "events_url": "https://api.github.com/users/LaurenceYozi/events{/privacy}", "received_events_url": "https://api.github.com/users/LaurenceYozi/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @LaurenceYozi\r\n\r\nIndeed! The MSE loss is not implemented. Would you like to open a PR to add this? You can refer to BERT for the loss function/labels logic:\r\nhttps://github.com/huggingface/transformers/blob/cfd623a859890c6d106610d3c688064eadc7bd61/src/transformers/models/bert/modeling_bert.py#L1578-L1598\r\nYou should be able to copy this almost one-for-one from BERT to Wav2Vec2!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,665
1,665
NONE
null
### System Info modeling_wav2vec2.py : 1822 lines only implements CrossEntropyLoss() if labels is not None: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1)) labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction model = AutoModelForAudioClassification.from_pretrained( "facebook/wav2vec2-base", num_labels=1) ### Expected behavior Calculate MSELoss
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18552/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18551
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18551/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18551/comments
https://api.github.com/repos/huggingface/transformers/issues/18551/events
https://github.com/huggingface/transformers/pull/18551
1,333,910,647
PR_kwDOCUB6oc4869QK
18,551
PEGASUS-X
{ "login": "zphang", "id": 1668462, "node_id": "MDQ6VXNlcjE2Njg0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zphang", "html_url": "https://github.com/zphang", "followers_url": "https://api.github.com/users/zphang/followers", "following_url": "https://api.github.com/users/zphang/following{/other_user}", "gists_url": "https://api.github.com/users/zphang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zphang/subscriptions", "organizations_url": "https://api.github.com/users/zphang/orgs", "repos_url": "https://api.github.com/users/zphang/repos", "events_url": "https://api.github.com/users/zphang/events{/privacy}", "received_events_url": "https://api.github.com/users/zphang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @ArthurZucker ", "On it 🤗", "I can follow up on the rest of the feedback this weekend / early next week: most of it looks manageable.\r\n\r\nOne comment on `DimensionInfo`: I use it to capture all of the shape-related attributes I need for the various reshapes/padding: it felt cleaner/more manageable for me to keep it all in one data structure than to pass them around individually. I can expand the attributes to the full readable names as you mention above, but I think it's useful to keep the dataclass. Let me know what you think: I'm fine either way.", "> I can follow up on the rest of the feedback this weekend / early next week: most of it looks manageable.\r\n> \r\n> One comment on `DimensionInfo`: I use it to capture all of the shape-related attributes I need for the various reshapes/padding: it felt cleaner/more manageable for me to keep it all in one data structure than to pass them around individually. I can expand the attributes to the full readable names as you mention above, but I think it's useful to keep the dataclass. Let me know what you think: I'm fine either way.\r\n\r\nThanks for the quick comment! For me it's mostly the single uppercase letters that I would like to change. Ok for me to keep the class, even if we haven't done it before for models like LongT5, Longformer or BigBird. Think overall I'd prefer to not have the class at all, but ok for me to leave it if you feel stongly about it @zphang :-) \r\nJust it'd be super nice to write out the single upper-case letters", "Let me know if there is anything else I need to address!", "Let me ping Peter Liu on this. He should be able to pull and push to the Google org. I will update the paths in the PR when it is ready.", "Thanks for making the change! Test failures seem unrelated :-) Merging!", "Hi @zphang Thank you for adding this model! We have a few failing tests for this model, which could be found on [this CI job run page](https://github.com/huggingface/transformers/runs/8173676224?check_suite_focus=true). You can click [View raw logs] on the icon at the top-right corner.\r\n\r\n- One issue is the missing checkpoint `pegasus-x-base`:\r\n ```bash\r\n pegasus-x-base is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\n ```\r\n Do you know where is the correct checkpoint?\r\n\r\n- Another test failure is `test_seq_to_seq_generation`, where the model outputs \r\n ```\r\n PEGASUSX PEGASUS PEGASUS PEGASUS-X PEGASUS PEGASUS-X PEGASUS PEGASUS-X PEGASUS PEGASUS PEGASUS\r\n ```\r\n Could you check if you get the expected values `we investigate the performance` on your side, and/or (if possible) why this non-sense output occurs?\r\n\r\n- For the remaining failure `test_torchscript_output_attentions`, we will fix it on our side.\r\n\r\nThank you in advance!", "Here the PR to correct the naming: https://github.com/huggingface/transformers/pull/18896/files", "Fix in #19025", "Thanks for sharing this model @zphang!\r\nDo you intend to release the fine-tuned checkpoints? (pubmed-large, arxiv-large, govreport-large, etc)?", "The FLAX weights of the fine-tuned models can be found here\r\nhttps://github.com/google-research/pegasus/tree/main/pegasus/flax#checkpoints\r\n\r\nAnd the FLAX to HF conversion script can be found here\r\nhttps://github.com/google-research/pegasus/blob/main/pegasus/flax/checkpoint_conversion/convert_from_flax_to_hf.py\r\n\r\nI'll try to convert the models over and upload them to HF hub this week." ]
1,660
1,667
1,662
CONTRIBUTOR
null
# What does this PR do? Adds [PEGASUS-X](https://arxiv.org/abs/2208.04347) implementation. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten, @patil-suraj --- Note: The models are currently hosted on https://huggingface.co/zphang but should be transferred to the Google organization shortly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18551/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18551", "html_url": "https://github.com/huggingface/transformers/pull/18551", "diff_url": "https://github.com/huggingface/transformers/pull/18551.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18551.patch", "merged_at": 1662141243000 }
https://api.github.com/repos/huggingface/transformers/issues/18550
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18550/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18550/comments
https://api.github.com/repos/huggingface/transformers/issues/18550/events
https://github.com/huggingface/transformers/pull/18550
1,333,891,537
PR_kwDOCUB6oc4865QL
18,550
Update philosophy to include other preprocessing classes
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,660
1,660
1,660
MEMBER
null
This PR removes the emphasis on NLP and focuses more on `transformers` being designed for all modalities.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18550/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18550", "html_url": "https://github.com/huggingface/transformers/pull/18550", "diff_url": "https://github.com/huggingface/transformers/pull/18550.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18550.patch", "merged_at": 1660155640000 }
https://api.github.com/repos/huggingface/transformers/issues/18549
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18549/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18549/comments
https://api.github.com/repos/huggingface/transformers/issues/18549/events
https://github.com/huggingface/transformers/issues/18549
1,333,784,900
I_kwDOCUB6oc5Pf_FE
18,549
fail to import import transformers.trainer due to libssl.so.10: cannot open shared object file: No such file or directory
{ "login": "xxiexuezhi", "id": 23486817, "node_id": "MDQ6VXNlcjIzNDg2ODE3", "avatar_url": "https://avatars.githubusercontent.com/u/23486817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xxiexuezhi", "html_url": "https://github.com/xxiexuezhi", "followers_url": "https://api.github.com/users/xxiexuezhi/followers", "following_url": "https://api.github.com/users/xxiexuezhi/following{/other_user}", "gists_url": "https://api.github.com/users/xxiexuezhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/xxiexuezhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xxiexuezhi/subscriptions", "organizations_url": "https://api.github.com/users/xxiexuezhi/orgs", "repos_url": "https://api.github.com/users/xxiexuezhi/repos", "events_url": "https://api.github.com/users/xxiexuezhi/events{/privacy}", "received_events_url": "https://api.github.com/users/xxiexuezhi/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I am not exacly sure how solve it. But bascially I tried all combination from pip and conda and somehow it works.", "I don't think this should be closed as I'm getting the same error on `continuumio/anaconda3` after `conda install -c huggingface transformers` but `pip install transformers` did work.", "Got the same error with `conda install -c huggingface transfformers`. And thank you @Utopiah , the pip works.", "I am getting the following error with pip,\r\n`RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):\r\ncannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)`", "> I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)`\r\n\r\nI worked when I did `pip uninstall tokenizers` and `pip install transformers`", "Got the same error with conda install -c huggingface transfformers. ", "I was able to solve this by uninstalling torch", "> > I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)`\r\n> \r\n> I worked when I did `pip uninstall tokenizers` and `pip install transformers`\r\n\r\nWorked for me", "> > I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)`\r\n> \r\n> I worked when I did `pip uninstall tokenizers` and `pip install transformers`\r\n\r\nThese two commands worked for me with the error 'libssl.so.10: cannot open shared object file'. ", "Same error here on miniconda/linux(ubunut) when I run conda install. As others have said, the error goes away with : `pip uninstall tokenizers ` and ` pip install transformers`.", "> > I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)`\r\n> \r\n> I worked when I did `pip uninstall tokenizers` and `pip install transformers`\r\n\r\nworked for me", "> > I am getting the following error with pip, `RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): cannot import name 'BertTokenizerFast' from 'transformers.models.bert' (/home/pranav.mac/anaconda3/lib/python3.9/site-packages/transformers/models/bert/__init__.py)`\r\n> \r\n> I worked when I did `pip uninstall tokenizers` and `pip install transformers`\r\n\r\nThis solves libssl problem altogether. GREEEAAAAT!", "using pip installation from here : https://huggingface.co/docs/diffusers/installation\r\nworked for me." ]
1,660
1,705
1,660
NONE
null
### System Info Traceback (most recent call last): File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1002, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 843, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/trainer.py", line 66, in <module> from .data.data_collator import DataCollator, DataCollatorWithPadding, default_data_collator File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/data/__init__.py", line 19, in <module> from .data_collator import ( File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/data/data_collator.py", line 21, in <module> from ..models.bert import BertTokenizer, BertTokenizerFast File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module> from . import ( File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/models/mt5/__init__.py", line 40, in <module> from ..t5.tokenization_t5_fast import T5TokenizerFast File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 23, in <module> from ...tokenization_utils_fast import PreTrainedTokenizerFast File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 24, in <module> import tokenizers.pre_tokenizers as pre_tokenizers_fast File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/tokenizers/__init__.py", line 79, in <module> from .tokenizers import ( ImportError: libssl.so.10: cannot open shared object file: No such file or directory The above exception was the direct cause of the following exception: Traceback (most recent call last): File "test.py", line 3, in <module> from transformers import Trainer, TrainingArguments, EvalPrediction File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 992, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/xxie92/anaconda3/envs/sema/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1004, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback): libssl.so.10: cannot open shared object file: No such file or directory ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Expected behavior I installed transformers following the official online website. steps: I create a new env using conda. And install it using conda. it give this error. i also tried from pip to install. the same error appear. From the error message, it seems tokenizer package may be the problem. But I am not sure how to solve it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18549/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18549/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18548
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18548/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18548/comments
https://api.github.com/repos/huggingface/transformers/issues/18548/events
https://github.com/huggingface/transformers/pull/18548
1,333,779,735
PR_kwDOCUB6oc486hDw
18,548
Update documentation build section
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,660
1,660
1,660
MEMBER
null
This PR updates the `build_doc` with the `build_pr_documentation` job and how to see where things went wrong if the job fails.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18548/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18548/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18548", "html_url": "https://github.com/huggingface/transformers/pull/18548", "diff_url": "https://github.com/huggingface/transformers/pull/18548.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18548.patch", "merged_at": 1660087375000 }
https://api.github.com/repos/huggingface/transformers/issues/18547
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18547/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18547/comments
https://api.github.com/repos/huggingface/transformers/issues/18547/events
https://github.com/huggingface/transformers/pull/18547
1,333,511,548
PR_kwDOCUB6oc485oiT
18,547
Fix memory leak issue in `torch_fx` tests
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "- without new process\r\n - 2~3 minutes for 100 runs\r\n - 15 MB leak per run\r\n\r\n- with `fork`\r\n - 5 minutes for 100 runs\r\n - 1 MB leak per run\r\n - hangs if `MKL_NUM_THREADS` > 1\r\n\r\n- with `spawn`\r\n - 30 minutes for 100 runs\r\n - 1 MB leak per run", "When , using the new process approach, in some cases, setting `ulimit -n 2048` is necessary.\r\n(For example, running the same test with a loop)\r\n\r\nOtherwise, we might get the following error:\r\n```bash\r\ntests/models/bart/test_modeling_bart.py::BartModelTest::test_torch_fx Traceback (most recent call last):\r\n File \"/usr/lib/python3.9/multiprocessing/queues.py\", line 245, in _feed\r\n File \"/usr/lib/python3.9/multiprocessing/reduction.py\", line 51, in dumps\r\n File \"/home/yih_dar_huggingface_co/.local/lib/python3.9/site-packages/torch/multiprocessing/reductions.py\", line 358, in reduce_storage\r\nRuntimeError: unable to open shared memory object </torch_46201_690006289_939> in read-write mode: Too many open files (24)\r\n```\r\n\r\nMore details:\r\n```bash\r\n> ???\r\n\r\ntests/test_modeling_common.py:769: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_modeling_common.py:866: in _create_and_check_torch_fx_tracing\r\n ???\r\n/usr/lib/python3.9/multiprocessing/process.py:121: in start\r\n ???\r\n/usr/lib/python3.9/multiprocessing/context.py:277: in _Popen\r\n ???\r\n/usr/lib/python3.9/multiprocessing/popen_fork.py:19: in __init__\r\n ???\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <multiprocessing.popen_fork.Popen object at 0x7fa12a499820>, process_obj = <ForkProcess name='ForkProcess-10' parent=46201 initial>\r\n\r\n> ???\r\nE OSError: [Errno 24] Too many open files\r\n\r\n/usr/lib/python3.9/multiprocessing/popen_fork.py:64: OSError\r\n```\r\n\r\nThis seems to relate to torch multiprocessing: https://discuss.pytorch.org/t/runtimeerror-unable-to-open-shared-memory-object-depending-on-the-model/116090\r\n\r\nAnother related issue (not torch): https://github.com/lava-nc/lava/issues/71", "With GPU, we have to use `spawn`, otherwise\r\n\r\n```\r\nProcess ForkProcess-1:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/usr/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/transformers/tests/test_modeling_common.py\", line 143, in _run_torch_jit\r\n model, input_names, filtered_inputs = in_queue.get(timeout=30)\r\n File \"/usr/lib/python3.8/multiprocessing/queues.py\", line 116, in get\r\n return _ForkingPickler.loads(res)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/reductions.py\", line 112, in rebuild_cuda_tensor\r\n torch.cuda._lazy_init()\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/cuda/__init__.py\", line 207, in _lazy_init\r\n raise RuntimeError(\r\nRuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method\r\n```", "I think it's safe to only run those tests on CPU.\r\nAlso, when running locally it takes ~ 1 min (althought I agree my machine might be more powerful).", "@michaelbenayoun \r\n \r\nI move (almost) the whole testing logic to the child process. On more advantage here is to create the model in the child process, so we don't need to pass it between the process.\r\n\r\nNow running 100 times, we have only (per run) `0.15 MB increase of memory usage`.", "@michaelbenayoun You are right, some model overwrites `_create_and_check_torch_fx_tracing`. This won't fail this PR however: those models will just run the `test_torch_fx*` tests in the current manner (i.e. not in the child process). I will take a look if those overwritting are necessary. In any case, we can merge this PR as it is (if you are happy with it), and I will work on those models later.", "I think it's okay now with the changes you've made!", "> I think it's okay now with the changes you've made!\r\n\r\nWould love to have a approval from you, @michaelbenayoun.\r\nBut no need to rush - as long as you finally happy with the change and click the button.", "ready for @sgugger and/or @LysandreJik to have a final check 🚀 ", "I will merge this afternoon, after adding a short command in `_create_and_check_torch_fx_tracing` explaining why we need this change, with a link to #18525", "Hi @michaelbenayoun, I just saw that I fixed a similar issue a few months ago\r\n\r\nhttps://github.com/huggingface/transformers/blob/fbf382c84da4506484a23e85bd8540da5192ff4e/tests/test_modeling_common.py#L719\r\n\r\n(for `_create_and_check_torchscript`). I am going to change this PR to simply apply that fix. Is it OK for you?", "Changed the PR to simply call `clear_torch_jit_class_registry`. Test failure is irrelevant to this PR - merge now." ]
1,660
1,661
1,661
COLLABORATOR
null
# What does this PR do? ~~**Question**: On GPU VMs, we have to use `spawn`, see [here](https://github.com/huggingface/transformers/pull/18547#issuecomment-1210375716). However, it still hangs with `spawn` (I can't figure out this yet). Should we have 2 branches: one using new process for CPU VM (on CircleCI), and another one using the original approach (no new process - for GPU VM, like on scheduled CI)?~~ **I might have a solution!** --> send the model to the child process in CPU and send to CUDA device there. ~I am going to try `torch.multiprocessing` first.~ not working neither ---- Run torch_fx tests in a spawn process to avoid [memory issue](https://github.com/huggingface/transformers/issues/18525#issue-1331914135). - See [this comment](https://github.com/huggingface/transformers/pull/18547#issuecomment-1210260525) for the effect - The reason to use `JoinableQueue` instead of `Queue` for the outputs: https://discuss.pytorch.org/t/using-torch-tensor-over-multiprocessing-queue-process-fails/2847
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18547/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18547", "html_url": "https://github.com/huggingface/transformers/pull/18547", "diff_url": "https://github.com/huggingface/transformers/pull/18547.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18547.patch", "merged_at": 1661766200000 }
https://api.github.com/repos/huggingface/transformers/issues/18546
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18546/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18546/comments
https://api.github.com/repos/huggingface/transformers/issues/18546/events
https://github.com/huggingface/transformers/pull/18546
1,333,469,507
PR_kwDOCUB6oc485ffj
18,546
TF: XLA-trainable DeBERTa v2
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@gante I think it's better to replace \r\n\r\n```python\r\nflat_x = tf.reshape(x, (-1, x.shape[-1]))\r\nflat_indices = tf.reshape(indices, (-1, indices.shape[-1]))\r\ngathered = tf.gather(flat_x, flat_indices, batch_dims=1)\r\ngathered = tf.reshape(gathered, shape_list(indices))\r\n```\r\nwith \r\n```python\r\ngathered = tf.gather(x,indices,batch_dims=2)\r\n```\r\nwhich gives the same numerical results and the same performance according to my tests\r\n\r\nhttps://github.com/huggingface/transformers/issues/18239#issuecomment-1193126061", "@WissamAntoun thank you for pointing it out, I completely missed it in the original thread! 🙏 Will make the change\r\n\r\nEDIT: this change also makes it ~10% faster 👍 " ]
1,660
1,660
1,660
MEMBER
null
# What does this PR do? As discussed in https://github.com/huggingface/transformers/issues/18476 and https://github.com/huggingface/transformers/issues/18239, there are two problems while training DeBERTa v2 with TensorFlow: 1. `TFDebertaV2StableDropout` doesn't work at training time (actually, its logic is only triggered at training time, so it doesn't work at all :D) 2. TF complains about unknown shapes in `take_along_axis` (forward and backward passes, when the batch dim is `None`) This PR fixes both problems above :) Problem 1 has a straightforward fix. The gradient propagation code didn't have the right gradient shapes -- this PR simplifies and fixes it by moving all functions inside the special dropout class (compare to the original PT implementation [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L173) -- also notice how much more elegant TF's code is ;)). Problem 2 is trickier. The exception gets fixed with the addition of a `shape_list`, but the code is super slow on TPU. This PR adds an if/else pair of branches, one that is efficient on TPU, the other on GPU :) _____________________________________________________ ⚠️ These exceptions were not caught because deberta v2 and v3 rely on special config options -- e.g. https://huggingface.co/microsoft/deberta-v3-base/blob/main/config.json#L14 How can we ensure we properly test these configurations?
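The `take_along_axis` rework discussed in the review comments boils down to replacing a flatten-gather-reshape pattern with a single batched gather. A NumPy sketch (with `np.take_along_axis` standing in for the TF ops; shapes here are arbitrary) shows the two forms agree:

```python
import numpy as np

# x: e.g. (batch*heads, seq, seq) attention scores; indices: positions per row.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5, 5))
indices = rng.integers(0, 5, size=(2, 5, 3))

# Original pattern: flatten the leading dims, gather with one batch dim, reshape.
flat_x = x.reshape(-1, x.shape[-1])
flat_indices = indices.reshape(-1, indices.shape[-1])
gathered = np.take_along_axis(flat_x, flat_indices, axis=-1).reshape(indices.shape)

# Suggested replacement: one gather with two batch dims, as in
# tf.gather(x, indices, batch_dims=2).
gathered_direct = np.take_along_axis(x, indices, axis=-1)

assert np.allclose(gathered, gathered_direct)
```

Both produce `gathered[b, s, k] == x[b, s, indices[b, s, k]]`; the direct form just skips the intermediate reshapes.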
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18546/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18546", "html_url": "https://github.com/huggingface/transformers/pull/18546", "diff_url": "https://github.com/huggingface/transformers/pull/18546.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18546.patch", "merged_at": 1660132641000 }
https://api.github.com/repos/huggingface/transformers/issues/18545
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18545/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18545/comments
https://api.github.com/repos/huggingface/transformers/issues/18545/events
https://github.com/huggingface/transformers/pull/18545
1,333,406,463
PR_kwDOCUB6oc485See
18,545
Preserve hub-related kwargs in AutoModel.from_pretrained
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,660
1,660
1,660
COLLABORATOR
null
# What does this PR do? As was reported in #18537, when using `AutoConfig` inside the `AutoModel.from_pretrained` method, some kwargs are deleted and not passed to the `from_pretrained` method of the model. This PR makes sure they are preserved for those calls. Fixes #18537
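A minimal, hypothetical sketch of the bug pattern (these helpers are illustrative stand-ins, not the actual `transformers` internals): popping hub kwargs out of the shared dict starves the later model load, while handing the config step its own copy preserves them.

```python
def load_config(kwargs):
    # Config resolution consumes hub-related kwargs such as "revision".
    kwargs.pop("revision", None)
    return {"config": "..."}

def from_pretrained_buggy(**kwargs):
    load_config(kwargs)          # mutates the caller's kwargs in place
    return kwargs                # "revision" is gone for the model load

def from_pretrained_fixed(**kwargs):
    load_config(dict(kwargs))    # hand the config step a copy
    return kwargs                # hub kwargs preserved for the model load

assert "revision" not in from_pretrained_buggy(revision="main")
assert from_pretrained_fixed(revision="main") == {"revision": "main"}
```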
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18545/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18545", "html_url": "https://github.com/huggingface/transformers/pull/18545", "diff_url": "https://github.com/huggingface/transformers/pull/18545.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18545.patch", "merged_at": 1660132819000 }
https://api.github.com/repos/huggingface/transformers/issues/18544
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18544/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18544/comments
https://api.github.com/repos/huggingface/transformers/issues/18544/events
https://github.com/huggingface/transformers/pull/18544
1,333,306,248
PR_kwDOCUB6oc48488C
18,544
german docs translation
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think it's preferable to use the more formal option, but in most cases, I'd prefer to reformulate the sentences to use the first person plural (wir) unless the sentence actually describes an action the user has to take. We're using the same style for the French translation which also has two pronouns for \"you\".", "okay, then i will rewrite to \"wir\" and \"sie\"\r\n\r\nAt the moment its still mixed with du und sie", "Let me know when you're done and thanks a lot for diving into German translation! (Sorry should have begun with that!)", "Thank you, @flozi00, for starting the German translation! 🤗 We created a new issue to track German translations (#18564). \r\n\r\n@sgugger LGTM once the translation is done. ", "ready to review @sgugger " ]
1,660
1,660
1,660
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger i am not sure about using "du / ihr" or "sie". "Du" is more personal, while "sie" is more formal
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18544/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18544", "html_url": "https://github.com/huggingface/transformers/pull/18544", "diff_url": "https://github.com/huggingface/transformers/pull/18544.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18544.patch", "merged_at": 1660225947000 }
https://api.github.com/repos/huggingface/transformers/issues/18543
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18543/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18543/comments
https://api.github.com/repos/huggingface/transformers/issues/18543/events
https://github.com/huggingface/transformers/issues/18543
1,333,093,537
I_kwDOCUB6oc5PdWSh
18,543
Typo in configuration
{ "login": "ariG23498", "id": 36856589, "node_id": "MDQ6VXNlcjM2ODU2NTg5", "avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ariG23498", "html_url": "https://github.com/ariG23498", "followers_url": "https://api.github.com/users/ariG23498/followers", "following_url": "https://api.github.com/users/ariG23498/following{/other_user}", "gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}", "starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions", "organizations_url": "https://api.github.com/users/ariG23498/orgs", "repos_url": "https://api.github.com/users/ariG23498/repos", "events_url": "https://api.github.com/users/ariG23498/events{/privacy}", "received_events_url": "https://api.github.com/users/ariG23498/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nthanks for spotting, feel free to fix it in #18020 ", "Will do!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,663
1,663
CONTRIBUTOR
null
Hey @NielsRogge I found an inconsistency in the documentation and the code for configuration of GroupViT. The default for `num_output_groups` is `[64, 8, 8]` (notice the last element in the list), while that documented is `[64, 8, 0]`. It would be great if we could make the two consistent. https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/groupvit/configuration_groupvit.py#L158 https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/groupvit/configuration_groupvit.py#L204
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18543/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18542
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18542/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18542/comments
https://api.github.com/repos/huggingface/transformers/issues/18542/events
https://github.com/huggingface/transformers/pull/18542
1,333,036,490
PR_kwDOCUB6oc484Ce0
18,542
[Test] Fix redirected links issue
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This should be fixed upstream (there's an open issue IIRC)", "Okay I see! I think that you are referring to this issue: https://github.com/huggingface/transformers/issues/17582 posting it here for visibility!\r\nI can't see any related PR to this issue for now, maybe it is hidden in another PR? \r\n\r\nEDIT: it will be fixed once `transformers` uses `huggingface_hub` behind the scenes for loading the models", "It should be fixed on the Hugging Face Hub side at this stage (the issue reported incorrectly that it works for `huggingface_hub` tools but it does not), there is nothing left to do in Transformers.", "note that in the meantime you can always opt to re-rename your repos if it's a big issue" ]
1,660
1,660
1,660
CONTRIBUTOR
null
# What does this PR do? This PR tries to address the issue of loading a model when the original link is redirected. This happened for BLOOM models, where the repo ids have been changed but the code does not take redirected links into account. I am not sure how to properly test that this does not break anything, so I am putting this up as a test PR; feel free to ignore it. Now loading BLOOM models with the old naming works ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "bigscience/bloom-350m" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ``` Also, this could probably be done at the `huggingface_hub` level, but I am not sure. Related to #18531
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18542/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18542", "html_url": "https://github.com/huggingface/transformers/pull/18542", "diff_url": "https://github.com/huggingface/transformers/pull/18542.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18542.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18541
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18541/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18541/comments
https://api.github.com/repos/huggingface/transformers/issues/18541/events
https://github.com/huggingface/transformers/pull/18541
1,332,997,048
PR_kwDOCUB6oc4836A1
18,541
Minor update of `run_call_with_unpacked_inputs`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for making the change 👍 " ]
1,660
1,662
1,660
COLLABORATOR
null
# What does this PR do? Use `type(self).__name__` instead of `str(self).lower()`. This is a follow-up of [this comment](https://github.com/huggingface/transformers/pull/18097#discussion_r926907848) by @gante.
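A toy illustration of why the change is more robust (the class name below is made up): `str(self)` goes through the object's repr, which for plain objects embeds the module path and memory address, so substring checks against it are brittle; `type(self).__name__` is exactly the class name, with no lowercasing needed.

```python
class TFBertMainLayer:
    pass

layer = TFBertMainLayer()

# Fragile: the lowered repr happens to contain the class name, but also
# module noise and a memory address.
assert "tfbertmainlayer" in str(layer).lower()

# Robust: the class name itself, verbatim.
assert type(layer).__name__ == "TFBertMainLayer"
```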
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18541/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18541", "html_url": "https://github.com/huggingface/transformers/pull/18541", "diff_url": "https://github.com/huggingface/transformers/pull/18541.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18541.patch", "merged_at": 1660048421000 }
https://api.github.com/repos/huggingface/transformers/issues/18540
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18540/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18540/comments
https://api.github.com/repos/huggingface/transformers/issues/18540/events
https://github.com/huggingface/transformers/pull/18540
1,332,911,923
PR_kwDOCUB6oc483nrS
18,540
BART - Fix attention mask device issue on copied models
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I guess the reason we did not see it yet for other models using the same attention mask pre-processing function is that those models do not support `device_map=auto` yet (tried it with PegasusForCausalLM only) ", "BART slow tests are passing! Merging now" ]
1,660
1,660
1,660
CONTRIBUTOR
null
# What does this PR do? This PR fixes a small issue when combining `device_map=auto` and OPT. When running the script below (tested it on my VM + Google Colab) (`pip install accelerate && pip install transformers`) ``` from transformers import AutoModelForCausalLM, AutoTokenizer MAX_NEW_TOKENS = 128 model_name = "facebook/opt-2.7b" text = "Hello my name is" tokenizer = AutoTokenizer.from_pretrained(model_name) input_ids = tokenizer(text, return_tensors="pt").input_ids model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto') generated_ids = model.generate(input_ids, max_length=MAX_NEW_TOKENS) print(model.hf_device_map) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` We are getting: ``` 8 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py](https://localhost:8080/#) in _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length) 533 expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) 534 combined_attention_mask = ( --> 535 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask 536 ) 537 RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! ``` This is because `_expand_mask` creates the mask on the CPU, whereas `combined_attention_mask` is always created on the same device as `inputs_embeds`. This PR fixes this issue. Thanks @ArthurZucker! cc @sgugger All OPT slow tests are passing with this fix!
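The fix amounts to materializing the expanded mask on `inputs_embeds.device` instead of letting it default to CPU. A torch-free toy model of the situation (`ToyTensor` is a stand-in for `torch.Tensor`; only the `.device` bookkeeping matters here) reproduces the mismatch and the fix:

```python
class ToyTensor:
    def __init__(self, device):
        self.device = device

    def __add__(self, other):
        # Mimics torch's behavior when combining tensors on different devices.
        if self.device != other.device:
            raise RuntimeError(
                "Expected all tensors to be on the same device, "
                f"but found {self.device} and {other.device}!"
            )
        return ToyTensor(self.device)

def expand_mask_buggy(attention_mask):
    return ToyTensor("cpu")            # always materialized on CPU

def expand_mask_fixed(attention_mask, device):
    return ToyTensor(device)           # follow inputs_embeds.device

inputs_embeds = ToyTensor("cuda:0")
combined = ToyTensor(inputs_embeds.device)   # causal mask, already on cuda:0

try:
    combined + expand_mask_buggy(None)       # reproduces the reported crash
    raised = False
except RuntimeError:
    raised = True
assert raised

result = combined + expand_mask_fixed(None, inputs_embeds.device)
assert result.device == "cuda:0"
```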
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18540/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18540/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18540", "html_url": "https://github.com/huggingface/transformers/pull/18540", "diff_url": "https://github.com/huggingface/transformers/pull/18540.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18540.patch", "merged_at": 1660049238000 }
https://api.github.com/repos/huggingface/transformers/issues/18539
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18539/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18539/comments
https://api.github.com/repos/huggingface/transformers/issues/18539/events
https://github.com/huggingface/transformers/issues/18539
1,332,784,120
I_kwDOCUB6oc5PcKv4
18,539
Thoughts on updating package metadata
{ "login": "ofek", "id": 9677399, "node_id": "MDQ6VXNlcjk2NzczOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/9677399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ofek", "html_url": "https://github.com/ofek", "followers_url": "https://api.github.com/users/ofek/followers", "following_url": "https://api.github.com/users/ofek/following{/other_user}", "gists_url": "https://api.github.com/users/ofek/gists{/gist_id}", "starred_url": "https://api.github.com/users/ofek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ofek/subscriptions", "organizations_url": "https://api.github.com/users/ofek/orgs", "repos_url": "https://api.github.com/users/ofek/repos", "events_url": "https://api.github.com/users/ofek/events{/privacy}", "received_events_url": "https://api.github.com/users/ofek/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,663
1,663
NONE
null
### Feature request Switch to modern Python packaging standards. ### Motivation The Python packaging ecosystem has standardized on the interface for build backends ([PEP 517](https://peps.python.org/pep-0517/)/[PEP 660](https://peps.python.org/pep-0660/)) and the format for metadata declaration ([PEP 621](https://peps.python.org/pep-0621/)/[PEP 631](https://peps.python.org/pep-0631/)). As a result, the execution of `setup.py` files is now [deprecated](https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html). So, I'm spending my free time updating important projects so that they are modernized and set an example for others 😄 ### Your contribution I'll open a PR to show what that would look like.
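For illustration, static PEP 621 metadata in `pyproject.toml` might look like the fragment below. All names, versions, and the choice of build backend are placeholders, not a proposal for this project's actual metadata:

```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "example-package"                 # placeholder
version = "0.0.1"
description = "Example of PEP 621 static metadata"
requires-python = ">=3.7"
dependencies = [
    "numpy>=1.17",                       # placeholder dependency
]
```

With this in place, build frontends read metadata declaratively via PEP 517 hooks rather than executing a `setup.py`.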
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18539/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18539/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18538
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18538/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18538/comments
https://api.github.com/repos/huggingface/transformers/issues/18538/events
https://github.com/huggingface/transformers/issues/18538
1,332,721,088
I_kwDOCUB6oc5Pb7XA
18,538
AttributeError: 'LayoutLMForTokenClassification' object has no attribute 'config'
{ "login": "blueprintparadise", "id": 29310954, "node_id": "MDQ6VXNlcjI5MzEwOTU0", "avatar_url": "https://avatars.githubusercontent.com/u/29310954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blueprintparadise", "html_url": "https://github.com/blueprintparadise", "followers_url": "https://api.github.com/users/blueprintparadise/followers", "following_url": "https://api.github.com/users/blueprintparadise/following{/other_user}", "gists_url": "https://api.github.com/users/blueprintparadise/gists{/gist_id}", "starred_url": "https://api.github.com/users/blueprintparadise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blueprintparadise/subscriptions", "organizations_url": "https://api.github.com/users/blueprintparadise/orgs", "repos_url": "https://api.github.com/users/blueprintparadise/repos", "events_url": "https://api.github.com/users/blueprintparadise/events{/privacy}", "received_events_url": "https://api.github.com/users/blueprintparadise/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi,\r\n\r\nThis question seems better suited for our [forum](https://discuss.huggingface.co/). Would you be able to post your question there?\r\n\r\nThanks!", "ok sir" ]
1,660
1,660
1,660
NONE
null
### System Info

Adding image embeddings to LayoutLM makes the model unconvertible.

After following https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Add_image_embeddings_to_LayoutLM.ipynb I wanted to convert the .pt model to ONNX. The issue is that the changes made in the notebook do not allow the model conversion to work.

New model:

```python
import torch.nn as nn
from transformers.models.layoutlm import LayoutLMModel, LayoutLMConfig
from transformers.modeling_outputs import TokenClassifierOutput
import torchvision
from torchvision.ops import RoIAlign


class LayoutLMForTokenClassification(nn.Module):
    def __init__(self, output_size=(3, 3), spatial_scale=14 / 224, sampling_ratio=2):
        super().__init__()

        # LayoutLM base model + token classifier
        self.num_labels = len(label2idx)
        self.layoutlm = LayoutLMModel.from_pretrained(
            "microsoft/layoutlm-base-uncased", num_labels=self.num_labels
        )
        self.dropout = nn.Dropout(self.layoutlm.config.hidden_dropout_prob)
        self.classifier = nn.Linear(self.layoutlm.config.hidden_size, self.num_labels)

        # backbone + roi-align + projection layer
        model = torchvision.models.resnet101(pretrained=True)
        self.backbone = nn.Sequential(*(list(model.children())[:-3]))
        self.roi_align = RoIAlign(output_size, spatial_scale=spatial_scale, sampling_ratio=sampling_ratio)
        self.projection = nn.Linear(in_features=1024 * 3 * 3, out_features=self.layoutlm.config.hidden_size)

    def forward(
        self,
        input_ids,
        bbox,
        attention_mask,
        token_type_ids,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        labels=None,
        resized_images=None,  # shape (N, C, H, W), with H = W = 224
        resized_and_aligned_bounding_boxes=None,  # single torch tensor that also contains the batch index for every bbox at image size 224
        output_attentions=None,
        output_hidden_states=None,
        return_dict=None,
    ):
        r"""
        labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
            Labels for computing the token classification loss. Indices should be in
            ``[0, ..., config.num_labels - 1]``.
        """
        return_dict = return_dict if return_dict is not None else self.layoutlm.config.use_return_dict

        # first, forward pass on LayoutLM
        outputs = self.layoutlm(
            input_ids=input_ids,
            bbox=bbox,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        sequence_output = outputs[0]

        # next, send resized images of shape (batch_size, 3, 224, 224) through backbone
        # to get feature maps of images, shape (batch_size, 1024, 14, 14)
        feature_maps = self.backbone(resized_images)

        # next, use roi align to get feature maps of individual (resized and aligned)
        # bounding boxes, shape (batch_size*seq_len, 1024, 3, 3)
        device = input_ids.device
        resized_bounding_boxes_list = []
        for i in resized_and_aligned_bounding_boxes:
            resized_bounding_boxes_list.append(i.float().to(device))
        # we pass in a list of tensors
        # We have also added -0.5 for the first two coordinates and +0.5 for the last two coordinates,
        # see https://stackoverflow.com/questions/60060016/why-does-roi-align-not-seem-to-work-in-pytorch
        feat_maps_bboxes = self.roi_align(input=feature_maps, rois=resized_bounding_boxes_list)

        # next, reshape + project to same dimension as LayoutLM.
        batch_size = input_ids.shape[0]
        seq_len = input_ids.shape[1]
        feat_maps_bboxes = feat_maps_bboxes.view(batch_size, seq_len, -1)  # Shape (batch_size, seq_len, 1024*3*3)
        projected_feat_maps_bboxes = self.projection(feat_maps_bboxes)  # Shape (batch_size, seq_len, hidden_size)

        # add those to the sequence_output - shape (batch_size, seq_len, hidden_size)
        sequence_output += projected_feat_maps_bboxes

        sequence_output = self.dropout(sequence_output)
        logits = self.classifier(sequence_output)

        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            if attention_mask is not None:
                active_loss = attention_mask.view(-1) == 1
                active_logits = logits.view(-1, self.num_labels)[active_loss]
                active_labels = labels.view(-1)[active_loss]
                loss = loss_fct(active_logits, active_labels)
            else:
                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
```

Error:

```
      3 from transformers.onnx import export
      4 def save_onnx(save_path):
      5     onnx_config = LayoutLMOnnxConfig(model.config)
      6     export(preprocessor=tokenizer, model=model.cpu(), config=onnx_config, output=Path(save_path), opset=11)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1206             return modules[name]
   1207         raise AttributeError("'{}' object has no attribute '{}'".format(
   1208             type(self).__name__, name))
   1209
   1210     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'LayoutLMForTokenClassification' object has no attribute 'config'
```
attribute 'config' Please help.@NielsRogge ### Who can help? @NielsRogge @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Step 1 . Run this notebook - https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Add_image_embeddings_to_LayoutLM.ipynb?authuser=4#scrollTo=Vr4sG80hu6rC Step 2 - Run the model conversion code - from pathlib import Path from transformers.models.layoutlm import LayoutLMOnnxConfig from transformers.onnx import export def save_onnx(save_path): onnx_config = LayoutLMOnnxConfig(model.config) export(preprocessor=tokenizer, model=model.cpu(), config=onnx_config, output=Path(save_path),opset=11) print("Save model as ONNX") save_onnx('/content/data/model/model.onnx') I have also tried this method, but the output is blank.------------------------------------------------------- def save_onnx(save_path): configuration = LayoutLMConfig() onnx_config = LayoutLMOnnxConfig(configuration) export(preprocessor=tokenizer, model=model.cpu(), config=onnx_config, output=Path(save_path),opset=11) Please let me know if you will need anything else. ### Expected behavior The converted onnx model is produced in the instructed directory
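The traceback above is raised because `export()` reads `model.config`, and the custom wrapper subclasses `nn.Module` directly, so it never gets a `config` attribute of its own. A minimal sketch of the shape of a fix, using illustrative stand-in classes rather than the real LayoutLM API:

```python
# Hypothetical sketch: an export helper that looks up `model.config` only
# works if the wrapper re-exposes the wrapped backbone's config on itself.
class FakeBackbone:
    """Stand-in for LayoutLMModel; real code would hold the pretrained model."""
    def __init__(self):
        self.config = {"hidden_size": 768}

class TokenClassifierWrapper:
    def __init__(self):
        self.backbone = FakeBackbone()
        # Re-exporting the inner config is what avoids:
        # AttributeError: '...' object has no attribute 'config'
        self.config = self.backbone.config

model = TokenClassifierWrapper()
print(model.config["hidden_size"])  # 768
```

The same one-line assignment (`self.config = self.layoutlm.config`) in the notebook's `__init__` would be a plausible starting point, though the exported graph still has to be checked separately.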
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18538/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18537
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18537/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18537/comments
https://api.github.com/repos/huggingface/transformers/issues/18537/events
https://github.com/huggingface/transformers/issues/18537
1,332,563,034
I_kwDOCUB6oc5PbUxa
18,537
AutoModel(s) do not respect the `revision` flag while loading custom models
{ "login": "ankrgyl", "id": 565363, "node_id": "MDQ6VXNlcjU2NTM2Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankrgyl", "html_url": "https://github.com/ankrgyl", "followers_url": "https://api.github.com/users/ankrgyl/followers", "following_url": "https://api.github.com/users/ankrgyl/following{/other_user}", "gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions", "organizations_url": "https://api.github.com/users/ankrgyl/orgs", "repos_url": "https://api.github.com/users/ankrgyl/repos", "events_url": "https://api.github.com/users/ankrgyl/events{/privacy}", "received_events_url": "https://api.github.com/users/ankrgyl/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "cc @sgugger ", "Thanks for flagging! The PR linked above should solve this.", "Appreciate the quick turnaround :)" ]
1,660
1,660
1,660
CONTRIBUTOR
null
### System Info

- `transformers` version: 4.21.1
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.10.5
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```python
from transformers import AutoModelForImageClassification

m = AutoModelForImageClassification.from_pretrained(
    "sgugger/custom-resnet50d",
    trust_remote_code=True,
    revision="ed94a7c6247d8aedce4647f00f20de6875b5b292"
)

# It will print:
# Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
```

I stepped through the code and observed that `AutoConfig.from_pretrained` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/auto_factory.py#L423) swallows the `revision` from `kwargs`, meaning that later on line [433](https://github.com/huggingface/transformers/blob/main/src/transformers/models/auto/auto_factory.py#L433) it's no longer there. I believe the same issue applies to `use_auth_token`.

### Expected behavior

I think the revision should propagate to both the configuration and model.
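The mechanism described in the report can be reproduced with a toy sketch; the function names below mimic the auto classes but are illustrative, not the actual `transformers` internals:

```python
def fake_auto_config(name, **kwargs):
    # Mimics AutoConfig.from_pretrained consuming (popping) `revision`.
    revision = kwargs.pop("revision", "main")
    return {"name": name, "revision": revision}, kwargs

def fake_auto_model(name, **kwargs):
    config, remaining = fake_auto_config(name, **kwargs)
    # Bug pattern: `revision` was swallowed above, so the subsequent model
    # load silently falls back to the default branch.
    return config["revision"], remaining.get("revision", "main")

config_rev, model_rev = fake_auto_model("some/repo", revision="ed94a7c")
print(config_rev, model_rev)  # ed94a7c main
```

The fix merged for this issue makes the loading-specific kwargs available to both calls rather than letting the first call consume them.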
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18537/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18536
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18536/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18536/comments
https://api.github.com/repos/huggingface/transformers/issues/18536/events
https://github.com/huggingface/transformers/pull/18536
1,332,509,345
PR_kwDOCUB6oc482Rpu
18,536
Propose file change
{ "login": "NoelBram", "id": 49926511, "node_id": "MDQ6VXNlcjQ5OTI2NTEx", "avatar_url": "https://avatars.githubusercontent.com/u/49926511?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NoelBram", "html_url": "https://github.com/NoelBram", "followers_url": "https://api.github.com/users/NoelBram/followers", "following_url": "https://api.github.com/users/NoelBram/following{/other_user}", "gists_url": "https://api.github.com/users/NoelBram/gists{/gist_id}", "starred_url": "https://api.github.com/users/NoelBram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NoelBram/subscriptions", "organizations_url": "https://api.github.com/users/NoelBram/orgs", "repos_url": "https://api.github.com/users/NoelBram/repos", "events_url": "https://api.github.com/users/NoelBram/events{/privacy}", "received_events_url": "https://api.github.com/users/NoelBram/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18536). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,660
1,663
1,663
NONE
null
I am looking to start contributing to OSS on GitHub and trying it out first with some simple grammar fixes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18536/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18536", "html_url": "https://github.com/huggingface/transformers/pull/18536", "diff_url": "https://github.com/huggingface/transformers/pull/18536.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18536.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18535
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18535/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18535/comments
https://api.github.com/repos/huggingface/transformers/issues/18535/events
https://github.com/huggingface/transformers/pull/18535
1,332,494,659
PR_kwDOCUB6oc482OY1
18,535
Update Metrics in docs with Evaluate
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,660
1,660
MEMBER
null
This PR updates the fine-tuning tutorial to use Evaluate instead of Metrics 🙂
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18535/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18535", "html_url": "https://github.com/huggingface/transformers/pull/18535", "diff_url": "https://github.com/huggingface/transformers/pull/18535.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18535.patch", "merged_at": 1660064292000 }
https://api.github.com/repos/huggingface/transformers/issues/18534
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18534/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18534/comments
https://api.github.com/repos/huggingface/transformers/issues/18534/events
https://github.com/huggingface/transformers/pull/18534
1,332,404,376
PR_kwDOCUB6oc4816fy
18,534
Use commit hash to look in cache instead of calling head
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,660
1,660
COLLABORATOR
null
# What does this PR do?

This PR tries to limit the calls to `requests.head` made for cached models every time we try to load them. Currently on the main branch, a call to the following objects results in the following number of underlying calls to the API:

- AutoConfig: 1 (phew)
- AutoModel: 2 (model + config)
- AutoTokenizer: 9 (multiple tokenizer files and multiple calls to config)
- pipeline: 13 (all of the above + one extra call to config)
- a sharded model: number of shards + 2

This is a bit excessive, so this PR reduces this to the maximum it can by using the commit hash of the first file downloaded: if it's the same as something we have in the cache, then all files in that subfolder with the same commit hash are up to date. As you can see in the tests it does not completely succeed, because we can't detect with this reasoning if a file does not exist in the repo: if it's not in the cache, it could be because it's still not downloaded yet. But still it reduces the number of calls seen above to:

- AutoConfig: 1
- AutoModel: 1
- AutoTokenizer: between 2 and 4 depending on the tokenizer
- pipeline: between 2 and 4 depending on the tokenizer
- a sharded model: 2
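The caching idea can be sketched as a cache keyed on the commit hash; the directory layout below is illustrative, not the actual `transformers` cache format:

```python
import os
import tempfile

def store(cache_dir, repo, commit, filename, data):
    # Files are stored under a snapshot directory named after the commit hash.
    snapshot = os.path.join(cache_dir, repo, "snapshots", commit)
    os.makedirs(snapshot, exist_ok=True)
    with open(os.path.join(snapshot, filename), "w") as f:
        f.write(data)

def lookup(cache_dir, repo, commit, filename):
    # A hit keyed on the commit hash cannot be stale, so no HEAD call is
    # needed to revalidate it.
    path = os.path.join(cache_dir, repo, "snapshots", commit, filename)
    return path if os.path.isfile(path) else None

cache = tempfile.mkdtemp()
store(cache, "some-org/some-model", "abc123", "config.json", "{}")
hit = lookup(cache, "some-org/some-model", "abc123", "config.json")
# A miss is ambiguous: the file may not exist in the repo, or may simply not
# have been downloaded yet -- the one case a network call is still needed for.
miss = lookup(cache, "some-org/some-model", "abc123", "tokenizer.json")
print(hit is not None, miss)  # True None
```

This is exactly the limitation the PR description calls out: a cache hit proves freshness, but a cache miss proves nothing.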
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18534/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18534/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18534", "html_url": "https://github.com/huggingface/transformers/pull/18534", "diff_url": "https://github.com/huggingface/transformers/pull/18534.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18534.patch", "merged_at": 1660146919000 }
https://api.github.com/repos/huggingface/transformers/issues/18533
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18533/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18533/comments
https://api.github.com/repos/huggingface/transformers/issues/18533/events
https://github.com/huggingface/transformers/pull/18533
1,332,171,845
PR_kwDOCUB6oc481HzN
18,533
Add ConvNeXt Mask R-CNN
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,659
1,667
1,667
CONTRIBUTOR
null
# What does this PR do?

This PR is an initial draft for implementing the classic Mask R-CNN framework with ConvNeXt as backbone. The framework is implemented in a single script, with the exception of 3 files (for now):

* assign_result.py
* losses.py
* mask_target.py

As we have a one model, one file policy, I'm reimplementing ConvNeXt leveraging Copied from statements. So `ConvNextMaskRCNNModel` is almost identical to `ConvNextModel`. This way, the backbone used for object detection stays independent from the original one. In this case for instance, extra layernorms are added after each stage.

There's a dependency on torchvision, which is used for NMS (non-maximum suppression, a postprocessing algorithm used by both the RPN head and the RoI head).

To do:

- [x] update NumPy logic to pure PyTorch (i.e. channels first everywhere) - see branch `add_convnext_maskrcnn_torch_shapes`
- [ ] update outputs of model to have channels first (no NumPy)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18533/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18533", "html_url": "https://github.com/huggingface/transformers/pull/18533", "diff_url": "https://github.com/huggingface/transformers/pull/18533.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18533.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18532
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18532/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18532/comments
https://api.github.com/repos/huggingface/transformers/issues/18532/events
https://github.com/huggingface/transformers/pull/18532
1,332,085,347
PR_kwDOCUB6oc4801Hv
18,532
Update perf_train_gpu_one.mdx
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
CONTRIBUTOR
null
Fixes doc newlines (which were causing markdown parser errors) preview rendering correctly: <img width="500" alt="Screenshot 2022-08-08 at 18 35 08" src="https://user-images.githubusercontent.com/11827707/183468189-0be58ab5-b1fa-4a98-a4f4-ae4751960933.png">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18532/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18532/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18532", "html_url": "https://github.com/huggingface/transformers/pull/18532", "diff_url": "https://github.com/huggingface/transformers/pull/18532.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18532.patch", "merged_at": 1659983615000 }
https://api.github.com/repos/huggingface/transformers/issues/18531
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18531/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18531/comments
https://api.github.com/repos/huggingface/transformers/issues/18531/events
https://github.com/huggingface/transformers/pull/18531
1,332,079,285
PR_kwDOCUB6oc480zzy
18,531
Update BLOOM parameter counts
{ "login": "Muennighoff", "id": 62820084, "node_id": "MDQ6VXNlcjYyODIwMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muennighoff", "html_url": "https://github.com/Muennighoff", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "repos_url": "https://api.github.com/users/Muennighoff/repos", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Muennighoff !\r\nThanks for the fix, just FI the original model sizes were taken from: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml/smaller_models \r\nAnd I am afraid changing model names can lead to some breaking changes (thinking especially of all the Spaces that are using these models)\r\nI think maybe it's safer to rename the models as they were and discuss how we can fix that here ", "I think it's fine as old links still work\r\n```\r\nNew: Automatic Redirection\r\nAll links to this model will automatically redirect to the new location, including git operations. However, to avoid confusion, we recommend updating any existing local clones to point to the new repository URL. To do so, you can use the following command: git remote set-url origin {NEW_URL}\r\n```\r\n\r\n", "_The documentation is not available anymore as the PR was closed or merged._", "Ok if this is the case sounds good to me! 💪 Thanks for the fix!", "Note that the spaces will probably still break; As e.g. 
`AutoTokenizer.from_pretrained(\"bigscience/bloom-350m\")` no longer works", "Wait I think you might have broken old links.\r\n ```\r\n \r\nTraceback (most recent call last):\r\n File \"/Users/thomas/code/bigscience/transformers-Official/src/transformers/configuration_utils.py\", line 619, in _get_config_dict\r\n resolved_config_file = cached_path(\r\n File \"/Users/thomas/code/bigscience/transformers-Official/src/transformers/utils/hub.py\", line 285, in cached_path\r\n output_path = get_from_cache(\r\n File \"/Users/thomas/code/bigscience/transformers-Official/src/transformers/utils/hub.py\", line 509, in get_from_cache\r\n raise OSError(\r\nOSError: Distant resource does not have an ETag, we won't be able to reliably ensure reproducibility.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/thomas/code/bigscience/transformers-Official/src/transformers/models/auto/auto_factory.py\", line 423, in from_pretrained\r\n config, kwargs = AutoConfig.from_pretrained(\r\n File \"/Users/thomas/code/bigscience/transformers-Official/src/transformers/models/auto/configuration_auto.py\", line 731, in from_pretrained\r\n config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/Users/thomas/code/bigscience/transformers-Official/src/transformers/configuration_utils.py\", line 557, in get_config_dict\r\n config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/Users/thomas/code/bigscience/transformers-Official/src/transformers/configuration_utils.py\", line 659, in _get_config_dict\r\n raise EnvironmentError(\r\nOSError: Can't load config for 'bigscience/bloom-350m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. 
Otherwise, make sure 'bigscience/bloom-350m' is the correct path to a directory containing a config.json file\r\n```\r\n\r\nI'm using `transformers=4.21.0`", "Yes I can confirm this breaks loading the model using `pipeline` and tokenizers as well (using transformers=4.21.0 and Google Colab). \r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\r\n\r\nMAX_NEW_TOKENS = 128\r\nmodel_name = \"bigscience/bloom-350m\"\r\ntext = \"Hello my name is\"\r\n\r\npipe = pipeline(task=\"text-generation\", model=model_name)\r\n```\r\n\r\n```\r\nOSError Traceback (most recent call last)\r\n[/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py](https://localhost:8080/#) in _get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 655 except EnvironmentError:\r\n 656 raise EnvironmentError(\r\n--> 657 f\"Can't load config for '{pretrained_model_name_or_path}'. If you were trying to load it from \"\r\n 658 \"'https://huggingface.co/models', make sure you don't have a local directory with the same name. \"\r\n 659 f\"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory \"\r\n\r\nOSError: Can't load config for 'bigscience/bloom-350m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bigscience/bloom-350m' is the correct path to a directory containing a config.json file\r\n```\r\nDoes not work also for models\r\n```\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, device_map=\"auto\")\r\n```\r\n\r\nCould you point us on how you got:\r\n\r\n```\r\n\r\nNew: Automatic Redirection\r\nAll links to this model will automatically redirect to the new location, including git operations. However, to avoid confusion, we recommend updating any existing local clones to point to the new repository URL. 
To do so, you can use the following command: git remote set-url origin {NEW_URL}\r\n```\r\nWe can probably fix it through a PR ", "> I think it's fine as old links still work\r\n> \r\n> ```\r\n> New: Automatic Redirection\r\n> All links to this model will automatically redirect to the new location, including git operations. However, to avoid confusion, we recommend updating any existing local clones to point to the new repository URL. To do so, you can use the following command: git remote set-url origin {NEW_URL}\r\n> ```\r\n\r\nThis just means that the old URLs still work, i.e. https://huggingface.co/bigscience/bloom-350m\r\n(It's from the Settings screen on the Hub).\r\n\r\nThe model names need to be updated (which is not a bug I think).", "I'd say this is a breaking change. @sgugger does the `from_pretrained` method not take in account redirection?", "I addressed a potential fix in: https://github.com/huggingface/transformers/pull/18542 now I can load BLOOM models with old links but I am not sure if this breaks anything else (maybe let's wait for a review and the results of the CI tests there)", "`huggingface_hub` does not take into account redirections in its download methods. The issue was given low priority from what I understand, you can bug folks internally to show it's a bit important :-)", "Let's merge this?\r\nI think the damage is done & reverting now would just cause more damage. I will communicate such a change more extensively next time, sorry for the inconveniences caused. " ]
1,659
1,660
1,660
CONTRIBUTOR
null
Update parameter counts of BLOOM models. The original counts were incorrect & have already been updated on the hub. I can't add reviewers, but @younesbelkada @thomasw21 may want to review.

Script for counting:

```python
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

count_parameters(AutoModelForCausalLM.from_pretrained("bigscience/bloom-350m"))
```

🌸🤗
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18531/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18531", "html_url": "https://github.com/huggingface/transformers/pull/18531", "diff_url": "https://github.com/huggingface/transformers/pull/18531.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18531.patch", "merged_at": 1660325778000 }
https://api.github.com/repos/huggingface/transformers/issues/18530
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18530/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18530/comments
https://api.github.com/repos/huggingface/transformers/issues/18530/events
https://github.com/huggingface/transformers/issues/18530
1,332,045,752
I_kwDOCUB6oc5PZWe4
18,530
[New Model] Donut: Document Understanding Transformer
{ "login": "WaterKnight1998", "id": 41203448, "node_id": "MDQ6VXNlcjQxMjAzNDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WaterKnight1998", "html_url": "https://github.com/WaterKnight1998", "followers_url": "https://api.github.com/users/WaterKnight1998/followers", "following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}", "gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions", "organizations_url": "https://api.github.com/users/WaterKnight1998/orgs", "repos_url": "https://api.github.com/users/WaterKnight1998/repos", "events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}", "received_events_url": "https://api.github.com/users/WaterKnight1998/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "See #18488 ", "Cool to see you working there, thank you very much =D" ]
1,659
1,659
1,659
NONE
null
### Model description Donut doughnut, Document understanding transformer, is a new method of document understanding that utilizes an OCR-free end-to-end Transformer model. Donut does not require off-the-shelf OCR engines/APIs, yet it shows state-of-the-art performances on various visual document understanding tasks, such as visual document classification or information extraction (a.k.a. document parsing). In addition, we present SynthDoG dog, Synthetic Document Generator, that helps the model pre-training to be flexible on various languages and domains. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Code @clovaai : https://github.com/clovaai/donut Weights: - https://huggingface.co/naver-clova-ix/donut-base - https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v1-2560 - https://huggingface.co/naver-clova-ix/donut-base-finetuned-zhtrainticket - https://huggingface.co/naver-clova-ix/donut-base-finetuned-rvlcdip - https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v1 - https://huggingface.co/naver-clova-ix/donut-base-finetuned-docvqa - https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18530/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18530/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18529
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18529/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18529/comments
https://api.github.com/repos/huggingface/transformers/issues/18529/events
https://github.com/huggingface/transformers/pull/18529
1,332,014,565
PR_kwDOCUB6oc480l1j
18,529
Fix ORTTrainer failure on DeBERTa(base/v2/sew_d) fp16 training
{ "login": "JingyaHuang", "id": 44135271, "node_id": "MDQ6VXNlcjQ0MTM1Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/44135271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JingyaHuang", "html_url": "https://github.com/JingyaHuang", "followers_url": "https://api.github.com/users/JingyaHuang/followers", "following_url": "https://api.github.com/users/JingyaHuang/following{/other_user}", "gists_url": "https://api.github.com/users/JingyaHuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/JingyaHuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JingyaHuang/subscriptions", "organizations_url": "https://api.github.com/users/JingyaHuang/orgs", "repos_url": "https://api.github.com/users/JingyaHuang/repos", "events_url": "https://api.github.com/users/JingyaHuang/events{/privacy}", "received_events_url": "https://api.github.com/users/JingyaHuang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18529). All of your documentation changes will be reflected on that endpoint.", "close as it turned to be too messy even after rebasing. " ]
1,659
1,661
1,660
CONTRIBUTOR
null
# What does this PR do? __Context__ It was reported in optimum https://github.com/huggingface/optimum/issues/305 that the training on DeBERTa with optimum.onnxruntime.ORTTrainer is broken. After investigation, the break comes from two causes: * At that time `XDropOut` didn't have a symbolic function. And it has been implemented by @garymm in https://github.com/huggingface/transformers/pull/17502 and has been merged to the main of transformers. * The implementation of DeBERTa have some numpy/math operations that led to incorrect export. This will be fixed in https://github.com/huggingface/transformers/pull/18272. However with those two fixes, the fp32 training will work, but the mixed-precision training will fail due to mismatched inputs dtype for some `Matmul` nodes. In https://github.com/huggingface/transformers/pull/18272, some `sqrt` results are cast to `fp32`, and they need to be re-casted to fp16 before `Matmul` ops, and this PR is supposed to add the re-cast part. Fixes #https://github.com/huggingface/optimum/issues/305 ## Who can review? @LysandreJik @patrickvonplaten @lewtun
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18529/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18529", "html_url": "https://github.com/huggingface/transformers/pull/18529", "diff_url": "https://github.com/huggingface/transformers/pull/18529.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18529.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18528
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18528/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18528/comments
https://api.github.com/repos/huggingface/transformers/issues/18528/events
https://github.com/huggingface/transformers/issues/18528
1,332,004,992
I_kwDOCUB6oc5PZMiA
18,528
[New Model] LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding
{ "login": "WaterKnight1998", "id": 41203448, "node_id": "MDQ6VXNlcjQxMjAzNDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WaterKnight1998", "html_url": "https://github.com/WaterKnight1998", "followers_url": "https://api.github.com/users/WaterKnight1998/followers", "following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}", "gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions", "organizations_url": "https://api.github.com/users/WaterKnight1998/orgs", "repos_url": "https://api.github.com/users/WaterKnight1998/repos", "events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}", "received_events_url": "https://api.github.com/users/WaterKnight1998/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi,\r\nthanks for your great effort. Contact me if any problem encountered :)", "Closing as it has been added in #19450 " ]
1,659
1,666
1,666
NONE
null
### Model description Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure. Code and model are publicly available at [this https URL](https://github.com/jpWang/LiLT). ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Code @jpWang : https://github.com/jpWang/LiLT Weights (Author @ManuelFay ): - https://huggingface.co/manu/lilt-camembert-dit-base-hf - https://huggingface.co/manu/lilt-camembert-base - https://huggingface.co/manu/lilt-camembert-dit-base - https://huggingface.co/manu/lilt-infoxlm-base
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18528/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18528/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18527
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18527/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18527/comments
https://api.github.com/repos/huggingface/transformers/issues/18527/events
https://github.com/huggingface/transformers/pull/18527
1,331,933,312
PR_kwDOCUB6oc480UVe
18,527
unpin resampy
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looks good! The test running time is also normal." ]
1,659
1,659
1,659
COLLABORATOR
null
# What does this PR do? unpin resampy
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18527/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18527", "html_url": "https://github.com/huggingface/transformers/pull/18527", "diff_url": "https://github.com/huggingface/transformers/pull/18527.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18527.patch", "merged_at": 1659973451000 }
https://api.github.com/repos/huggingface/transformers/issues/18526
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18526/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18526/comments
https://api.github.com/repos/huggingface/transformers/issues/18526/events
https://github.com/huggingface/transformers/pull/18526
1,331,918,723
PR_kwDOCUB6oc480RLN
18,526
Specify en in doc-builder README example
{ "login": "ankrgyl", "id": 565363, "node_id": "MDQ6VXNlcjU2NTM2Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankrgyl", "html_url": "https://github.com/ankrgyl", "followers_url": "https://api.github.com/users/ankrgyl/followers", "following_url": "https://api.github.com/users/ankrgyl/following{/other_user}", "gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions", "organizations_url": "https://api.github.com/users/ankrgyl/orgs", "repos_url": "https://api.github.com/users/ankrgyl/repos", "events_url": "https://api.github.com/users/ankrgyl/events{/privacy}", "received_events_url": "https://api.github.com/users/ankrgyl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
CONTRIBUTOR
null
# What does this PR do? Corrects a small typo in the docs README Fixes #18508 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18526/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18526", "html_url": "https://github.com/huggingface/transformers/pull/18526", "diff_url": "https://github.com/huggingface/transformers/pull/18526.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18526.patch", "merged_at": 1659968538000 }
https://api.github.com/repos/huggingface/transformers/issues/18525
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18525/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18525/comments
https://api.github.com/repos/huggingface/transformers/issues/18525/events
https://github.com/huggingface/transformers/issues/18525
1,331,914,135
I_kwDOCUB6oc5PY2WX
18,525
[Summary] Regarding memory issue in tests
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" }, { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
open
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "**TensorFlow hangs if a TF model is forked**\r\n\r\nThis will hangs\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import TFDistilBertModel, DistilBertConfig\r\nimport multiprocessing\r\n\r\nconfig = DistilBertConfig()\r\nconfig.n_layers = 1\r\nconfig.n_heads = 2\r\nconfig.dim = 4\r\nconfig.hidden_dim = 4\r\n\r\nmodel = TFDistilBertModel(config)\r\n\r\ndef func(i):\r\n\r\n print(f\"func with arg {i}: start\")\r\n inputs = tf.ones(shape=(2, 3), dtype=tf.int32)\r\n outputs = model(inputs)\r\n print(f\"func with arg {i}: done\")\r\n return outputs\r\n\r\nprint(\"start\")\r\nwith multiprocessing.Pool(processes=1) as pool:\r\n r = pool.map(func=func, iterable=range(16))\r\n\r\nprint(\"all done\")\r\nprint(len(r))\r\n```", "**Strange hanging with TensorFlow Probability**\r\n\r\nRunning the test with `--forked`\r\n```\r\npython3 -m pytest --forked -n 2 --max-worker-restart=0 --dist=loadfile -s --make-reports=tests_tf tests/models/auto/test_modeling_tf_auto.py | tee tests_output.txt\r\n```\r\nwith `tensorflow-probability` installed will hang. 
After uninstalling `tensorflow-probability`, the tests finish quickly.\r\n\r\n\r\n(I am not sure what happens with `tensorflow-probability` here though)\r\n\r\n----\r\n\r\nActually, running the following also hangs:\r\n```\r\npython3 -m pytest --forked -v test_tf.py\r\n```\r\nwith `test_tf.py` being\r\n```\r\nfrom transformers import TFAutoModelWithLMHead\r\n\r\n#import tensorflow_probability as tfp\r\nfrom transformers.models.tapas.modeling_tf_tapas import TF_TAPAS_PRETRAINED_MODEL_ARCHIVE_LIST\r\n\r\ndef test_foo():\r\n model = TFAutoModelWithLMHead.from_pretrained(\"julien-c/dummy-unknown\")\r\n```", "**--forked hang with Flax tests**\r\n\r\nRunning the following test with `--forked` will hang\r\n\r\n```python\r\npython3 -m pytest --forked -v test_flax.py\r\n```\r\n\r\nwith `test_flax.py` being\r\n\r\n```python\r\ndef test_flax_foo():\r\n\r\n from transformers import FlaxDistilBertModel, DistilBertConfig\r\n import numpy as np\r\n\r\n config = DistilBertConfig()\r\n config.n_layers = 1\r\n config.n_heads = 2\r\n config.dim = 4\r\n config.hidden_dim = 4\r\n model = FlaxDistilBertModel(config)\r\n```", "cc @LysandreJik for reading :-)", "To ease the debugging process, the code snippet below is a self-contained script for running `FlaxBart`. 
The results looks like\r\n\r\n(`mem_FlaxBartForConditionalGeneration.json`, the memory usage in `MB`)\r\n```python \r\n[\r\n 157772.0,\r\n 823724.0,\r\n 850768.0,\r\n 878004.0,\r\n 905340.0,\r\n 933288.0,\r\n 959816.0,\r\n 986800.0,\r\n 1013596.0,\r\n 1041560.0,\r\n 1067088.0,\r\n 1095960.0,\r\n 1121640.0,\r\n 1149596.0,\r\n 1175144.0,\r\n 1203396.0,\r\n 1228764.0,\r\n 1256536.0,\r\n 1282528.0,\r\n 1309668.0,\r\n 1337724.0,\r\n 1362584.0,\r\n 1390300.0,\r\n 1417172.0,\r\n 1443084.0,\r\n 1471568.0,\r\n 1494896.0,\r\n 1500424.0,\r\n 1512176.0,\r\n 1519920.0,\r\n 1529484.0\r\n]\r\n```\r\n\r\nHere is the code snippet to run `test_beam_search_generate`.\r\n(This removes all `unittest` elements, and running without pytest)\r\n\r\n```python run_flax_bart.py\r\nimport copy\r\nimport json\r\nimport numpy as np\r\nimport os\r\nimport psutil\r\nimport random\r\nimport jax.numpy as jnp\r\nfrom jax import jit\r\n\r\nfrom transformers import BartConfig, FlaxBartModel, FlaxBartForConditionalGeneration, FlaxBartForSequenceClassification, FlaxBartForQuestionAnswering\r\n\r\n\r\ndef ids_tensor(shape, vocab_size, rng=None):\r\n \"\"\"Creates a random int32 tensor of the shape within the vocab size.\"\"\"\r\n if rng is None:\r\n rng = random.Random()\r\n\r\n total_dims = 1\r\n for dim in shape:\r\n total_dims *= dim\r\n\r\n values = []\r\n for _ in range(total_dims):\r\n values.append(rng.randint(0, vocab_size - 1))\r\n\r\n output = np.array(values, dtype=jnp.int32).reshape(shape)\r\n\r\n return output\r\n\r\n\r\ndef random_attention_mask(shape, rng=None):\r\n attn_mask = ids_tensor(shape, vocab_size=2, rng=rng)\r\n # make sure that at least one token is attended to for each batch\r\n attn_mask[:, -1] = 1\r\n return attn_mask\r\n\r\n\r\ndef shift_tokens_right(input_ids: np.array, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:\r\n \"\"\"\r\n Shift input ids one token to the right.\r\n \"\"\"\r\n shifted_input_ids = np.zeros_like(input_ids)\r\n shifted_input_ids[:, 1:] = 
input_ids[:, :-1]\r\n shifted_input_ids[:, 0] = decoder_start_token_id\r\n\r\n shifted_input_ids = np.where(shifted_input_ids == -100, pad_token_id, shifted_input_ids)\r\n return shifted_input_ids\r\n\r\n\r\ndef prepare_bart_inputs_dict(\r\n config,\r\n input_ids,\r\n decoder_input_ids=None,\r\n attention_mask=None,\r\n decoder_attention_mask=None,\r\n head_mask=None,\r\n decoder_head_mask=None,\r\n cross_attn_head_mask=None,\r\n):\r\n if attention_mask is None:\r\n attention_mask = np.where(input_ids != config.pad_token_id, 1, 0)\r\n if decoder_attention_mask is None:\r\n decoder_attention_mask = np.where(decoder_input_ids != config.pad_token_id, 1, 0)\r\n if head_mask is None:\r\n head_mask = np.ones((config.encoder_layers, config.encoder_attention_heads))\r\n if decoder_head_mask is None:\r\n decoder_head_mask = np.ones((config.decoder_layers, config.decoder_attention_heads))\r\n if cross_attn_head_mask is None:\r\n cross_attn_head_mask = np.ones((config.decoder_layers, config.decoder_attention_heads))\r\n return {\r\n \"input_ids\": input_ids,\r\n \"decoder_input_ids\": decoder_input_ids,\r\n \"attention_mask\": attention_mask,\r\n \"decoder_attention_mask\": attention_mask,\r\n }\r\n\r\n\r\nclass FlaxBartModelTester:\r\n def __init__(\r\n self,\r\n parent,\r\n batch_size=13,\r\n seq_length=7,\r\n is_training=True,\r\n use_labels=False,\r\n vocab_size=99,\r\n hidden_size=16,\r\n num_hidden_layers=2,\r\n num_attention_heads=4,\r\n intermediate_size=4,\r\n hidden_act=\"gelu\",\r\n hidden_dropout_prob=0.1,\r\n attention_probs_dropout_prob=0.1,\r\n max_position_embeddings=32,\r\n eos_token_id=2,\r\n pad_token_id=1,\r\n bos_token_id=0,\r\n initializer_range=0.02,\r\n ):\r\n self.parent = parent\r\n self.batch_size = batch_size\r\n self.seq_length = seq_length\r\n self.is_training = is_training\r\n self.use_labels = use_labels\r\n self.vocab_size = vocab_size\r\n self.hidden_size = hidden_size\r\n self.num_hidden_layers = num_hidden_layers\r\n 
self.num_attention_heads = num_attention_heads\r\n self.intermediate_size = intermediate_size\r\n self.hidden_act = hidden_act\r\n self.hidden_dropout_prob = hidden_dropout_prob\r\n self.attention_probs_dropout_prob = attention_probs_dropout_prob\r\n self.max_position_embeddings = max_position_embeddings\r\n self.eos_token_id = eos_token_id\r\n self.pad_token_id = pad_token_id\r\n self.bos_token_id = bos_token_id\r\n self.initializer_range = initializer_range\r\n\r\n def prepare_config_and_inputs(self):\r\n input_ids = np.clip(ids_tensor([self.batch_size, self.seq_length - 1], self.vocab_size), 3, self.vocab_size)\r\n input_ids = np.concatenate((input_ids, 2 * np.ones((self.batch_size, 1), dtype=np.int64)), -1)\r\n\r\n decoder_input_ids = shift_tokens_right(input_ids, 1, 2)\r\n\r\n config = BartConfig(\r\n vocab_size=self.vocab_size,\r\n d_model=self.hidden_size,\r\n encoder_layers=self.num_hidden_layers,\r\n decoder_layers=self.num_hidden_layers,\r\n encoder_attention_heads=self.num_attention_heads,\r\n decoder_attention_heads=self.num_attention_heads,\r\n encoder_ffn_dim=self.intermediate_size,\r\n decoder_ffn_dim=self.intermediate_size,\r\n dropout=self.hidden_dropout_prob,\r\n attention_dropout=self.attention_probs_dropout_prob,\r\n max_position_embeddings=self.max_position_embeddings,\r\n eos_token_id=self.eos_token_id,\r\n bos_token_id=self.bos_token_id,\r\n pad_token_id=self.pad_token_id,\r\n initializer_range=self.initializer_range,\r\n use_cache=False,\r\n )\r\n inputs_dict = prepare_bart_inputs_dict(config, input_ids, decoder_input_ids)\r\n return config, inputs_dict\r\n\r\n def prepare_config_and_inputs_for_common(self):\r\n config, inputs_dict = self.prepare_config_and_inputs()\r\n return config, inputs_dict\r\n\r\n\r\nclass FlaxBartModelTest:\r\n is_encoder_decoder = True\r\n\r\n def __init__(self, model_class):\r\n self.model_tester = FlaxBartModelTester(self)\r\n self.model_class = model_class\r\n\r\n def _prepare_for_class(self, inputs_dict, 
model_class):\r\n inputs_dict = copy.deepcopy(inputs_dict)\r\n\r\n # hack for now until we have AutoModel classes\r\n if \"ForMultipleChoice\" in model_class.__name__:\r\n inputs_dict = {\r\n k: jnp.broadcast_to(v[:, None], (v.shape[0], self.model_tester.num_choices, v.shape[-1]))\r\n if isinstance(v, (jnp.ndarray, np.ndarray))\r\n else v\r\n for k, v in inputs_dict.items()\r\n }\r\n\r\n return inputs_dict\r\n\r\n def _get_input_ids_and_config(self):\r\n config, inputs = self.model_tester.prepare_config_and_inputs_for_common()\r\n\r\n # cut to half length & take max batch_size 3\r\n max_batch_size = 2\r\n sequence_length = inputs[\"input_ids\"].shape[-1] // 2\r\n input_ids = inputs[\"input_ids\"][:max_batch_size, :sequence_length]\r\n\r\n attention_mask = jnp.ones_like(input_ids)\r\n attention_mask = attention_mask[:max_batch_size, :sequence_length]\r\n\r\n # generate max 5 tokens\r\n max_length = input_ids.shape[-1] + 5\r\n if config.eos_token_id is not None and config.pad_token_id is None:\r\n # hack to allow generate for models such as GPT2 as is done in `generate()`\r\n config.pad_token_id = config.eos_token_id\r\n return config, input_ids, attention_mask, max_length\r\n\r\n def test_hidden_states_output(self):\r\n def check_hidden_states_output(inputs_dict, config, model_class):\r\n model = model_class(config)\r\n model_inputs = self._prepare_for_class(inputs_dict, model_class)\r\n outputs = model(**model_inputs)\r\n\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n\r\n inputs_dict[\"output_hidden_states\"] = True\r\n check_hidden_states_output(inputs_dict, config, self.model_class)\r\n\r\n # check that output_hidden_states also work using config\r\n del inputs_dict[\"output_hidden_states\"]\r\n config.output_hidden_states = True\r\n\r\n check_hidden_states_output(inputs_dict, config, self.model_class)\r\n\r\n def test_beam_search_generate(self):\r\n config, input_ids, _, max_length = self._get_input_ids_and_config()\r\n 
config.do_sample = False\r\n config.max_length = max_length\r\n config.num_beams = 2\r\n\r\n model = self.model_class(config)\r\n\r\n generation_outputs = model.generate(input_ids).sequences\r\n jit_generate = jit(model.generate)\r\n jit_generation_outputs = jit_generate(input_ids).sequences\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\r\n all_model_classes = (\r\n (\r\n # FlaxBartModel,\r\n FlaxBartForConditionalGeneration,\r\n # FlaxBartForSequenceClassification,\r\n # FlaxBartForQuestionAnswering,\r\n )\r\n )\r\n\r\n for model_class in all_model_classes:\r\n\r\n test = FlaxBartModelTest(model_class)\r\n all_rss = []\r\n\r\n p = psutil.Process(os.getpid())\r\n m = p.memory_full_info()\r\n rss = m.rss / 1024\r\n all_rss.append(rss)\r\n\r\n for i in range(30):\r\n\r\n # This is fine\r\n # test.test_hidden_states_output()\r\n\r\n # Mem. leak\r\n test.test_beam_search_generate()\r\n\r\n m = p.memory_full_info()\r\n rss = m.rss / 1024\r\n all_rss.append(rss)\r\n\r\n fn = f\"mem_{model_class.__name__}.json\"\r\n\r\n with open(fn, \"w\") as fp:\r\n json.dump(all_rss, fp, ensure_ascii=False, indent=4)\r\n```", "Thanks for summarizing all the info, @ydshieh!", "To debug `test_torch_fx` more easily:\r\n\r\nwith `n_iter = 500`:\r\n - with new process: + 60 MB\r\n - without new process: + 1700 MB\r\n - without `scripted(**filtered_inputs)`: + 400 MB\r\n - without `scripted(**filtered_inputs)` and `torch.jit.script(traced_model)`: + 30 MB\r\n\r\n```python3\r\nimport copy\r\nimport torch\r\nimport tempfile\r\nimport os\r\nimport json\r\nimport pickle\r\nimport psutil\r\nimport multiprocessing\r\n\r\nfrom transformers.utils.fx import symbolic_trace\r\nfrom transformers import BartConfig, BartModel\r\n\r\n\r\ntorch_device = \"cpu\"\r\nmodel_class = BartModel\r\nconfig_dict = {\r\n \"activation_dropout\": 0.0,\r\n \"activation_function\": \"gelu\",\r\n \"attention_dropout\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"classifier_dropout\": 0.0,\r\n \"d_model\": 16,\r\n 
\"decoder_attention_heads\": 4,\r\n \"decoder_ffn_dim\": 4,\r\n \"decoder_layerdrop\": 0.0,\r\n \"decoder_layers\": 2,\r\n \"decoder_start_token_id\": 2,\r\n \"dropout\": 0.1,\r\n \"encoder_attention_heads\": 4,\r\n \"encoder_ffn_dim\": 4,\r\n \"encoder_layerdrop\": 0.0,\r\n \"encoder_layers\": 2,\r\n \"eos_token_id\": 2,\r\n \"forced_eos_token_id\": None,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\",\r\n \"2\": \"LABEL_2\"\r\n },\r\n \"init_std\": 0.02,\r\n \"is_encoder_decoder\": True,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1,\r\n \"LABEL_2\": 2\r\n },\r\n \"max_position_embeddings\": 20,\r\n \"model_type\": \"bart\",\r\n \"num_hidden_layers\": 2,\r\n \"pad_token_id\": 1,\r\n \"scale_embedding\": False,\r\n \"transformers_version\": \"4.22.0.dev0\",\r\n \"use_cache\": True,\r\n \"vocab_size\": 99\r\n}\r\nconfig = BartConfig(**config_dict)\r\ninputs = {\r\n 'input_ids': torch.tensor([\r\n [22, 30, 84, 13, 46, 95, 2],\r\n [74, 91, 58, 38, 3, 48, 2],\r\n [43, 32, 21, 60, 12, 42, 2],\r\n [20, 24, 75, 46, 62, 55, 2],\r\n [59, 91, 36, 57, 40, 36, 2],\r\n [23, 24, 33, 70, 13, 93, 2],\r\n [15, 4, 11, 45, 5, 87, 2],\r\n [78, 76, 67, 38, 3, 46, 2],\r\n [ 3, 31, 35, 85, 81, 46, 2],\r\n [47, 45, 97, 80, 75, 91, 2],\r\n [92, 49, 42, 65, 74, 98, 2],\r\n [67, 37, 84, 88, 55, 57, 2],\r\n [24, 53, 44, 36, 45, 24, 2],\r\n ], dtype=torch.int32),\r\n 'decoder_input_ids': torch.tensor([\r\n [50, 56, 84, 91, 16, 49, 54],\r\n [ 2, 71, 62, 39, 27, 4, 93],\r\n [73, 45, 61, 63, 35, 25, 7],\r\n [27, 33, 23, 86, 13, 49, 32],\r\n [74, 36, 46, 83, 18, 40, 22],\r\n [45, 69, 41, 3, 29, 56, 49],\r\n [ 3, 38, 8, 52, 17, 55, 15],\r\n [63, 79, 42, 64, 62, 39, 40],\r\n [28, 59, 69, 14, 77, 45, 36],\r\n [56, 55, 82, 35, 66, 51, 19],\r\n [18, 96, 43, 34, 16, 69, 94],\r\n [68, 65, 52, 17, 77, 78, 54],\r\n [68, 57, 74, 42, 60, 13, 91]\r\n ]),\r\n 'attention_mask': torch.tensor([\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, 
True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True]\r\n ], dtype=torch.bool),\r\n 'decoder_attention_mask': torch.tensor([\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True],\r\n [True, True, True, True, True, True, True]\r\n ], dtype=torch.bool),\r\n 'head_mask': torch.tensor([[1., 1., 1., 1.], [1., 1., 1., 1.]]),\r\n 'decoder_head_mask': torch.tensor([[1., 1., 1., 1.], [1., 1., 1., 1.]]),\r\n 'cross_attn_head_mask': torch.tensor([[1., 1., 1., 1.], [1., 1., 1., 1.]])\r\n}\r\n\r\n\r\ndef _config_zero_init(config):\r\n configs_no_init = copy.deepcopy(config)\r\n for key in configs_no_init.__dict__.keys():\r\n if \"_range\" in key or \"_std\" in key or \"initializer_factor\" in key or \"layer_scale\" in key:\r\n setattr(configs_no_init, key, 1e-10)\r\n return configs_no_init\r\n\r\ndef _run_torch_jit(in_queue, out_queue):\r\n\r\n model, input_names, filtered_inputs = in_queue.get()\r\n traced_model = symbolic_trace(model, input_names)\r\n # blocked if forked\r\n with 
torch.no_grad():\r\n traced_output = traced_model(**filtered_inputs)\r\n\r\n # Test that the model can be TorchScripted\r\n scripted = torch.jit.script(traced_model)\r\n with torch.no_grad():\r\n scripted_output = scripted(**filtered_inputs)\r\n\r\n out_queue.put((traced_model, scripted_output))\r\n out_queue.join()\r\n\r\n\r\ndef create_and_check_torch_fx_tracing(model_class, config, inputs, n_iter=100, with_new_proc=False):\r\n\r\n configs_no_init = _config_zero_init(config) # To be sure we have no Nan\r\n configs_no_init.return_dict = False\r\n\r\n model = model_class(config=configs_no_init)\r\n model.to(torch_device)\r\n model.eval()\r\n\r\n model.config.use_cache = False\r\n input_names = [\r\n \"attention_mask\",\r\n \"decoder_attention_mask\",\r\n \"decoder_input_ids\",\r\n \"input_features\",\r\n \"input_ids\",\r\n \"input_values\",\r\n ]\r\n\r\n filtered_inputs = {k: v for (k, v) in inputs.items() if k in input_names}\r\n input_names = list(filtered_inputs.keys())\r\n\r\n model_output = model(**filtered_inputs)\r\n\r\n all_rss = []\r\n\r\n p = psutil.Process(os.getpid())\r\n m = p.memory_full_info()\r\n rss = m.rss / 1024\r\n all_rss.append(rss)\r\n\r\n for i in range(n_iter):\r\n\r\n print(f\"idx: {i} - start\")\r\n\r\n if not with_new_proc:\r\n\r\n traced_model = symbolic_trace(model, input_names)\r\n with torch.no_grad():\r\n traced_output = traced_model(**filtered_inputs)\r\n\r\n # Test that the model can be TorchScripted\r\n scripted = torch.jit.script(traced_model)\r\n with torch.no_grad():\r\n scripted_output = scripted(**filtered_inputs)\r\n\r\n else:\r\n\r\n ctx = multiprocessing.get_context('spawn')\r\n\r\n in_queue = ctx.Queue()\r\n out_queue = ctx.JoinableQueue()\r\n\r\n in_queue.put((model, input_names, filtered_inputs))\r\n\r\n process = ctx.Process(target=_run_torch_jit, args=(in_queue, out_queue))\r\n process.start()\r\n traced_model, scripted_output = out_queue.get()\r\n out_queue.task_done()\r\n process.join()\r\n\r\n\r\n print(f\"idx: 
{i} - end\")\r\n print(\"=\" * 40)\r\n\r\n m = p.memory_full_info()\r\n rss = m.rss / 1024\r\n all_rss.append(rss)\r\n\r\n fn = f\"torch_jit_script_mem_with_new_proc={with_new_proc}.json\"\r\n\r\n with open(fn, \"w\") as fp:\r\n json.dump(all_rss, fp, ensure_ascii=False, indent=4)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\r\n create_and_check_torch_fx_tracing(model_class, config, inputs, n_iter=500, with_new_proc=True)\r\n create_and_check_torch_fx_tracing(model_class, config, inputs, n_iter=500, with_new_proc=False)\r\n\r\n```", "@patil-suraj @sanchit-gandhi @patrickvonplaten \r\n\r\nWe have memory leak issue in some Flax tests. Basically, I observed this happens for `test_beam_search_generate`, `test_beam_search_generate_attn_mask` and `test_beam_search_generate_logits_warper`, but there might be more.\r\nEach call to them increase memory usage by 10~30 MB.\r\n\r\nThe CircleCI job run page also shows memory issue in Flax testing (https://app.circleci.com/pipelines/github/huggingface/transformers/45317/workflows/5bcb8b8a-776c-4c58-ad99-cf2700304c05/jobs/528556/resources)\r\n\r\nTo reproduce, see [here](https://github.com/huggingface/transformers/issues/18525#issuecomment-1209063895) for `test_beam_search_generate`.\r\n\r\nNot very urgent, but we will have trouble once models are added. Could you have a look, please? Let me know if you need more information.", "Hey @ydshieh, \r\n\r\nI'm a bit under water at the moment - I'll put the issue on my TODO-list, but I can't promise to find time to look into it very soon.\r\nThis link: https://app.circleci.com/pipelines/github/huggingface/transformers/45317/workflows/5bcb8b8a-776c-4c58-ad99-cf2700304c05/jobs/528556/resources doesn't seem to show anything useful to me. \r\n\r\nAlso just to understand better, are the flax tests running on GPU or CPU?" ]
1,659
1,663
null
COLLABORATOR
null
### Description This is a short summary of the memory issue in our tests ### The following tests definitely have memory issues - PyTorch (increase ~`15 MB` each call): - test_torch_fx - test_torch_fx_output_loss - TensorFlow: - test_xla_fit - test_xla_generate_fast (increase ~`100 MB` each call) - test_xla_generate_slow - test_xla_mode - test_onnx_runtime_optimize (increase ~`8 MB` each call) - test_dataset_conversion (increase ~`0.2 MB` each call) - **Flax**: - **Almost all test methods have memory issues!** - [The CircleCI job run page](https://app.circleci.com/pipelines/github/huggingface/transformers/45317/workflows/5bcb8b8a-776c-4c58-ad99-cf2700304c05/jobs/528556/resources) demonstrates this issue too ### Some tests are also suspicious, but need more investigation. - For example, the test `test_graph_mode` has the following memory *difference* in consecutive runs (in KB): ``` [936.0, 520.0, 260.0, 520.0, 0.0, 0.0, 260.0, 520.0, 0.0, 0.0, 260.0, 0.0, 0.0, 0.0, 260.0, 260.0, 260.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] ``` (not always an increase, but it continues to happen) - For `test_saved_model_creation_extended` (in KB): ``` [144436.0, -104552.0, 1280.0, -103908.0, -1536.0, 177868.0, -33572.0, 20240.0, 170852.0, -51704.0, -8448.0, 59904.0, -48128.0, 2440.0, 34856.0, 3068.0, -3420.0, -36864.0, -6756.0, 36136.0, -2048.0, -17400.0, -4608.0, -25896.0, 4096.0, 1024.0, 22344.0, 25784.0, -256.0] ``` (sometimes some amount of memory is released, but it still leaks in the long run?) ### Pytest itself will accumulate some memory usage as tests continue to run. This is just my hypothesis: sometimes I see an increase of a few KB after a sequence of runs without a leak. ### Possible actions to take - (It's probably worth fixing this issue for a few of the tests mentioned above to gain some experience): - In this case, we can focus only on `non-slow` tests - **[Not to go]** There is a `pytest` plugin `pytest-forked` to run each test in a forked subprocess. 
But it doesn't work well with TensorFlow and Flax (some tests will hang forever). I will provide some details in the comments. - We can try to run the tests per model in separate CircleCI job steps. The output on job run pages will be a bit noisy, but we can have an extra step to print the test failures in a cleaner way.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18525/timeline
null
null
null
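The reproduction scripts in the record above track leaks by reading psutil's `memory_full_info().rss` after every iteration. The same bookkeeping can be sketched with only the standard library's `tracemalloc`; the `leaky_test` function below is a hypothetical stand-in for a test method that retains memory on every call, not one of the actual transformers tests.

```python
import json
import tracemalloc

def measure_memory_growth(fn, n_iter=5):
    """Call `fn` repeatedly and record traced heap usage (KB) after each call."""
    tracemalloc.start()
    growth = []
    for _ in range(n_iter):
        fn()
        current, _peak = tracemalloc.get_traced_memory()
        growth.append(current / 1024)
    tracemalloc.stop()
    return growth

_retained = []

def leaky_test():
    # Stand-in for a test that keeps ~100 KB alive on every call.
    _retained.append(bytearray(100 * 1024))

growth = measure_memory_growth(leaky_test)
print(json.dumps([round(g) for g in growth]))
```

A healthy test produces a flat series after the first call; a monotonically increasing one, as here, is the signature reported for `test_beam_search_generate` and the XLA tests.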
https://api.github.com/repos/huggingface/transformers/issues/18524
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18524/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18524/comments
https://api.github.com/repos/huggingface/transformers/issues/18524/events
https://github.com/huggingface/transformers/pull/18524
1,331,892,811
PR_kwDOCUB6oc480LnP
18,524
Add EntityPairClassification Pipeline, AutoClass & LUKE ONNX Support
{ "login": "kayvane1", "id": 42403093, "node_id": "MDQ6VXNlcjQyNDAzMDkz", "avatar_url": "https://avatars.githubusercontent.com/u/42403093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kayvane1", "html_url": "https://github.com/kayvane1", "followers_url": "https://api.github.com/users/kayvane1/followers", "following_url": "https://api.github.com/users/kayvane1/following{/other_user}", "gists_url": "https://api.github.com/users/kayvane1/gists{/gist_id}", "starred_url": "https://api.github.com/users/kayvane1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kayvane1/subscriptions", "organizations_url": "https://api.github.com/users/kayvane1/orgs", "repos_url": "https://api.github.com/users/kayvane1/repos", "events_url": "https://api.github.com/users/kayvane1/events{/privacy}", "received_events_url": "https://api.github.com/users/kayvane1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18524). All of your documentation changes will be reflected on that endpoint.", "Hi @NielsRogge - what next steps would you suggest?\r\nHappy to make updates to the PR", "I don't think it makes sense to create an auto-map just for this model, and the pipeline can be done as a [custom pipeline with code on the Hub](https://huggingface.co/docs/transformers/add_new_pipeline#share-your-pipeline-on-the-hub). If/when there are more models associated to this task, we can revisit this approach of course.", "Thanks @NielsRogge , @lewtun & @sgugger !\r\n\r\nI'll update the PR by reverting the autoclass creation and bypass the AutoModel Constructors in `test_onnx_v2.py` & use the LukeForXxx classes in `features.py` directly.\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @kayvane1, feel free to revive this PR :)" ]
1,659
1,665
1,665
NONE
null
# What does this PR do? This PR started out in adding support for Luke in ONNX. To not break existing AutoPatterns in [FeaturesManager](src/transformers/onnx/features.py), [AutoModelForEntityPairClassification](src/transformers/models/auto/modeling_auto.py) has also been added. Additionally, a pipeline for [EntityPairClassification](src/transformers/pipelines/entity_pair_classification.py) has been added to make the task more supported overall by the library. Note: A previous PR (https://github.com/huggingface/transformers/pull/16562) has been closed / not merged for LUKE ONNX support. I believe this PR addresses the remaining comments in that one. All ONNX tests pass - happy to implement any additional comments for the Pipeline / Autoclass. I have only implemented one of the additional Tasks `EntityPairClassification` - if this has been done to the appropriate standard, I can also implement it for the other two remaining Luke Heads which are not currently supported `Span Classification` & `Entity Classification` @NielsRogge - Worked on the original LUKE implementation @lewtun & @michaelbenayoun - Reviewed the previous PR ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? 
<img width="991" alt="Screenshot 2022-08-08 at 13 27 00" src="https://user-images.githubusercontent.com/42403093/183430054-26ee3d97-c9b3-43c8-b844-cde031b263e0.png">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18524/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18524", "html_url": "https://github.com/huggingface/transformers/pull/18524", "diff_url": "https://github.com/huggingface/transformers/pull/18524.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18524.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18523
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18523/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18523/comments
https://api.github.com/repos/huggingface/transformers/issues/18523/events
https://github.com/huggingface/transformers/pull/18523
1,331,881,726
PR_kwDOCUB6oc480JNd
18,523
[VideoMAE] Add model to doc tests
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @NielsRogge , it is this file to add\r\n\r\n```\r\ndocker/transformers-all-latest-gpu/Dockerfile\r\n```", "(The image will only be built tonight)\r\n\r\nIf you want to build it now + even run the doctest to make sure it works, let me now" ]
1,659
1,659
1,659
CONTRIBUTOR
null
# What does this PR do? This PR fixes the fact that VideoMAE supports the doc tests but wasn't actually being run. cc @ydshieh, could you point me to where I need to add `pip install decord` to the setup of the machine that runs the doc tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18523/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18523", "html_url": "https://github.com/huggingface/transformers/pull/18523", "diff_url": "https://github.com/huggingface/transformers/pull/18523.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18523.patch", "merged_at": 1659979732000 }
https://api.github.com/repos/huggingface/transformers/issues/18522
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18522/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18522/comments
https://api.github.com/repos/huggingface/transformers/issues/18522/events
https://github.com/huggingface/transformers/pull/18522
1,331,879,743
PR_kwDOCUB6oc480Ixu
18,522
New cache fixes: add safeguard before looking in folders
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
COLLABORATOR
null
# What does this PR do? This PR adds a few fixes in the new cache functions, mainly to not call `os.listdir` on a folder that does not exist Fixes #18517
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18522/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18522", "html_url": "https://github.com/huggingface/transformers/pull/18522", "diff_url": "https://github.com/huggingface/transformers/pull/18522.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18522.patch", "merged_at": 1659968548000 }
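The safeguard described in the PR summary above boils down to checking that a folder exists before calling `os.listdir` on it. A minimal illustration of the pattern follows; the helper name is hypothetical and not the actual function touched by the PR.

```python
import os
import tempfile

def cached_file_names(cache_dir):
    """Names of files in the cache folder, or an empty list when the folder
    was never created (a bare os.listdir would raise FileNotFoundError)."""
    if not os.path.isdir(cache_dir):
        return []
    return sorted(os.listdir(cache_dir))

# A cache directory that was never created is simply treated as empty.
missing = os.path.join(tempfile.mkdtemp(), "never_created")
print(cached_file_names(missing))  # []
```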
https://api.github.com/repos/huggingface/transformers/issues/18521
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18521/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18521/comments
https://api.github.com/repos/huggingface/transformers/issues/18521/events
https://github.com/huggingface/transformers/pull/18521
1,331,853,920
PR_kwDOCUB6oc480DGh
18,521
update fsdp docs
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
CONTRIBUTOR
null
# What does this PR do? 1. updates FSDP doc to reflect the recently integrated features.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18521/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18521", "html_url": "https://github.com/huggingface/transformers/pull/18521", "diff_url": "https://github.com/huggingface/transformers/pull/18521.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18521.patch", "merged_at": 1659965211000 }
https://api.github.com/repos/huggingface/transformers/issues/18520
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18520/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18520/comments
https://api.github.com/repos/huggingface/transformers/issues/18520/events
https://github.com/huggingface/transformers/pull/18520
1,331,731,498
PR_kwDOCUB6oc48zobp
18,520
Image transforms library
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger @NielsRogge @alaradirik @LysandreJik Adding you all for a first-pass review for the draft ImageProcessor work. This PR is failing because it's not safely importing e.g. `PIL` if it's not available, but the core logic shouldn't change. I'll add you to the follow up PRs too. Note: `ImageProcessor` has only been implemented for the GLPN model so far. ", "_The documentation is not available anymore as the PR was closed or merged._", "@alaradirik @sgugger I've now merged in the stacked PRs above this one. This PR has the transforms library and the image processor for GLPN. Thanks for all of you reviews so far! This should be ready for a final review to make sure all the pieces work together before merging. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@alaradirik @NielsRogge Could you (re-)review? ", "> I just have a question regarding multi-modal models such as CLIP and OWL-ViT. These models have both feature extractors and processors, which call their respective tokenizer and feature extractor. Wouldn't creating XXModelProcessor aliases for their feature extractors create issues?\r\n\r\n@alaradirik I believe this should be OK, as the feature extractors are being mapped to `XxxImageProcessor` rather than `XxxProcessor`, so there's no clash of names. Not sure if this answers your question or I've missed the consequence you're asking about. " ]
1,659
1,665
1,665
COLLABORATOR
null
# What does this PR do? This is the first of a series of PRs to replace feature extractors with image processors for vision models. Create a new module `image_transforms.py` that will contain functions for transforming images e.g. `resize`. The functions are designed to: * Accept numpy arrays. * Return numpy arrays (except for e.g. `to_pil_image`) * Provide logic such that the new image processors produce the same outputs as feature extractors when called directly. Subsequent PRs: * Image Processor Mixin: https://github.com/amyeroberts/transformers/pull/25 * GLPNImageProcessor: https://github.com/amyeroberts/transformers/pull/23 * GLPNFeatureExtractor -> GLPNImageProcessor alias https://github.com/amyeroberts/transformers/pull/24 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18520/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18520", "html_url": "https://github.com/huggingface/transformers/pull/18520", "diff_url": "https://github.com/huggingface/transformers/pull/18520.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18520.patch", "merged_at": 1665595922000 }
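The contract listed in the PR body above (accept numpy arrays, return numpy arrays) can be illustrated with a toy nearest-neighbour resize. This is only a sketch of the design constraint, not the actual `resize` in `image_transforms.py`, which uses PIL for interpolation.

```python
import numpy as np

def resize_nearest(image: np.ndarray, size: tuple) -> np.ndarray:
    """Nearest-neighbour resize of an (H, W, C) image: numpy in, numpy out."""
    height, width = image.shape[:2]
    new_h, new_w = size
    rows = np.arange(new_h) * height // new_h  # source row per output row
    cols = np.arange(new_w) * width // new_w   # source column per output column
    return image[rows][:, cols]

image = np.arange(16, dtype=np.uint8).reshape(4, 4, 1)
resized = resize_nearest(image, (2, 2))
print(resized.shape)  # (2, 2, 1)
```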
https://api.github.com/repos/huggingface/transformers/issues/18519
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18519/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18519/comments
https://api.github.com/repos/huggingface/transformers/issues/18519/events
https://github.com/huggingface/transformers/pull/18519
1,331,672,790
PR_kwDOCUB6oc48zbe0
18,519
Add seed setting to image classification example
{ "login": "regisss", "id": 15324346, "node_id": "MDQ6VXNlcjE1MzI0MzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/regisss", "html_url": "https://github.com/regisss", "followers_url": "https://api.github.com/users/regisss/followers", "following_url": "https://api.github.com/users/regisss/following{/other_user}", "gists_url": "https://api.github.com/users/regisss/gists{/gist_id}", "starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/regisss/subscriptions", "organizations_url": "https://api.github.com/users/regisss/orgs", "repos_url": "https://api.github.com/users/regisss/repos", "events_url": "https://api.github.com/users/regisss/events{/privacy}", "received_events_url": "https://api.github.com/users/regisss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds seed setting in the image classification example. Without it, runs are not reproducible because the seed is not set before model initialization (one can easily check this behavior by running the command given in the README twice). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18519/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18519", "html_url": "https://github.com/huggingface/transformers/pull/18519", "diff_url": "https://github.com/huggingface/transformers/pull/18519.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18519.patch", "merged_at": 1659960492000 }
https://api.github.com/repos/huggingface/transformers/issues/18518
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18518/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18518/comments
https://api.github.com/repos/huggingface/transformers/issues/18518/events
https://github.com/huggingface/transformers/issues/18518
1,331,668,263
I_kwDOCUB6oc5PX6Un
18,518
onnx run error at translation model
{ "login": "xyx361100238", "id": 19569322, "node_id": "MDQ6VXNlcjE5NTY5MzIy", "avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xyx361100238", "html_url": "https://github.com/xyx361100238", "followers_url": "https://api.github.com/users/xyx361100238/followers", "following_url": "https://api.github.com/users/xyx361100238/following{/other_user}", "gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}", "starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions", "organizations_url": "https://api.github.com/users/xyx361100238/orgs", "repos_url": "https://api.github.com/users/xyx361100238/repos", "events_url": "https://api.github.com/users/xyx361100238/events{/privacy}", "received_events_url": "https://api.github.com/users/xyx361100238/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @regisss @JingyaHuang @michaelbenayoun, do you have ideas about what might be happening there? I never used onnxruntime's `InferenceSession`.", "@xyx361100238 The error message says that the model requires 4 inputs but you are providing only 2 of them. Either you need to provide the missing inputs, or you need to modify the `OnnxConfig` associated to your model to specify only 2 inputs.\r\n\r\nThe architecture of *opus-mt-en-zh* seems to be *MarianMTModel*. According to what I see in the `OnnxConfig` [here](https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/marian/configuration_marian.py#L176), the 4 expected inputs are:\r\n- `input_ids`,\r\n- `attention_mask`,\r\n- `decoder_input_ids`,\r\n- `decoder_attention_mask`.\r\n\r\nSo I think you are only providing `input_ids` and `attention_mask` here. To generate the missing inputs, you can take a look at [how dummy inputs used for exporting the model are generated](https://github.com/huggingface/transformers/blob/8cb5ecd912e09301be126c6ce6e9a22ca7153da4/src/transformers/models/marian/configuration_marian.py#L233).", "Thanks for your reply!\r\nI'm sorry,still understand to generate decoder_input_ids & decoder_attention_mask,could you please give a example,\r\nor onnxruntime example with marian model!", "So you also need to provide the inputs for the decoder side, something along the lines:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nfrom onnxruntime import InferenceSession\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(\"opus-mt-en-zh\")\r\nsession = InferenceSession(\"opus-mt-en-zh-onnx-301/model.onnx\")\r\ninputs = tokenizer(\"Using DistilBERT with ONNX Runtime!\", return_tensors=\"pt\")\r\ninputs[\"decoder_input_ids\"] = torch.tensor([tokenizer.bos_token_id], dtype=torch.long)\r\ninputs[\"decoder_attention_mask\"] = torch.tensor([1], dtype=torch.long)\r\noutputs = 
session.run(output_names=[\"last_hidden_state\"], input_feed=inputs)\r\n```\r\n\r\nWhat is true is that it would be easier if the `decoder_attention_mask` was automatically generated, but we you currently need to provide it manually.", "got error:\r\n![image](https://user-images.githubusercontent.com/19569322/183870045-562624c5-ee9c-400f-95a8-cafcf969cdc9.png)\r\n", "Basically, if you want to your ONNX model to predict the next token, provide the start of sentence token as first token, maybe you do not have `tokenizer.bos_token_id` but you know the value? Or maybe you do not have a start of sentence token?\r\nHow are you running things on the `transformers` side?", "![image](https://user-images.githubusercontent.com/19569322/183874977-e7dbe217-861d-4a94-b05b-abbc58f188ad.png)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Has this problem been solved?", "Not Yet!", "From the [marian tokenizer](https://github.com/huggingface/transformers/blob/e342ac7e0390d157e8c14a8a7c5cf2e715341cee/src/transformers/models/marian/tokenization_marian.py#L146), the bos_token_id is not initialized. Instead it recommends using the decoder_start_token_id from the config. 
For this model, the decoder_start_token_id is [65000](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh/blob/main/config.json#L24).\r\n\r\nExample:\r\n```\r\nsession = InferenceSession(\"opus-mt-en-zh-onnx-301/model.onnx\")\r\ninputs = tokenizer(\"Using DistilBERT with ONNX Runtime!\", return_tensors=\"np\")\r\ninputs[\"decoder_input_ids\"] = np.array([[65000]])\r\ninputs[\"decoder_attention_mask\"] = np.array([[1]])\r\noutputs = session.run(None, input_feed=dict(inputs))\r\n```", "Alternatively, I found that the optimum library makes working with seq2seq models in ONNX much easier.\r\n\r\n```\r\nfrom transformers import AutoTokenizer, pipeline\r\nfrom optimum.onnxruntime import ORTModelForSeq2SeqLM\r\n\r\nmodel_path = \"Helsinki-NLP/opus-mt-en-zh\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_path)\r\nmodel = ORTModelForSeq2SeqLM.from_pretrained(model_path, from_transformers=True)\r\nonnx_translation = pipeline(\"translation\", model=model, tokenizer=tokenizer)\r\n\r\npred = onnx_translation(\"Hello\")\r\n```" ]
1,659
1,669
1,662
NONE
null
### System Info - `transformers` version: 4.17.0 - Platform: Linux-5.4.0-122-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.11 - PyTorch version (GPU?): 1.8.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1、convert model to onnx: python3 -m transformers.onnx --model opus-mt-en-zh --atol=2e-04 --feature=seq2seq-lm opus-mt-en-zh-onnx-301 tips: Validating ONNX model... -[✓] ONNX model output names match reference model ({'logits'}) - Validating ONNX Model output "logits": -[✓] (2, 8, 65001) matches (2, 8, 65001) -[✓] all values close (atol: 0.0002) All good, model saved at: opus-mt-en-zh-onnx-301/model.onnx 2、translation: ```py from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from onnxruntime import InferenceSession tokenizer=AutoTokenizer.from_pretrained("opus-mt-en-zh") session = InferenceSession("opus-mt-en-zh-onnx-301/model.onnx") inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="pt") outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` tips: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xieyouxi/anaconda3/envs/HuggingFace-torch-gpu/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 196, in run raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs)) ValueError: Model requires 4 inputs. 
Input Feed contains 2 ``` ### Expected behavior Unable to translate from en to zh, Am I using the wrong interface?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18518/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18517
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18517/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18517/comments
https://api.github.com/repos/huggingface/transformers/issues/18517/events
https://github.com/huggingface/transformers/issues/18517
1,331,655,054
I_kwDOCUB6oc5PX3GO
18,517
layoutlmv3 processor
{ "login": "rihabfounoun", "id": 74436019, "node_id": "MDQ6VXNlcjc0NDM2MDE5", "avatar_url": "https://avatars.githubusercontent.com/u/74436019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rihabfounoun", "html_url": "https://github.com/rihabfounoun", "followers_url": "https://api.github.com/users/rihabfounoun/followers", "following_url": "https://api.github.com/users/rihabfounoun/following{/other_user}", "gists_url": "https://api.github.com/users/rihabfounoun/gists{/gist_id}", "starred_url": "https://api.github.com/users/rihabfounoun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rihabfounoun/subscriptions", "organizations_url": "https://api.github.com/users/rihabfounoun/orgs", "repos_url": "https://api.github.com/users/rihabfounoun/repos", "events_url": "https://api.github.com/users/rihabfounoun/events{/privacy}", "received_events_url": "https://api.github.com/users/rihabfounoun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "Is that the entire stacktrace, @founou-rihab ?" ]
1,659
1,659
1,659
NONE
null
### System Info ```shell The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`. There was a problem when trying to move your cache: File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1551, in <module> move_cache() File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1491, in move_cache cached_files = get_all_cached_files(cache_dir=cache_dir) File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1397, in get_all_cached_files for file in os.listdir(cache_dir): ``` ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`. There was a problem when trying to move your cache: File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1551, in <module> move_cache() File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1491, in move_cache cached_files = get_all_cached_files(cache_dir=cache_dir) File "/opt/conda/lib/python3.7/site-packages/transformers/utils/hub.py", line 1397, in get_all_cached_files for file in os.listdir(cache_dir): ### Expected behavior ```shell install the processor ``` ### Checklist - [X] I have read the migration guide in the readme. 
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18517/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18516
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18516/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18516/comments
https://api.github.com/repos/huggingface/transformers/issues/18516/events
https://github.com/huggingface/transformers/pull/18516
1,331,618,844
PR_kwDOCUB6oc48zPyt
18,516
[DX fix] Fixing QA pipeline streaming a dataset.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
CONTRIBUTOR
null
# What does this PR do? Linked to https://github.com/huggingface/transformers/issues/18510 Enabling nicer code. The dataset example of the docs : https://huggingface.co/docs/transformers/pipeline_tutorial#audio-pipeline Wouldn't work as nicely on QA because of `QuestionAnsweringArgumentHandler`. This handler is legacy and would iterate over the whole dataset effectively killing all properties of the pipeline. This restores nice properties when using `Dataset` or `Generator` since those are meant to be consumed lazily. It means that neither `Dataset` nor `Generator` can contain odd input shapes like List of questions and single context, or lists of questions and list of contexts, but in general that should be OK since it is not advertised as working anywhere. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? 
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18516/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18516", "html_url": "https://github.com/huggingface/transformers/pull/18516", "diff_url": "https://github.com/huggingface/transformers/pull/18516.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18516.patch", "merged_at": 1659961557000 }
https://api.github.com/repos/huggingface/transformers/issues/18515
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18515/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18515/comments
https://api.github.com/repos/huggingface/transformers/issues/18515/events
https://github.com/huggingface/transformers/pull/18515
1,331,540,406
PR_kwDOCUB6oc48y-9C
18,515
Adds CLIP to models exportable with ONNX
{ "login": "unography", "id": 5240449, "node_id": "MDQ6VXNlcjUyNDA0NDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unography", "html_url": "https://github.com/unography", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "organizations_url": "https://api.github.com/users/unography/orgs", "repos_url": "https://api.github.com/users/unography/repos", "events_url": "https://api.github.com/users/unography/events{/privacy}", "received_events_url": "https://api.github.com/users/unography/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, @unography. Could you give us a more detailed traceback, please?\r\n\r\nIt's hard to say without the script command and the full traceback.", "> Hi, @unography. Could you give us a more detailed traceback, please?\r\n> \r\n> It's hard to say without the script command and the full traceback.\r\n\r\nthis is the full traceback - \r\n\r\n```\r\n(transformers) ➜ transformers git:(main) python -m transformers.onnx --model=openai/clip-vit-base-patch32 onnx/\r\nvocab_file vocab.json\r\nmerges_file merges.txt\r\ntokenizer_file tokenizer.json\r\nadded_tokens_file added_tokens.json\r\nspecial_tokens_map_file special_tokens_map.json\r\ntokenizer_config_file tokenizer_config.json\r\nUsing framework PyTorch: 1.13.0.dev20220806\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:680: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. 
In any other case, this might cause the trace to be incorrect.\r\n mask.fill_(torch.tensor(torch.finfo(dtype).min))\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. 
If indices include negative values, the exported graph will produce incorrect results.\r\n warnings.warn(\r\nValidating ONNX model...\r\nTraceback (most recent call last):\r\n File \"/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py\", line 107, in <module>\r\n main()\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py\", line 100, in main\r\n validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py\", line 405, in validate_model_outputs\r\n onnx_outputs = session.run(onnx_named_outputs, onnx_inputs)\r\n File \"/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 200, in run\r\n return self._sess.run(output_names, input_feed, run_options)\r\nonnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(int64))\r\n```", "@unography You need to pass the ONNX inputs in the same order as they are registered in the `forward` method of the model. 
We can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/models/clip/modeling_clip.py#L982) that `pixel_values` comes before `attention_mask`, so in the ONNX config you must return:\r\n```\r\nOrderedDict(\r\n [\r\n (\"input_ids\", {0: \"batch\", 1: \"sequence\"}),\r\n (\"pixel_values\", {0: \"batch\"}),\r\n (\"attention_mask\", {0: \"batch\", 1: \"sequence\"}),\r\n ]\r\n)\r\n```\r\nNote that it is an `OrderedDict` so the order matters :)\r\n\r\nTo explain a bit the error message, what was happening is that it expected the second input to be `int64` since that is how you defined it in the ONNX config. But it actually got a float tensor because `pixel_values` is passed before `attention_mask` in the `forward` method.", "> @unography You need to pass the ONNX inputs in the same order as they are registered in the `forward` method of the model. We can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/models/clip/modeling_clip.py#L982) that `pixel_values` comes before `attention_mask`, so in the ONNX config you must return:\r\n> \r\n> ```\r\n> OrderedDict(\r\n> [\r\n> (\"input_ids\", {0: \"batch\", 1: \"sequence\"}),\r\n> (\"pixel_values\", {0: \"batch\"}),\r\n> (\"attention_mask\", {0: \"batch\", 1: \"sequence\"}),\r\n> ]\r\n> )\r\n> ```\r\n> \r\n> Note that it is an `OrderedDict` so the order matters :)\r\n> \r\n> To explain a bit the error message, what was happening is that it expected the second input to be `int64` since that is how you defined it in the ONNX config. But it actually got a float tensor because `pixel_values` is passed before `attention_mask` in the `forward` method.\r\n\r\nah yes, understood. 
able to resolve this issue, getting an error on the output values now\r\n\r\n```\r\n(transformers) ➜ transformers git:(main) python -m transformers.onnx --model=openai/clip-vit-base-patch32 onnx/\r\nvocab_file vocab.json\r\nmerges_file merges.txt\r\ntokenizer_file tokenizer.json\r\nadded_tokens_file added_tokens.json\r\nspecial_tokens_map_file special_tokens_map.json\r\ntokenizer_config_file tokenizer_config.json\r\nUsing framework PyTorch: 1.13.0.dev20220806\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:680: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n mask.fill_(torch.tensor(torch.finfo(dtype).min))\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.\r\n warnings.warn(\r\nValidating ONNX model...\r\nTraceback (most recent call last):\r\n File \"/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py\", line 107, in <module>\r\n main()\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py\", line 100, in main\r\n validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py\", line 405, in validate_model_outputs\r\n onnx_outputs = session.run(onnx_named_outputs, onnx_inputs)\r\n File 
\"/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 200, in run\r\n return self._sess.run(output_names, input_feed, run_options)\r\nonnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(int64))\r\n(transformers) ➜ transformers git:(clip_onnx) ✗ xx\r\n(transformers) ➜ transformers git:(clip_onnx) ✗ python -m transformers.onnx --model=openai/clip-vit-base-patch32 onnx/\r\nvocab_file vocab.json\r\nmerges_file merges.txt\r\ntokenizer_file tokenizer.json\r\nadded_tokens_file added_tokens.json\r\nspecial_tokens_map_file special_tokens_map.json\r\ntokenizer_config_file tokenizer_config.json\r\nUsing framework PyTorch: 1.13.0.dev20220806\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:680: TracerWarning: torch.tensor results are registered as constants in the trace. 
You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n mask.fill_(torch.tensor(torch.finfo(dtype).min))\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. 
If indices include negative values, the exported graph will produce incorrect results.\r\n warnings.warn(\r\nValidating ONNX model...\r\n -[x] ONNX model output names {'last_hidden_state'} do not match reference model {'text_embeds', 'logits_per_image', 'text_model_output', 'logits_per_text', 'image_embeds', 'vision_model_output'}\r\nTraceback (most recent call last):\r\n File \"/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py\", line 107, in <module>\r\n main()\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py\", line 100, in main\r\n validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)\r\n File \"/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py\", line 414, in validate_model_outputs\r\n raise ValueError(\r\nValueError: Outputs doesn't match between reference model and ONNX exported model: {'last_hidden_state'}\r\n```\r\n\r\nI'm guessing in the onnx config I have to define a separate function for outputs as well?", "_The documentation is not available anymore as the PR was closed or merged._", "@unography Yes, you have to define outputs the same way you did for inputs. Inputs are not defined in the parent class because they usually vary from one model to another, which is why it is mandatory to define them for each model. However, some default outputs are already defined depending on the tasks. For the `default` task, you can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/onnx/config.py#L78) that it expects `last_hiddent_state` as the output, which is not returned by CLIP. 
So you can override this to specify the outputs you want.\r\n\r\nNot sure though which outputs we would like to have here. Maybe `text_embeds` and `image_embeds` since this is basically feature extraction?", "> @unography Yes, you have to define outputs the same way you did for inputs. Inputs are not defined in the parent class because they usually vary from one model to another, which is why it is mandatory to define them for each model. However, some default outputs are already defined depending on the tasks. For the `default` task, you can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/onnx/config.py#L78) that it expects `last_hiddent_state` as the output, which is not returned by CLIP. So you can override this to specify the outputs you want.\r\n> \r\n> Not sure though which outputs we would like to have here. Maybe `text_embeds` and `image_embeds` since this is basically feature extraction?\r\n\r\nso if I define in the onnx config, say `text_embeds` and `image_embeds`, but the model is actually returning more outputs, like `vision_model_output`, will these extra outputs create any conflict or will it get handled by the onnxconfig automatically?", "> > @unography Yes, you have to define outputs the same way you did for inputs. Inputs are not defined in the parent class because they usually vary from one model to another, which is why it is mandatory to define them for each model. However, some default outputs are already defined depending on the tasks. For the `default` task, you can see [here](https://github.com/huggingface/transformers/blob/3632531ec60beb03fd3b4f0d30f69853d8bcd5b4/src/transformers/onnx/config.py#L78) that it expects `last_hiddent_state` as the output, which is not returned by CLIP. So you can override this to specify the outputs you want.\r\n> > Not sure though which outputs we would like to have here. 
Maybe `text_embeds` and `image_embeds` since this is basically feature extraction?\r\n> \r\n> so if I define in the onnx config, say `text_embeds` and `image_embeds`, but the model is actually returning more outputs, like `vision_model_output`, will these extra outputs create any conflict or will it get handled by the onnxconfig automatically?\r\n\r\nNot sure about this. I think it should work the same as inputs, i.e. the order will matter and the inputs that are not specified in the config will just be skipped.", "@regisss sure, i'll try it out, thank you so much for your help!", "how do I make the test cases pass? Is it only formatting issues or something else?", "@regisss I made the changes, apart from the code formatting. How do I format my code correctly? And do I need to run `make fix-copies` ?", "> @regisss I made the changes, apart from the code formatting. How do I format my code correctly? And do I need to run `make fix-copies` ?\r\n\r\nI don't think `make fix-copies` is necessary anymore because you already updated the doc.", "@unography Not sure why `modeling_groupvit.py` is still in the changes.\r\n\r\nAlso, can you make sure that the test `pytest tests/onnx/test_onnx_v2.py -v -k \"clip\"` pass?", "@regisss i reverted changes to groupvit, and when I'm running the test (on Colab) the tests are being skipped - \r\n\r\n```\r\npytest tests/onnx/test_onnx_v2.py -v -k \"clip\"\r\n```\r\n\r\n```\r\n============================= test session starts ==============================\r\nplatform linux -- Python 3.7.13, pytest-3.6.4, py-1.11.0, pluggy-0.7.1 -- /usr/bin/python3\r\ncachedir: .pytest_cache\r\nrootdir: /content/transformers, inifile: setup.cfg\r\nplugins: typeguard-2.7.1\r\ncollected 398 items / 396 deselected \r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default <- ../../usr/lib/python3.7/unittest/case.py SKIPPED [ 
50%]\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default <- ../../usr/lib/python3.7/unittest/case.py SKIPPED [100%]\r\n```\r\n\r\nis there some issue with my changes that its skipping these tests?", "> @regisss i reverted changes to groupvit, and when I'm running the test (on Colab) the tests are being skipped -\r\n> \r\n> ```\r\n> pytest tests/onnx/test_onnx_v2.py -v -k \"clip\"\r\n> ```\r\n> \r\n> ```\r\n> ============================= test session starts ==============================\r\n> platform linux -- Python 3.7.13, pytest-3.6.4, py-1.11.0, pluggy-0.7.1 -- /usr/bin/python3\r\n> cachedir: .pytest_cache\r\n> rootdir: /content/transformers, inifile: setup.cfg\r\n> plugins: typeguard-2.7.1\r\n> collected 398 items / 396 deselected \r\n> \r\n> tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default <- ../../usr/lib/python3.7/unittest/case.py SKIPPED [ 50%]\r\n> tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default <- ../../usr/lib/python3.7/unittest/case.py SKIPPED [100%]\r\n> ```\r\n> \r\n> is there some issue with my changes that its skipping these tests?\r\n\r\n@unography My bad I forgot the environment variable in the command I gave you, sorry. 
Here it is:\r\n```bash\r\nRUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -v -k \"clip\"\r\n```\r\n\r\nSome tests are skipped by default because they can take some time to complete, which is why we need to specify this env variable when running them.", "@regisss for some reason on google colab the tests are still being skipped, so I'm not able to test on GPU\r\n\r\nthis is on my local machine - \r\n\r\n```\r\n(transformers) ➜ transformers git:(clip_onnx) RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -v -k \"clip\"\r\n=========================================================================================== test session starts ===========================================================================================\r\nplatform darwin -- Python 3.8.12, pytest-7.1.2, pluggy-1.0.0 -- /Users/dhruv/Documents/code/transformers/.venv/bin/python\r\ncachedir: .pytest_cache\r\nrootdir: /Users/dhruv/Documents/code/transformers, configfile: setup.cfg\r\ncollected 398 items / 396 deselected / 2 selected \r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default PASSED [ 50%]\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default PASSED [100%]\r\n\r\n============================================================================================ warnings summary =============================================================================================\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default\r\n /Users/dhruv/Documents/code/transformers/src/transformers/image_utils.py:223: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use Resampling.BILINEAR instead.\r\n def resize(self, image, size, resample=PIL.Image.BILINEAR, default_to_square=True, max_size=None):\r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default\r\n /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/feature_extraction_clip.py:67: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.\r\n resample=Image.BICUBIC,\r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default\r\n /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:222: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default\r\n /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:262: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default\r\n /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:681: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n mask.fill_(torch.tensor(torch.finfo(dtype).min))\r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default\r\n /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if causal_attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default\r\n /Users/dhruv/Documents/code/transformers/src/transformers/models/clip/modeling_clip.py:239: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_029_clip_default\r\ntests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_029_clip_default\r\n /Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:4592: UserWarning: Exporting aten::index operator of advanced indexing in opset 14 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.\r\n warnings.warn(\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n============================================================================= 2 passed, 396 deselected, 14 warnings in 53.51s =============================================================================\r\n```", "@unography I'm going to checkout your branch and run the test on GPU.", "@regisss do let me know if there are any further changes needed!", "> @regisss do let me know if there are any further changes needed!\r\n\r\n@unography I had to change two small things in `modeling_clip.py` to make the tests pass:\r\n- replace `similarity.T` by `similarity.t()`\r\n- replace `logits_per_text.T` by `logits_per_text.t()`\r\n\r\nIt seems ONNX does not like `.T`, I got the issue mentioned [here](https://github.com/pytorch/pytorch/issues/51183). 
Have you encountered the same issue?", "> > @regisss do let me know if there are any further changes needed!\r\n> \r\n> @unography I had to change two small things in `modeling_clip.py` to make the tests pass:\r\n> \r\n> * replace `similarity.T` by `similarity.t()`\r\n> * replace `logits_per_text.T` by `logits_per_text.t()`\r\n> \r\n> It seems ONNX does not like `.T`, I got the issue mentioned [here](https://github.com/pytorch/pytorch/issues/51183). Have you encountered the same issue?\r\n\r\nOh I think this got fixed in pytorch's latest release. I'll verify this once, and test on older versions of Pytorch as well, and make the change and push", "@regisss `.T` is working for me while using pytorch's nightly release, but it fails on pytorch `1.12.1`, the stable version.\r\nI've made the change to make it `.t()`, this is working in the nightly release version as well", "Thanks @unography, it looks good to me!\r\n\r\nLooking at the failed tests, it seems you need to run `make fix-copies` one more time. Could you do it please?", "@regisss ah, my mistake. pushed. there are now additional changes to owlvit, groupvit and vision_text_dual_encoder, I'm assuming we copy over code to these files from the actual CLIP model?", "> @regisss ah, my mistake. pushed. there are now additional changes to owlvit, groupvit and vision_text_dual_encoder, I'm assuming we copy over code to these files from the actual CLIP model?\r\n\r\nYes that is what happens. Actually you removed those changes because they were among all the formatting changes that you got the first time you ran black, and I told you to remove them, sorry. 
I did not pay attention to those.\r\n\r\nIt looks good to me @unography :)", "Gently pinging @sgugger for approval", "@sgugger sure, removed the comment and pushed, but some tests are failing right now, I can't understand why.", "This is a flaky test, don't worry.\r\nThanks again for your contribution!", "Congrats @unography for this PR!", "Thanks @regisss for all the help!", "Huge contribution! That's awesome!" ]
1,659
1,660
1,660
CONTRIBUTOR
null
This isn't currently working, getting an error while validating the model - ``` onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(int64)) ``` Environment: Pytorch: 1.13.0.dev20220806 onnxruntime: 1.12.0 Would love some guidance here! @ChainYo @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18515/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18515/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18515", "html_url": "https://github.com/huggingface/transformers/pull/18515", "diff_url": "https://github.com/huggingface/transformers/pull/18515.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18515.patch", "merged_at": 1660160852000 }
https://api.github.com/repos/huggingface/transformers/issues/18514
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18514/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18514/comments
https://api.github.com/repos/huggingface/transformers/issues/18514/events
https://github.com/huggingface/transformers/pull/18514
1,331,492,617
PR_kwDOCUB6oc48y0ob
18,514
unpin torch to use 1.12.1
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Let's maybe add a `!=1.12.0` as well. cc @sgugger, who also has a PR open here: https://github.com/huggingface/transformers/pull/17925", "Here are the tests to fix before we can use 1.12.1\r\n\r\n - tests/models/tapas/test_modeling_tf_tapas.py -k \"TFTapasModelTest and test_pt_tf_model_equivalence\"\r\n - tests/onnx/test_onnx_v2.py -k \"StableDropoutTestCase and test_training\"\r\n - tests/pipelines/test_pipelines_table_question_answering.py -k \"TQAPipelineTests and test_slow_tokenizer_sqa_pt\"\r\n - tests/pipelines/test_pipelines_table_question_answering.py -k \"TQAPipelineTests and test_small_model_pt\"\r\n - tests/models/tapas/test_modeling_tapas.py::TapasUtilitiesTest::", "Most tapas tests will likely work given @sgugger's PR above, as it's probably linked to the version of the torch-scatter dependency", "Yes! Thanks, @LysandreJik ", "(guess I can close this PR, and just merge #17925)\r\nI will check `StableDropoutTestCase and test_training` though.", "Can you push necessary changes directly on #17925 (I'm too lazy to check this PR contains the same fixes as this one 😅 ) The branch is `enable_pt12`.", "Close this and work on #17925 instead" ]
1,659
1,662
1,659
COLLABORATOR
null
# What does this PR do? unpin torch to use 1.12.1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18514/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18514", "html_url": "https://github.com/huggingface/transformers/pull/18514", "diff_url": "https://github.com/huggingface/transformers/pull/18514.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18514.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18513
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18513/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18513/comments
https://api.github.com/repos/huggingface/transformers/issues/18513/events
https://github.com/huggingface/transformers/issues/18513
1,331,073,716
I_kwDOCUB6oc5PVpK0
18,513
VisionEncoderDecoderModel gradient checkpointing
{ "login": "metemadi", "id": 4220153, "node_id": "MDQ6VXNlcjQyMjAxNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4220153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/metemadi", "html_url": "https://github.com/metemadi", "followers_url": "https://api.github.com/users/metemadi/followers", "following_url": "https://api.github.com/users/metemadi/following{/other_user}", "gists_url": "https://api.github.com/users/metemadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/metemadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/metemadi/subscriptions", "organizations_url": "https://api.github.com/users/metemadi/orgs", "repos_url": "https://api.github.com/users/metemadi/repos", "events_url": "https://api.github.com/users/metemadi/events{/privacy}", "received_events_url": "https://api.github.com/users/metemadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@NielsRogge, have you seen such examples? :)", "Here's a PR that added gradient checkpointing to T5: https://github.com/huggingface/transformers/pull/11353/files", "Fixed per #18697" ]
1,659
1,661
1,661
NONE
null
### Feature request Would love to be able to use gradient checkpointing on VisionEncoderDecoder model. >>> model.gradient_checkpointing_enable() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1418, in gradient_checkpointing_enable raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.") ValueError: VisionEncoderDecoderModel does not support gradient checkpointing. ### Motivation Gradient checkpointing always helps increase the accessibility of larger models - HuggingFace is awesome!!! ### Your contribution Happy to take a stab at this if someone can point me to a previous example of this working with an EncoderDecoder model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18513/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18512
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18512/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18512/comments
https://api.github.com/repos/huggingface/transformers/issues/18512/events
https://github.com/huggingface/transformers/pull/18512
1,331,069,782
PR_kwDOCUB6oc48xbxs
18,512
Add Spanish translation of converting_tensorflow_models.mdx
{ "login": "donelianc", "id": 7807897, "node_id": "MDQ6VXNlcjc4MDc4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donelianc", "html_url": "https://github.com/donelianc", "followers_url": "https://api.github.com/users/donelianc/followers", "following_url": "https://api.github.com/users/donelianc/following{/other_user}", "gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donelianc/subscriptions", "organizations_url": "https://api.github.com/users/donelianc/orgs", "repos_url": "https://api.github.com/users/donelianc/repos", "events_url": "https://api.github.com/users/donelianc/events{/privacy}", "received_events_url": "https://api.github.com/users/donelianc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hola @omarespejel. This PR is ready for review. Can you help me here? Thx a lot!", "@donelianc muchas gracias for the translation! I added a few comments as a review 🚀.", "@omarespejel suggested changes done 😀", "Muchas gracias @donelianc! Thanks for the translation!\r\n\r\n@sgugger LGTM :)" ]
1,659
1,659
1,659
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add the Spanish translation for `converting_tensorflow_models.mdx` as part of the #15947 issue. Changes include the Spanish version of the original document and the updated `_toctree.yml` file. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? **Task assignment [here](https://github.com/huggingface/transformers/pull/18415#issuecomment-1203391039)**. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18512/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18512", "html_url": "https://github.com/huggingface/transformers/pull/18512", "diff_url": "https://github.com/huggingface/transformers/pull/18512.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18512.patch", "merged_at": 1659988423000 }
https://api.github.com/repos/huggingface/transformers/issues/18511
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18511/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18511/comments
https://api.github.com/repos/huggingface/transformers/issues/18511/events
https://github.com/huggingface/transformers/issues/18511
1,330,953,352
I_kwDOCUB6oc5PVLyI
18,511
FSDP - TypeError: load_state_dict() got an unexpected keyword argument 'strict'
{ "login": "shrinath-suresh", "id": 63862647, "node_id": "MDQ6VXNlcjYzODYyNjQ3", "avatar_url": "https://avatars.githubusercontent.com/u/63862647?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shrinath-suresh", "html_url": "https://github.com/shrinath-suresh", "followers_url": "https://api.github.com/users/shrinath-suresh/followers", "following_url": "https://api.github.com/users/shrinath-suresh/following{/other_user}", "gists_url": "https://api.github.com/users/shrinath-suresh/gists{/gist_id}", "starred_url": "https://api.github.com/users/shrinath-suresh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shrinath-suresh/subscriptions", "organizations_url": "https://api.github.com/users/shrinath-suresh/orgs", "repos_url": "https://api.github.com/users/shrinath-suresh/repos", "events_url": "https://api.github.com/users/shrinath-suresh/events{/privacy}", "received_events_url": "https://api.github.com/users/shrinath-suresh/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[ { "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false } ]
[ "Hello @shrinath-suresh , this issue has to be fixed from PyTorch side. The issue raised with PyTorch has been linked above.", "Also, when using `auto_wrap` please specify either `--fsdp_transformer_layer_cls_to_wrap <value>` or `--fsdp_min_num_params <number>` as part of cmd arguments. This is what enables sharding of parameters, gradients and optimizer state across GPUs so that peak memory usage is further decreased drastically and you get the most out of using FSDP. For more details, please refer https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html and https://pytorch.org/docs/1.12/fsdp.html?highlight=fsdp#module-torch.distributed.fsdp. \r\n\r\n🤗 Trainer FSDP integration doc is being updated to reflect the recent updates in this PR https://github.com/huggingface/transformers/pull/18521. Please refer it for more details.", "Thanks for raising this issue! I responded in PT: https://github.com/pytorch/pytorch/issues/82963. Although, not sure if HF uses nightlies/latest PT or a stable version. If we can't get pytorch updated in HF to include the fix, could we work around this by changing\r\n\r\n```\r\nmodel.load_state_dict(state_dict, strict=False)\r\n```\r\n\r\nto \r\n\r\n```\r\nmodel.load_state_dict(state_dict, False)\r\n```", "@rohan-varma Thank you very much. I applied the fix as given in the screenshot and compiled from source. The model is gettting saved in the fsdp mode.\r\n\r\nAttached image and logs for the same\r\n\r\n![image](https://user-images.githubusercontent.com/63862647/184059491-94326735-b031-44dd-800e-660f5687c9b2.png)\r\n[vit_fsdp_with_fix.txt](https://github.com/huggingface/transformers/files/9305853/vit_fsdp_with_fix.txt)\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This should be fixed in PyTorch nightly now: https://github.com/pytorch/pytorch/pull/83309" ]
1,659
1,665
1,663
NONE
null
### System Info ``` - `transformers` version: 4.22.0.dev0 - Platform: Linux-5.4.0-1072-aws-x86_64-with-debian-buster-sid - Python version: 3.7.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behaviour: 1. Clone transformers - `git clone https://github.com/huggingface/transformers.git` 2. move to transformers folder - `cd transformers` 3. Install from source - `pip install .` 4. Move to image-classification example - `cd examples/pytorch/image-classification` 5. Train the model using fsdp ``` torchrun --nproc_per_node=4 run_image_classification.py --dataset_name beans --output_dir ./beans_outputs/ --remove_unused_columns False --do_train --do_eval --learning_rate 2e-5 --num_train_epochs 5 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --logging_strategy steps --logging_steps 10 --evaluation_strategy epoch --save_strategy epoch --load_best_model_at_end True --save_total_limit 3 --seed 1337 --fsdp "full_shard auto_wrap" ``` ### Expected behavior Model should get finetuned and saved successfully. However, the following error is produced ``` [INFO|trainer.py:1949] 2022-08-07 08:35:00,771 >> Loading best model from ./beans_outputs/checkpoint-165 (score: 0.19044387340545654). 
Traceback (most recent call last): File "run_image_classification.py", line 384, in <module> main() File "run_image_classification.py", line 358, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1509, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1867, in _inner_training_loop self._load_best_model() File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1992, in _load_best_model load_result = model.load_state_dict(state_dict, strict=False) TypeError: load_state_dict() got an unexpected keyword argument 'strict' Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): File "run_image_classification.py", line 384, in <module> File "run_image_classification.py", line 384, in <module> File "run_image_classification.py", line 384, in <module> main()main() File "run_image_classification.py", line 358, in main File "run_image_classification.py", line 358, in main main() File "run_image_classification.py", line 358, in main train_result = trainer.train(resume_from_checkpoint=checkpoint)train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1509, in train File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1509, in train train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1509, in train ignore_keys_for_eval=ignore_keys_for_eval,ignore_keys_for_eval=ignore_keys_for_eval, File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1867, in _inner_training_loop File 
"/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1867, in _inner_training_loop ignore_keys_for_eval=ignore_keys_for_eval, File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1867, in _inner_training_loop self._load_best_model()self._load_best_model() File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1992, in _load_best_model File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1992, in _load_best_model self._load_best_model() File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1992, in _load_best_model load_result = model.load_state_dict(state_dict, strict=False)load_result = model.load_state_dict(state_dict, strict=False) TypeErrorTypeError: : load_state_dict() got an unexpected keyword argument 'strict'load_state_dict() got an unexpected keyword argument 'strict' load_result = model.load_state_dict(state_dict, strict=False) TypeError: load_state_dict() got an unexpected keyword argument 'strict' ``` Full example log - [fsdp_error.txt](https://github.com/huggingface/transformers/files/9276468/fsdp_error.txt) Torch environment details: ``` PyTorch version: 1.12.0+cu102 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.6 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: version 3.22.4 Libc version: glibc-2.10 Python version: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0] (64-bit runtime) Python platform: Linux-5.4.0-1072-aws-x86_64-with-debian-buster-sid Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB GPU 1: Tesla V100-SXM2-16GB GPU 2: Tesla V100-SXM2-16GB GPU 3: Tesla V100-SXM2-16GB Nvidia driver version: 510.47.03 cuDNN version: Probably one of the following: 
/usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1 /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mlflow-torchserve==0.2.0 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.21.6 [pip3] numpydoc==1.1.0 [pip3] pytorch-kfp-components==0.1.0 [pip3] pytorch-lightning==1.6.5 [pip3] pytorch-ranger==0.1.1 [pip3] torch==1.12.0 [pip3] torch-model-archiver==0.6.0 [pip3] torch-optimizer==0.1.0 [pip3] torch-workflow-archiver==0.2.4b20220511 [pip3] torchdata==0.4.0 
[pip3] torchmetrics==0.7.3 [pip3] torchserve==0.6.0 [pip3] torchtext==0.13.0 [pip3] torchvision==0.13.0 [conda] blas 1.0 mkl [conda] mkl 2020.2 256 [conda] mkl-service 2.3.0 py37he8ac12f_0 [conda] mkl_fft 1.2.1 py37h54f3939_0 [conda] mkl_random 1.1.1 py37h0573a6f_0 [conda] mlflow-torchserve 0.2.0 pypi_0 pypi [conda] numpy 1.21.6 pypi_0 pypi [conda] numpydoc 1.1.0 pyhd3eb1b0_1 [conda] pytorch-kfp-components 0.1.0 pypi_0 pypi [conda] pytorch-lightning 1.6.5 pypi_0 pypi [conda] pytorch-ranger 0.1.1 pypi_0 pypi [conda] torch 1.12.0 pypi_0 pypi [conda] torch-model-archiver 0.6.0 pypi_0 pypi [conda] torch-optimizer 0.1.0 pypi_0 pypi [conda] torch-workflow-archiver 0.2.4b20220511 pypi_0 pypi [conda] torchdata 0.4.0 pypi_0 pypi [conda] torchmetrics 0.7.3 pypi_0 pypi [conda] torchserve 0.6.0 pypi_0 pypi [conda] torchtext 0.13.0 pypi_0 pypi [conda] torchvision 0.13.0 pypi_0 pypi ``` the issue seems to be appearing after [this commit ](https://gist.github.com/shrinath-suresh/d613b48791d7fc49b859508ec8676ba1).
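The comments above describe the Trainer failing on `model.load_state_dict(state_dict, strict=False)` because the FSDP wrapper's `load_state_dict` did not accept `strict` as a keyword, and passing it positionally as `model.load_state_dict(state_dict, False)` as a workaround. The signature mismatch can be illustrated in pure Python; the `WrappedModule` class below is a hypothetical stand-in for the wrapper, not the real PyTorch class:

```python
# Schematic illustration of the FSDP load_state_dict signature issue.
# WrappedModule mimics a wrapper that forwards positional arguments but
# does not declare a `strict` keyword parameter.

class WrappedModule:
    def load_state_dict(self, state_dict, *args):
        # `strict` is only reachable positionally here, so the keyword
        # form load_state_dict(sd, strict=False) raises TypeError.
        strict = args[0] if args else True
        return {"loaded": list(state_dict), "strict": strict}

model = WrappedModule()
state_dict = {"weight": [1.0, 2.0]}

# Keyword form fails against this signature:
try:
    model.load_state_dict(state_dict, strict=False)
except TypeError as exc:
    print("keyword call failed:", exc)

# Positional form -- the workaround discussed above -- succeeds:
result = model.load_state_dict(state_dict, False)
print(result)
```

The real fix landed on the PyTorch side (linked above); the positional call is only a stopgap that works with both signatures.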
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18511/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18510
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18510/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18510/comments
https://api.github.com/repos/huggingface/transformers/issues/18510/events
https://github.com/huggingface/transformers/issues/18510
1,330,928,879
I_kwDOCUB6oc5PVFzv
18,510
Tqdm not working with question-answering pipeline
{ "login": "Alex-apostolo", "id": 44866858, "node_id": "MDQ6VXNlcjQ0ODY2ODU4", "avatar_url": "https://avatars.githubusercontent.com/u/44866858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alex-apostolo", "html_url": "https://github.com/Alex-apostolo", "followers_url": "https://api.github.com/users/Alex-apostolo/followers", "following_url": "https://api.github.com/users/Alex-apostolo/following{/other_user}", "gists_url": "https://api.github.com/users/Alex-apostolo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Alex-apostolo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Alex-apostolo/subscriptions", "organizations_url": "https://api.github.com/users/Alex-apostolo/orgs", "repos_url": "https://api.github.com/users/Alex-apostolo/repos", "events_url": "https://api.github.com/users/Alex-apostolo/events{/privacy}", "received_events_url": "https://api.github.com/users/Alex-apostolo/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi there,\r\n\r\nTry this:\r\n\r\n```python\r\nfrom tqdm.auto import tqdm\r\nfrom transformers import pipeline\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('cuad', split='test')\r\ndataset = dataset.remove_columns(['id', 'title', 'answers'])\r\n\r\nbatch_size = 16\r\nnlp = pipeline(\"question-answering\", device=0)\r\n\r\nresults = []\r\nfor i in tqdm(range(0, len(dataset), batch_size)):\r\n results.extend(\r\n nlp(\r\n context=dataset[i:i+batch_size][\"context\"],\r\n question=dataset[i:i+batch_size][\"question\"]\r\n )\r\n )\r\n```", "@nbroad1881 Thanks a bunch that worked! ", "@Alex-apostolo ,\r\n\r\nThe answer from @nbroad1881 will work, however it will not batch anything because the pipeline was not set with batching.\r\n\r\n```python\r\nfrom tqdm.auto import tqdm\r\nfrom transformers import pipeline\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('cuad', split='test')\r\ndataset = dataset.remove_columns(['id', 'title', 'answers'])\r\n\r\nbatch_size = 16\r\nnlp = pipeline(\"question-answering\", device=0, batch_size=batch_size) # <--- small change\r\n\r\nresults = []\r\nfor i in tqdm(range(0, len(dataset), batch_size)):\r\n results.extend(\r\n nlp(\r\n context=dataset[i:i+batch_size][\"context\"],\r\n question=dataset[i:i+batch_size][\"question\"]\r\n )\r\n )\r\n```\r\n\r\nThis should work more as you intend.\r\nKeep in mind that batching will occur on chunks of text, not on the entire question/context. That's a feature since you have more control on the memory + sequence_length of what the model sees. So while you are sending 16 question+context pairs at a time you might get any amount of forward calls depending on the chunking of those pairs (only 1 forward call if the pair is small enough). `max_seq_len`, `doc_stride` and `max_question_len` might have to be adjusted for your dataset+model pair. 
(There are defaults used for squad, but it might impact actual score for your use case).\r\n\r\n\r\nActually the problem is not really the pipeline in general it should work with `tqdm`. The problem is the legacy support for many args that's actually looking a tthe whole dataset to create `SquadExample` out of it. \r\nI am going to look at solutions for this, since consuming the entire dataset before feeding it to the pipeline (+ consuming memory) is not really intended.\r\n\r\n```python\r\nfrom tqdm.auto import tqdm\r\nfrom transformers import pipeline\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"cuad\", split=\"test\")\r\ndataset = dataset.remove_columns([\"id\", \"title\", \"answers\"])\r\n\r\npipe = pipeline(\"question-answering\", device=0, batch_size=1, framework=\"pt\")\r\n\r\n\r\ndef data(dataset):\r\n for item in dataset:\r\n yield {\"question\": item[\"question\"], \"context\": item[\"context\"]}\r\n\r\n\r\nresults = []\r\nfor out in tqdm(pipe(data(dataset)), total=len(dataset)):\r\n pass # print(out)\r\n```\r\nHere is an example that should be working + desirable (it actually works, but is *not* an iterator like intended.\r\n" ]
1,659
1,659
1,659
NONE
null
### System Info Running inside a notebook on Google Colab - `transformers` version: 4.21.1 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @Narsil, @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the below code: from tqdm.auto import tqdm from transformers import pipeline dataset = load_dataset('cuad', split='test') dataset = dataset.remove_columns(['id', 'title', 'answers']) nlp = pipeline("question-answering", device=0, batch_size=64) for answer in tqdm(nlp(dataset)): print(answer) ### Expected behavior Expected to see a progress bar with the time remaining and the it/s but nothing is displayed.
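The fixed-size slicing loop `range(0, len(dataset), batch_size)` used in the answers above can be factored into a small generator. This helper is purely illustrative and not part of `transformers`:

```python
# Illustrative helper: yield fixed-size batches of a sequence by slicing,
# mirroring the range(0, len(dataset), batch_size) loop from the answers.

def batched(seq, batch_size):
    for i in range(0, len(seq), batch_size):
        yield seq[i:i + batch_size]

examples = [{"question": f"q{i}", "context": f"c{i}"} for i in range(5)]
batches = list(batched(examples, 2))
print([len(b) for b in batches])  # → [2, 2, 1]
```

Note the last batch is a remainder; the pipeline handles uneven final batches the same way, since slicing past the end of a sequence is safe in Python.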
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18510/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18509
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18509/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18509/comments
https://api.github.com/repos/huggingface/transformers/issues/18509/events
https://github.com/huggingface/transformers/issues/18509
1,330,895,368
I_kwDOCUB6oc5PU9oI
18,509
How to use run_glue.py with tensorboard?
{ "login": "Daromog", "id": 40415903, "node_id": "MDQ6VXNlcjQwNDE1OTAz", "avatar_url": "https://avatars.githubusercontent.com/u/40415903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Daromog", "html_url": "https://github.com/Daromog", "followers_url": "https://api.github.com/users/Daromog/followers", "following_url": "https://api.github.com/users/Daromog/following{/other_user}", "gists_url": "https://api.github.com/users/Daromog/gists{/gist_id}", "starred_url": "https://api.github.com/users/Daromog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Daromog/subscriptions", "organizations_url": "https://api.github.com/users/Daromog/orgs", "repos_url": "https://api.github.com/users/Daromog/repos", "events_url": "https://api.github.com/users/Daromog/events{/privacy}", "received_events_url": "https://api.github.com/users/Daromog/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Try using `--report_to tensorboard`" ]
1,659
1,660
1,660
NONE
null
### System Info I'm writing a folder path in --logging_dir but nothing is written there. I've tried with --logging_dir foldername --logging_dir pathtofolder But nothing works @LysandreJik @sgugger ### Who can help? @sgugger @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction --logging_dir foldername --logging_dir pathtofolder --logging_strategy steps \ --logging_first_step True \ --logging_steps 5 \ ### Expected behavior Nothing is saved in the folder
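Following the suggestion in the comments, enabling the TensorBoard integration explicitly might look like this on the command line. Model, task, and paths are placeholders; `--report_to` and `--logging_dir` are standard `TrainingArguments` flags:

```shell
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mrpc \
  --do_train \
  --output_dir ./out \
  --report_to tensorboard \
  --logging_dir ./tb_logs \
  --logging_strategy steps \
  --logging_steps 5
```

Without `--report_to tensorboard`, the `--logging_dir` value is ignored unless TensorBoard is auto-detected, which is the likely cause of the empty folder.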
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18509/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18508
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18508/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18508/comments
https://api.github.com/repos/huggingface/transformers/issues/18508/events
https://github.com/huggingface/transformers/issues/18508
1,330,866,608
I_kwDOCUB6oc5PU2mw
18,508
Small typo in docs/README.md
{ "login": "ankrgyl", "id": 565363, "node_id": "MDQ6VXNlcjU2NTM2Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/565363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankrgyl", "html_url": "https://github.com/ankrgyl", "followers_url": "https://api.github.com/users/ankrgyl/followers", "following_url": "https://api.github.com/users/ankrgyl/following{/other_user}", "gists_url": "https://api.github.com/users/ankrgyl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankrgyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankrgyl/subscriptions", "organizations_url": "https://api.github.com/users/ankrgyl/orgs", "repos_url": "https://api.github.com/users/ankrgyl/repos", "events_url": "https://api.github.com/users/ankrgyl/events{/privacy}", "received_events_url": "https://api.github.com/users/ankrgyl/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @ankrgyl! It seems that is correct! Would you like to open a PR to fix the problem? Happy to guide you through it if you need pointers; please ping @sgugger and myself on the PR. Thanks!", "Hi, `CONTRIBUTING.md` also requires a language such as \"en\" to be specified in the doc-builder. @ankrgyl @LysandreJik ", "> Hi, `CONTRIBUTING.md` also requires a language such as \"en\" to be specified in the doc-builder. @ankrgyl @LysandreJik\r\n\r\nThis typo has been fixed, sorry I missed the latest version of `CONTRIBUTING.md`." ]
1,659
1,666
1,659
CONTRIBUTOR
null
### System Info Working on main (commit 9129fd0377e4d46cb2d0ea28dc1eb91a15f65b77). The suggested command: ``` doc-builder build transformers docs/source/ --build_dir ~/tmp/test-build ``` fails with: ``` FileNotFoundError: [Errno 2] No such file or directory: 'docs/source/_toctree.yml' ``` I think is because you have to specify a language (e.g. `en`) while building the docs, e.g. ``` doc-builder build transformers docs/source/en/ --build_dir ~/tmp/test-build ``` I'm happy to contribute a fix. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the following command: ``` doc-builder build transformers docs/source/ --build_dir ~/tmp/test-build ``` ### Expected behavior The command should _not_ error, and instead should generate the expected MDX files.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18508/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18507
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18507/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18507/comments
https://api.github.com/repos/huggingface/transformers/issues/18507/events
https://github.com/huggingface/transformers/issues/18507
1,330,814,711
I_kwDOCUB6oc5PUp73
18,507
https://github.com/huggingface/transformers/blob/f0d496828d3da3bf1e3c8fbed394d7847e839fa6/src/transformers/models/funnel/modeling_funnel.py#L1004
{ "login": "Tharolzakariya", "id": 109880164, "node_id": "U_kgDOBoyjZA", "avatar_url": "https://avatars.githubusercontent.com/u/109880164?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tharolzakariya", "html_url": "https://github.com/Tharolzakariya", "followers_url": "https://api.github.com/users/Tharolzakariya/followers", "following_url": "https://api.github.com/users/Tharolzakariya/following{/other_user}", "gists_url": "https://api.github.com/users/Tharolzakariya/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tharolzakariya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tharolzakariya/subscriptions", "organizations_url": "https://api.github.com/users/Tharolzakariya/orgs", "repos_url": "https://api.github.com/users/Tharolzakariya/repos", "events_url": "https://api.github.com/users/Tharolzakariya/events{/privacy}", "received_events_url": "https://api.github.com/users/Tharolzakariya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This does not look like an issue :) Closing \r\n\r\nFeel free to reopen with the actual question." ]
1,659
1,660
1,660
NONE
null
https://github.com/huggingface/transformers/blob/f0d496828d3da3bf1e3c8fbed394d7847e839fa6/src/transformers/models/funnel/modeling_funnel.py#L1004
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18507/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18506
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18506/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18506/comments
https://api.github.com/repos/huggingface/transformers/issues/18506/events
https://github.com/huggingface/transformers/issues/18506
1,330,732,913
I_kwDOCUB6oc5PUV9x
18,506
bert-large-uncased gives `(1024) must match the size of tensor b (512) at non-singleton dimension 1` error
{ "login": "monk1337", "id": 17107749, "node_id": "MDQ6VXNlcjE3MTA3NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monk1337", "html_url": "https://github.com/monk1337", "followers_url": "https://api.github.com/users/monk1337/followers", "following_url": "https://api.github.com/users/monk1337/following{/other_user}", "gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monk1337/subscriptions", "organizations_url": "https://api.github.com/users/monk1337/orgs", "repos_url": "https://api.github.com/users/monk1337/repos", "events_url": "https://api.github.com/users/monk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/monk1337/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @monk1337 \r\nThe loaded model has a maximum sequence length of 512 tokens.\r\n\r\nIf you use:\r\n`model = BertModel.from_pretrained(\"bert-large-uncased\", max_position_embeddings=1024)`\r\nThe model won't be loaded because the loaded checkpoint also relies on 512 tokens (wrong tensor size).\r\n\r\nIf you set `model.config.max_position_embeddings = 1024` after loading, this has no effect because the model is already loaded with 512 tokens.\r\n\r\nSome models have a `model.resize_position_embeddings(1024)` method (e.g Pegasus) but it is not the case for BERT.\r\nYou have to:\r\n* load the model\r\n* set `model.config.max_position_embeddings = 1024`\r\n* manually resize both `model.embeddings.position_ids` and `model.embeddings.position_embeddings.weight.data` tensors\r\n\r\nNote that the way you resize `model.embeddings.position_embeddings.weight.data` can have a significant effect on the quality of predictions as you add new untrained parameters and vanilla attention has poor extrapolation capabilities.\r\n\r\nIf you don't mind switching to an efficient attention mecanism, you can use my [repo](https://github.com/ccdv-ai/convert_checkpoint_to_lsg) to convert your model and process long sequences while preserving the quality of its predictions.", "@ccdv-ai That's helpful; I was checking the repo; excellent work! It would be great if you could provide a simple working classification example of colab?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,659
1,663
1,663
NONE
null
### System Info Python : python3.6 "transformers_version": "4.18.0" ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to use the bert-large-uncased for long sequence ending, but it's giving the error: Code: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained("bert-large-uncased") text = "Replace me by any text you'd like."*1024 encoded_input = tokenizer(text, truncation=True, max_length=1024, return_tensors='pt') output = model(**encoded_input) It's giving the following error : ~/.local/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 218 if self.position_embedding_type == "absolute": 219 position_embeddings = self.position_embeddings(position_ids) --> 220 embeddings += position_embeddings 221 embeddings = self.LayerNorm(embeddings) 222 embeddings = self.dropout(embeddings) RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 1 I also tried to change the default size of the positional embedding: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained("bert-large-uncased") model.config.max_position_embeddings = 1024 text = "Replace me by any text you'd like."*1024 encoded_input = tokenizer(text, truncation=True, max_length=1024, return_tensors='pt') output = model(**encoded_input) But still the error is persistent, How to use large model for 1024 length sequences? ### Expected behavior Expecting the output of 1024 given the sequence length of 1024
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18506/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18505
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18505/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18505/comments
https://api.github.com/repos/huggingface/transformers/issues/18505/events
https://github.com/huggingface/transformers/issues/18505
1,330,728,188
I_kwDOCUB6oc5PUUz8
18,505
Suppress `reset_parameters` of `torch.nn.Linear,Conv2d...` inside `no_init_weights`
{ "login": "YouJiacheng", "id": 83971976, "node_id": "MDQ6VXNlcjgzOTcxOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/83971976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YouJiacheng", "html_url": "https://github.com/YouJiacheng", "followers_url": "https://api.github.com/users/YouJiacheng/followers", "following_url": "https://api.github.com/users/YouJiacheng/following{/other_user}", "gists_url": "https://api.github.com/users/YouJiacheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/YouJiacheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YouJiacheng/subscriptions", "organizations_url": "https://api.github.com/users/YouJiacheng/orgs", "repos_url": "https://api.github.com/users/YouJiacheng/repos", "events_url": "https://api.github.com/users/YouJiacheng/events{/privacy}", "received_events_url": "https://api.github.com/users/YouJiacheng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@younesbelkada @pacman100 Can I suggest reopening this issue - it's pretty important IMO and it's hitting lots of people.", "May be interesting to you @ArthurZucker ", "Yes! I'll take this one, makes sense 😉 ", "Why not just do\r\n```\r\nwith torch.device('meta'):\r\n model_init()\r\n```\r\n?", "Yes, meta device or faketensor is the correct and recommended choice for deferring/skipping initialization.\n\nhttps://pytorch.org/torchdistx/latest/fake_tensor_and_deferred_init.html\n\nhttps://huggingface.co/blog/accelerate-large-models\n\nBut `with torch.device('meta')` requires pytorch 2.0 IIUC.\n\nhttps://github.com/pytorch/pytorch/issues/97951#issuecomment-1490417946\n\nTBH, I didn't know meta device when I posted this issue (2022). I knew meta device this year.", "Oups sorry for the delay! Have not forgotten about this 😉 ", "> Yes, meta device or faketensor is the correct and recommended choice for deferring/skipping initialization.\r\n\r\nmeta device doesn't solve the problem, because buffers (e.g sin/cos in llama2) don't get initialized in that case.", "> > Yes, meta device or faketensor is the correct and recommended choice for deferring/skipping initialization.\n> \n> meta device doesn't solve the problem, because buffers (e.g sin/cos in llama2) don't get initialized in that case.\n\nYou are right, meta device would skip all initializations, builtin or user-defined, parameters or buffers.\nBut as long as those buffers are not marked as persistent=False, it can be loaded from the checkpoint.\n", "We have a lot of buffers with persistant = False. I answered on the other issue but I’ll most probably go about this with skipping initialization for all layers but the ones that are missing in the checkpoints.\r\n", "Hello, please also see this comment https://github.com/huggingface/transformers/issues/26258#issuecomment-1829210754", "Fixed by #27709 🤗 " ]
1,659
1,702
1,702
CONTRIBUTOR
null
### Feature request `torch.nn.Linear,Conv2d...` will call `self.reset_parameters()` inside their `__init__`. I'd like to make `reset_parameters` be a no-op inside `no_init_weights` context manager. ### Motivation `no_init_weights` is used in `from_pretrained` to speed up loading large models. However, torch-built-in modules like `torch.nn.Linear` are heavily used in models of `transformers`, while its weights initialization cannot be disabled by `no_init_weights`. And in the doc string of `no_init_weights`, it should "globally disable weight initialization". ### Your contribution possible implementation ```python class SupportsResetParameters(Protocol): def reset_parameters(self): ... @contextmanager def no_init(module_classes: Iterable[Type[SupportsResetParameters]]): saved = {m: vars(m).get('reset_parameters') for m in module_classes} def no_op(_): pass for m in saved: m.reset_parameters = no_op # Iterable can only be safely iterated through once try: yield finally: for m, init in saved.items(): del m.reset_parameters if init is not None: m.reset_parameters = init TORCH_BUILT_IN_MODULES = [nn.Linear, nn.Conv2d, ...] @contextmanager def no_init_weights(): """ Context manager to globally disable weight initialization to speed up loading large models. """ global _init_weights saved = _init_weights _init_weights = False try: with no_init(TORCH_BUILT_IN_MODULES): yield finally: _init_weights = saved ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18505/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18504
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18504/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18504/comments
https://api.github.com/repos/huggingface/transformers/issues/18504/events
https://github.com/huggingface/transformers/pull/18504
1,330,725,212
PR_kwDOCUB6oc48wZ0d
18,504
Restore _init_weights value in no_init_weights
{ "login": "YouJiacheng", "id": 83971976, "node_id": "MDQ6VXNlcjgzOTcxOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/83971976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YouJiacheng", "html_url": "https://github.com/YouJiacheng", "followers_url": "https://api.github.com/users/YouJiacheng/followers", "following_url": "https://api.github.com/users/YouJiacheng/following{/other_user}", "gists_url": "https://api.github.com/users/YouJiacheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/YouJiacheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YouJiacheng/subscriptions", "organizations_url": "https://api.github.com/users/YouJiacheng/orgs", "repos_url": "https://api.github.com/users/YouJiacheng/repos", "events_url": "https://api.github.com/users/YouJiacheng/events{/privacy}", "received_events_url": "https://api.github.com/users/YouJiacheng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger has also been contributing extensively to that part of the code and might have something to say!", "Thanks a lot for iterating on this!" ]
1,659
1,660
1,660
CONTRIBUTOR
null
Fend for potential nested use, and match the intuitive expectation for context managers(). In addition, users might modify private no_init_weights as well. @patrickvonplaten @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18504/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18504", "html_url": "https://github.com/huggingface/transformers/pull/18504", "diff_url": "https://github.com/huggingface/transformers/pull/18504.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18504.patch", "merged_at": 1660069411000 }
https://api.github.com/repos/huggingface/transformers/issues/18503
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18503/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18503/comments
https://api.github.com/repos/huggingface/transformers/issues/18503/events
https://github.com/huggingface/transformers/issues/18503
1,330,715,745
I_kwDOCUB6oc5PURxh
18,503
Add Mask2Former
{ "login": "shivalikasingh95", "id": 73357305, "node_id": "MDQ6VXNlcjczMzU3MzA1", "avatar_url": "https://avatars.githubusercontent.com/u/73357305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shivalikasingh95", "html_url": "https://github.com/shivalikasingh95", "followers_url": "https://api.github.com/users/shivalikasingh95/followers", "following_url": "https://api.github.com/users/shivalikasingh95/following{/other_user}", "gists_url": "https://api.github.com/users/shivalikasingh95/gists{/gist_id}", "starred_url": "https://api.github.com/users/shivalikasingh95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shivalikasingh95/subscriptions", "organizations_url": "https://api.github.com/users/shivalikasingh95/orgs", "repos_url": "https://api.github.com/users/shivalikasingh95/repos", "events_url": "https://api.github.com/users/shivalikasingh95/events{/privacy}", "received_events_url": "https://api.github.com/users/shivalikasingh95/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "@NielsRogge I'd like to work on adding this model if no one is working on it yet?", "cc'ing @alaradirik, yes we're planning to add this model. If you're interested in it, feel free to get started with a draft PR. Note that we already have [MaskFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/maskformer) implemented, and I've heard Mask2Former only adds minor modifications.\r\n\r\nCould you give me your email address, such that we can add you on Slack for easier communication?", "Thanks @NielsRogge that would be great! You can use this email (shivalikasingh95@gmail.com) to add me on Slack!\r\n\r\nI'll get started on a draft PR. But, I may need some guidance as this is my first time contributing to transformers.\r\nI'll get started by understanding the [MaskFormer](https://github.com/huggingface/transformers/tree/main/src/transformers/models/maskformer) implementation.", "Hi @NielsRogge just a gentle reminder to add me on slack :)", "Hi, I've pinged someone to add you.", "Hello.\r\nAny updates about the Mask2Former integration? Thanks", "Hi @ArthurOuaknine I'm working on it. Currently there is an open PR on my transformers fork. Will try to close this in next couple of days. ", "Thanks for your work, it will definitely help :) " ]
1,659
1,668
null
CONTRIBUTOR
null
### Model description Mask2Former is a single architecture for panoptic, instance and semantic segmentation. **Mask2Former Paper Abstract**: Image segmentation is about grouping pixels with different semantics, e.g., category or instance membership, where each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K). ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/abs/2112.01527 Github repo (and weights): https://github.com/facebookresearch/Mask2Former
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18503/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/18502
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18502/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18502/comments
https://api.github.com/repos/huggingface/transformers/issues/18502/events
https://github.com/huggingface/transformers/issues/18502
1,330,633,353
I_kwDOCUB6oc5PT9qJ
18,502
The documentation of LongT5 conflicts with its example code for the prefix
{ "login": "GabrielLin", "id": 9721795, "node_id": "MDQ6VXNlcjk3MjE3OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/9721795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GabrielLin", "html_url": "https://github.com/GabrielLin", "followers_url": "https://api.github.com/users/GabrielLin/followers", "following_url": "https://api.github.com/users/GabrielLin/following{/other_user}", "gists_url": "https://api.github.com/users/GabrielLin/gists{/gist_id}", "starred_url": "https://api.github.com/users/GabrielLin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GabrielLin/subscriptions", "organizations_url": "https://api.github.com/users/GabrielLin/orgs", "repos_url": "https://api.github.com/users/GabrielLin/repos", "events_url": "https://api.github.com/users/GabrielLin/events{/privacy}", "received_events_url": "https://api.github.com/users/GabrielLin/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "@stancld @patil-suraj Could you please help to solve this issue and tell me how to set and use special down tasks for LongT5? Thanks.", "Hi @GabrielLin, with LongT5 model no prefix should be added to the input sentence. The doc example seems not to be accurate.", "Hi, @stancld . Thank you for your reply. Could you please indicate how to use `[PegasusForConditionalGeneration]` for different down-tasks and help to fix the example code? I have no ideas.", "> Hi, @stancld . Thank you for your reply. Could you please indicate how to use `[PegasusForConditionalGeneration]` for different down-tasks and help to fix the example code? I have no ideas.\n\nHi, example should have been already fixed by @patrickvonplaten. Fine-tuning on different down-tasks should be pretty standard. There's no prefix, you can thus use same techniques for models like BART, GPT-2, etc. :] However, the final performance is questionable as, AFAIK, only summarization and Q&A has been investigated so far.", "> > Hi, @stancld . Thank you for your reply. Could you please indicate how to use `[PegasusForConditionalGeneration]` for different down-tasks and help to fix the example code? I have no ideas.\r\n> \r\n> Hi, example should have been already fixed by @patrickvonplaten. Fine-tuning on different down-tasks should be pretty standard. There's no prefix, you can thus use same techniques for models like BART, GPT-2, etc. :] However, the final performance is questionable as, AFAIK, only summarization and Q&A has been investigated so far.\r\n\r\nThank you @stancld . Thank @patrickvonplaten . I have one more question. If having the prefix, I consider that different down-tasks can be fine-tuned in the same model. Now, without the prefix, should we use separated model for different down-tasks? Thanks.", "Hey @GabrielLin \r\n\r\nThat depends on how different the use cases are and what your limitations are exactly. In general, I'd say yes you should use different fine-tuned models for different tasks", "@patrickvonplaten Got it. Thanks. This issue has been fixed and closed." ]
1,659
1,662
1,662
NONE
null
### System Info All. ### Who can help? @patrickvonplaten ### Reproduction See https://huggingface.co/docs/transformers/main/en/model_doc/longt5 ### Expected behavior In the above document, it said `Unlike the T5 model, LongT5 does not use a task prefix. Furthermore, it uses a different pre-training objective inspired by the pre-training of [PegasusForConditionalGeneration].`. But in the example code of `LongT5ForConditionalGeneration`, there is a prefix of `summarize: `. I am confused about how to use LongT5 in different down tasks. Could you please help? Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18502/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18501
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18501/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18501/comments
https://api.github.com/repos/huggingface/transformers/issues/18501/events
https://github.com/huggingface/transformers/issues/18501
1,330,586,343
I_kwDOCUB6oc5PTyLn
18,501
Wav2Vec 2.0 model output logits related audio pad?
{ "login": "YooSungHyun", "id": 34292279, "node_id": "MDQ6VXNlcjM0MjkyMjc5", "avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YooSungHyun", "html_url": "https://github.com/YooSungHyun", "followers_url": "https://api.github.com/users/YooSungHyun/followers", "following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}", "gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}", "starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions", "organizations_url": "https://api.github.com/users/YooSungHyun/orgs", "repos_url": "https://api.github.com/users/YooSungHyun/repos", "events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}", "received_events_url": "https://api.github.com/users/YooSungHyun/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "cc @sanchit-gandhi as well", "@patrickvonplaten plz help!", "Hey @YooSungHyun!\r\n\r\nI too have experienced differences in eval WER results by changing my padding strategy. In this case, I changed how I bucketed my inputs from bins of 2s to 1.5s, and got a 0.5% WER improvement when training on LibriSpeech 100h and evaluating on validation.clean. It looks like your example is much more severe!\r\n\r\nTheoretically speaking, padding should not impact the training or evaluation results: the attention mask ensures that padded inputs/labels are not attended to and sets them to a large negative number in the attention scores, so group norm and self-attention operations should be unaffected. However, practically there might be small differences due to numerical precision, especially if the amount of padding is excessive.\r\n\r\nIf padding is having such a large effect on your evaluation results, it might be worthwhile injecting some custom behaviour into the `Trainer`. What you can do is override the `_get_eval_sampler` method to return the `LengthGroupedSampler` instead of the sequential sampler:\r\n\r\n```python\r\nfrom typing import Optional\r\nimport datasets\r\nimport torch\r\nfrom datasets import Dataset\r\nfrom torch.utils.data import SequentialSampler\r\nfrom transformers import Trainer, is_datasets_available\r\nfrom transformers.trainer_pt_utils import LengthGroupedSampler\r\nfrom packaging import version\r\n\r\n\r\nclass CustomTrainer(Trainer):\r\n def _get_eval_sampler(self, eval_dataset: Dataset) -> Optional[torch.utils.data.Sampler]:\r\n if self.args.group_by_length:\r\n # Build the sampler. Adapted from _get_train_sampler\r\n generator = None\r\n if version.parse(torch.__version__) >= version.parse(\"1.6\"):\r\n generator = torch.Generator()\r\n generator.manual_seed(self.args.data_seed)\r\n if is_datasets_available() and isinstance(self.eval_dataset, datasets.Dataset):\r\n lengths = (\r\n eval_dataset[self.args.length_column_name]\r\n if self.args.length_column_name in self.eval_dataset.column_names\r\n else None\r\n )\r\n else:\r\n lengths = None\r\n model_input_name = self.tokenizer.model_input_names[0] if self.tokenizer is not None else None\r\n return LengthGroupedSampler(\r\n self.args.eval_batch_size,\r\n dataset=eval_dataset,\r\n lengths=lengths,\r\n model_input_name=model_input_name,\r\n generator=generator,\r\n )\r\n else:\r\n return SequentialSampler(eval_dataset)\r\n \r\n \r\ntrainer = CustomTrainer(model=model, ...)\r\n```\r\n\r\nLet me know how you get on", "hi, bro! @sanchit-gandhi !\r\n\r\nLol! you make custom trainer???? 😂 Awesome!!\r\n\r\nBut, I have another very easy way solution....kkk!!!\r\n\r\neval and predict loop used SequentialSampler right?\r\nso! i only sorted my datasets.\r\n\r\nlook like this,\r\n<img width=\"560\" alt=\"image\" src=\"https://user-images.githubusercontent.com/34292279/184590907-609f5227-783a-4679-9d9b-fd87658cce1c.png\">\r\n\r\nIf training, group_by_length working. don't sort!\r\nIf eval & predict, group_by_length not working, so sorting and input SequentialSampler -> it works looks like LengthGroupedSampler\r\nSo, i don`t have to override anymore!! 😎\r\n\r\nand, anyway, i think that problem is caused layer normalization. not attention. attention is innocent!\r\nwav2vec 2.0 pre-training have to select group or layer norm. and i debugging already it.\r\nusing pad & not using pad(batch 1)'s normalize output is different and\r\nin case of very long sequence text and very short text (2 batchs), short text's attention output(context vector) is looks like all pad so, model predict empty text ''. so WER metric is high. that is problem🦄", "Hey @YooSungHyun!\r\n\r\nNice, the `.sort()` trick you used is neat! As you said, this is fine for the dev/test datasets where we don't require shuffling, and so a deterministic sorting strategy is entirely valid.\r\n\r\nThere is indeed strange behaviour in the original Wav2Vec2 base checkpoint caused by a bug in the computation of the layer-norm layers: https://github.com/huggingface/transformers/blob/84beb8a49bf137a88d1b29ab3a85ba0a3cd097d5/src/transformers/models/wav2vec2/configuration_wav2vec2.py#L98\r\n\r\nThis was copied one-to-one from the original fariseq implementation! \r\n\r\nYou could try using a checkpoint that uses the 'stable' layer-norm implementation, i.e. one of the large checkpoints: https://huggingface.co/facebook/wav2vec2-large-lv60/blob/main/config.json#L42", "THX @sanchit-gandhi \r\ni'm already use that `do_stable_layer_norm ` that problem raised too. so, i have to sorted eval, test set...😂\r\nand also, wav2vec2-conformer is not supported that param!\r\n\r\ndo you agree pad issue is raised to layer_norm?", "Sure, if you're using Wav2Vec2Conformer then the only configuration is the correct layer-norm implementation. It's hard to know where the issue lies without a reproducible example, could you maybe provide a short code-snippet that I could run to see how you're padding the data? Thanks!", "@sanchit-gandhi THX for reply\r\ni checked wav2vec2-conformer, that is already do_stable_layer_norm like...!\r\n\r\nin case, i just pretrained base model and Wav2Vec2ForCTC finetuning. (do_stable_layer_norm is True, group_by_length True)\r\nand finally when i do predict loop (for model evaluation(testing))\r\nfirst case. eval_set shuffle and eval batch size 2\r\nWER is high\r\nsecond case. eval_set sort and eval batch size 2\r\nWER is lower than first case\r\nthird case. eval_set sort and eval batch size 1\r\nWER is the lowest\r\nfourth case, eval_set sort and eval batch size 1\r\nWER is same that third case.\r\n\r\nso, i think batch and shuffle is affect WER. that is reason to 'padded data is affect to layer normalization'.\r\ndo_stable_layer_norm is not help for this situation i think.\r\n\r\ni used source is https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py\r\nbut dataset is korean audio and korean text\r\n", "Okay interesting - could you check the losses for the four cases - are they the same or do they differ? If they are the same it's a tokenizer issue with padding. Otherwise likely a modelling issue!", "@sanchit-gandhi i`m very busy now, so i will reply this comment as soon as possible bro!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,659
1,666
1,666
NONE
null
### System Info ubuntu 18.04 python 3.6, 3.9 transformers 1.18.0 ### Who can help? @patrickvonplaten, @anton-l ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. dev, test datasets is shuffled 2. eval, predict loop model input is not smart_batching (not support group_by_length) 3. then large batch size input calculated wer is higher than small batch size input 4. if i sorted audio length dev, test datasets, wer compute_metric is faster and not affected by batch size ### Expected behavior this is my real test case sorted case (batch: metric_result) 8: {'test_wer': 0.2266084739113378, 'test_cer': 0.08425300357677845} 4: {'test_wer': 0.22646135739505688, 'test_cer': 0.08419186206474887} 2: {'test_wer': 0.2264123185562966, 'test_cer': 0.08417657668674146} 1: {'test_wer': 0.22646135739505688, 'test_cer': 0.08419186206474887} un sorted case 8: 35% 4: not test 2: 25% 1: {'eval_wer': 0.22646135739505688, 'eval_cer': 0.08419186206474887} maybe, CNN Layer or Group normalization is affect to padded data...? (both config all raised this issue) when i was training, input group_by_length=True, so training is good i think but, eval, test sampler is just sequential sampler, so eval or predict test wer result is some weired
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18501/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18500
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18500/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18500/comments
https://api.github.com/repos/huggingface/transformers/issues/18500/events
https://github.com/huggingface/transformers/pull/18500
1,330,418,791
PR_kwDOCUB6oc48vlcp
18,500
Update to use interlibrary links instead of Markdown
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
MEMBER
null
This PR updates Markdown links to other HF libraries with the doc-builder's interlibrary links instead.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18500/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18500", "html_url": "https://github.com/huggingface/transformers/pull/18500", "diff_url": "https://github.com/huggingface/transformers/pull/18500.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18500.patch", "merged_at": 1659974033000 }
https://api.github.com/repos/huggingface/transformers/issues/18499
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18499/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18499/comments
https://api.github.com/repos/huggingface/transformers/issues/18499/events
https://github.com/huggingface/transformers/pull/18499
1,330,404,863
PR_kwDOCUB6oc48vi-z
18,499
Update feature extractor methods to enable type cast before normalize
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Looks good to me! If the changes per model are small enough, it would probably be best to change them all in the same PR, rather than doing individual ones.\r\n\r\n@sgugger Yep, I completely agree. The changes all together aren't that small, but almost exactly the same across models. Once this is merged in, I'll open a PR for the VideoMAE refactor (https://github.com/amyeroberts/transformers/pull/9/files) as this covers all the changes. Once approved, I'll merge in the other models to the branch, as for re-review of the total PR and then merge all together. " ]
1,659
1,661
1,660
COLLABORATOR
null
# What does this PR do? At the moment, the return type of our feature extractors isn't always as expected or sometimes fails if a `do_xxx` config flag is set to `False`. This PR introduces the necessary changes to the `ImageFeatureExtractionMixin` methods such that we can modify the feature extractor calls to fix this. This is an alternative solution to setting `return_tensors="np"` as default. Each vision model using `ImageFeatureExtractionMixin` has a separate PR adding their necessary modifications and tests. - [ ] [beit](https://github.com/amyeroberts/transformers/pull/12) - [ ] [clip](https://github.com/amyeroberts/transformers/pull/22) - [ ] [convnext](https://github.com/amyeroberts/transformers/pull/13) - [ ] [deit](https://github.com/amyeroberts/transformers/pull/14) - [ ] [detr](https://github.com/amyeroberts/transformers/pull/1) - [ ] [dpt](https://github.com/amyeroberts/transformers/pull/15) - [ ] [flava](https://github.com/amyeroberts/transformers/pull/17) - [ ] [glpn](https://github.com/amyeroberts/transformers/pull/18) - [ ] [imagegpt](https://github.com/amyeroberts/transformers/pull/2) - [ ] [layoutlmv2](https://github.com/amyeroberts/transformers/pull/19) - [ ] [layoutlmv3](https://github.com/amyeroberts/transformers/pull/20) - [ ] [levit](https://github.com/amyeroberts/transformers/pull/3) - [ ] [maskformer](https://github.com/amyeroberts/transformers/pull/4) - [ ] [mobilevit](https://github.com/amyeroberts/transformers/pull/21) - [ ] [owlvit](https://github.com/amyeroberts/transformers/pull/5) - [ ] [perceiver](https://github.com/amyeroberts/transformers/pull/6) - [ ] [poolformer](https://github.com/amyeroberts/transformers/pull/7) - [ ] [segformer](https://github.com/amyeroberts/transformers/pull/8) - [ ] [vilt](https://github.com/amyeroberts/transformers/pull/10) - [ ] [vit](https://github.com/amyeroberts/transformers/pull/16) - [ ] [yolos](https://github.com/amyeroberts/transformers/pull/11) - [ ] 
[videomae](https://github.com/amyeroberts/transformers/pull/9) ## Details At the moment, if `do_normalize=False`, `do_resize=True` and `return_tensors=None` then the output tensors will be a list of `PIL.Image.Image` objects even if the inputs are numpy arrays. If `do_normalize=False` and `return_tensors` is specified (`"pt"`, `"np"`, `"tf"`, `"jax"`) an exception is raised. The main reasons for this are: * `BatchFeature` can't convert `PIL.Image.Image` to the requested tensors. * The necessary conversion of `PIL.Image.Image` -> `np.ndarray` happens within the `normalize` method and the output of `resize` is `PIL.Image.Image`. In order to have the type of the returned `pixel_values` reflect `return_tensors` we need to: * Convert `PIL.Image.Image` objects to numpy arrays before passing to `BatchFeature` * Be able to optionally rescale the inputs in the `normalize` method. If the input to `normalize` is a `PIL.Image.Image` it is converted to a numpy array using `to_numpy_array` which rescales to between [0, 1]. If `do_resize=False` then this rescaling won't happen if the inputs are numpy arrays. The optional flags enable us to preserve the same default behaviour for the `resize` and `normalize` methods whilst modifying the internal logic of the feature extractor call. 
## Checks The model PRs are all cherry picked (file diffs) of `type-cast-before-normalize` The following was run to check the outputs: ``` from dataclasses import dataclass import requests import numpy as np from PIL import Image import pygit2 from transformers import AutoFeatureExtractor @dataclass class FeatureExtractorConfig: model_name: str checkpoint: str return_type: str = "np" feat_name: str = "pixel_values" IMAGE_FEATURE_EXTRACTOR_CONFIGS = [ FeatureExtractorConfig(model_name="clip", checkpoint="openai/clip-vit-base-patch32"), FeatureExtractorConfig(model_name="convnext", checkpoint="facebook/convnext-tiny-224"), FeatureExtractorConfig(model_name="deit", checkpoint="facebook/deit-base-distilled-patch16-224"), FeatureExtractorConfig(model_name="detr", checkpoint="facebook/detr-resnet-50"), FeatureExtractorConfig(model_name="dpt", checkpoint="Intel/dpt-large"), FeatureExtractorConfig(model_name="flava", checkpoint="facebook/flava-full"), FeatureExtractorConfig(model_name="glpn", checkpoint="vinvino02/glpn-kitti"), FeatureExtractorConfig(model_name="imagegpt", checkpoint="openai/imagegpt-small", feat_name='input_ids'), FeatureExtractorConfig(model_name="layoutlmv2", checkpoint="microsoft/layoutlmv2-base-uncased"), FeatureExtractorConfig(model_name="layoutlmv3", checkpoint="microsoft/layoutlmv3-base"), FeatureExtractorConfig(model_name="levit", checkpoint="facebook/levit-128S"), FeatureExtractorConfig(model_name="maskformer", checkpoint="facebook/maskformer-swin-base-ade", return_type="pt"), FeatureExtractorConfig(model_name="mobilevit", checkpoint="apple/mobilevit-small"), FeatureExtractorConfig(model_name="owlvit", checkpoint="google/owlvit-base-patch32"), FeatureExtractorConfig(model_name="perceiver", checkpoint="deepmind/vision-perceiver-fourier"), FeatureExtractorConfig(model_name="poolformer", checkpoint="sail/poolformer_s12"), FeatureExtractorConfig(model_name="segformer", checkpoint="nvidia/mit-b0"), FeatureExtractorConfig(model_name="vilt", 
checkpoint="dandelin/vilt-b32-mlm"), FeatureExtractorConfig(model_name="vit", checkpoint="google/vit-base-patch16-224-in21k"), FeatureExtractorConfig(model_name="yolos", checkpoint="hustvl/yolos-small"), ] VIDEO_FEATURE_EXTRACTOR_CONFIGS = [ FeatureExtractorConfig(model_name="videomae", checkpoint="MCG-NJU/videomae-base"), ] url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) def produce_pixel_value_outputs(): BRANCH = pygit2.Repository('.').head.shorthand def get_processed_outputs(inputs, model_checkpoint, feat_name): feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint) outputs = feature_extractor(inputs, return_tensors=fe_config.return_type)[feat_name] return outputs for fe_config in IMAGE_FEATURE_EXTRACTOR_CONFIGS: print(fe_config.model_name, fe_config.checkpoint) outputs = get_processed_outputs(image, fe_config.checkpoint, fe_config.feat_name) np.save(f"{fe_config.model_name}_{BRANCH.replace('-', '_')}_pixel_values.npy", outputs) for fe_config in VIDEO_FEATURE_EXTRACTOR_CONFIGS: print(fe_config.model_name, fe_config.checkpoint) outputs = get_processed_outputs([[image, image]], fe_config.checkpoint, fe_config.feat_name) np.save(f"{fe_config.model_name}_{BRANCH.replace('-', '_')}_pixel_values.npy", outputs) branch_main = "main" branch_feature = "type-cast-before-normalize" repo = pygit2.Repository('.git') print("\nChecking out main") branch = repo.lookup_branch('main') ref = repo.lookup_reference(branch.name) repo.checkout(ref) produce_pixel_value_outputs() print("\nChecking out type-cast-before-normalize") branch = repo.lookup_branch('type-cast-before-normalize') ref = repo.lookup_reference(branch.name) repo.checkout(ref) produce_pixel_value_outputs() for fe_config in IMAGE_FEATURE_EXTRACTOR_CONFIGS + VIDEO_FEATURE_EXTRACTOR_CONFIGS: model_name = fe_config.model_name try: output_1 = np.load(f"{model_name}_{branch_main}_pixel_values.npy") output_2 = 
np.load(f"{model_name}_{branch_feature.replace('-', '_')}_pixel_values.npy") max_diff = np.amax(np.abs(output_1 - output_2)) print(f"{model_name}: {max_diff:.5f}") except Exception as e: print(f"{model_name} failed check with {e}") ``` Output: ``` clip: 0.00000 convnext: 0.00000 deit: 0.00000 detr: 0.00000 dpt: 0.00000 flava: 0.00000 glpn: 0.00000 imagegpt: 0.00000 layoutlmv2: 0.00000 layoutlmv3: 0.00000 levit: 0.00000 maskformer: 0.00000 mobilevit: 0.00000 owlvit: 0.00000 perceiver: 0.00000 poolformer: 0.00000 segformer: 0.00000 vilt: 0.00000 vit: 0.00000 yolos: 0.00000 videomae: 0.00000 ``` ## Fixes https://github.com/huggingface/transformers/issues/17714 https://github.com/huggingface/transformers/issues/15055 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? (in model PRs)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18499/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18499", "html_url": "https://github.com/huggingface/transformers/pull/18499", "diff_url": "https://github.com/huggingface/transformers/pull/18499.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18499.patch", "merged_at": 1660762628000 }
https://api.github.com/repos/huggingface/transformers/issues/18498
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18498/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18498/comments
https://api.github.com/repos/huggingface/transformers/issues/18498/events
https://github.com/huggingface/transformers/pull/18498
1,330,384,392
PR_kwDOCUB6oc48ve1p
18,498
Add example of multimodal usage to pipeline tutorial
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
MEMBER
null
As suggested by @NielsRogge, this PR adds a small example of how to use a multimodal pipeline (VQA) in the pipeline tutorial. 🙂
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18498/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18498", "html_url": "https://github.com/huggingface/transformers/pull/18498", "diff_url": "https://github.com/huggingface/transformers/pull/18498.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18498.patch", "merged_at": 1659976292000 }
https://api.github.com/repos/huggingface/transformers/issues/18497
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18497/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18497/comments
https://api.github.com/repos/huggingface/transformers/issues/18497/events
https://github.com/huggingface/transformers/pull/18497
1,330,364,505
PR_kwDOCUB6oc48vajV
18,497
Clean up hub
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I still had an error mentioning `ImportError: cannot import name 'cached_path' from 'transformers.utils' `\r\n@ `tf.__version__ == '2.11.0'`. Is this related? What should i do?", "You should stop using that function, as it has been removed from `transformers.utils` in the most recent versions." ]
1,659
1,670
1,659
COLLABORATOR
null
# What does this PR do? This PR removes all uses of the old utils in Transformers to migrate to the new version using `cached_file`, and remove said utils from the Transformers library. It is slightly breaking since we remove objects (in particular `cached_path` is in the main init albeit not documented), but those are all internal tools, so it's okay in my opinion. Only the research examples are left as before.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18497/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18497/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18497", "html_url": "https://github.com/huggingface/transformers/pull/18497", "diff_url": "https://github.com/huggingface/transformers/pull/18497.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18497.patch", "merged_at": 1659962891000 }
https://api.github.com/repos/huggingface/transformers/issues/18496
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18496/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18496/comments
https://api.github.com/repos/huggingface/transformers/issues/18496/events
https://github.com/huggingface/transformers/issues/18496
1,330,235,211
I_kwDOCUB6oc5PScdL
18,496
TF to ONNX export fails with large models
{ "login": "cchan-lm", "id": 88676609, "node_id": "MDQ6VXNlcjg4Njc2NjA5", "avatar_url": "https://avatars.githubusercontent.com/u/88676609?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cchan-lm", "html_url": "https://github.com/cchan-lm", "followers_url": "https://api.github.com/users/cchan-lm/followers", "following_url": "https://api.github.com/users/cchan-lm/following{/other_user}", "gists_url": "https://api.github.com/users/cchan-lm/gists{/gist_id}", "starred_url": "https://api.github.com/users/cchan-lm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cchan-lm/subscriptions", "organizations_url": "https://api.github.com/users/cchan-lm/orgs", "repos_url": "https://api.github.com/users/cchan-lm/repos", "events_url": "https://api.github.com/users/cchan-lm/events{/privacy}", "received_events_url": "https://api.github.com/users/cchan-lm/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "cc @JingyaHuang @michaelbenayoun ", "If there are no onnx-level solutions, it may be due to TF1 code (embeddings) in our models -- see https://github.com/tensorflow/tensorflow/issues/45041\r\n\r\nRewriting embeddings into TF2 code is in our to do list, which may fix this issue.", "TF2ONNX offers the [support for exporting large ONNX](https://github.com/onnx/tensorflow-onnx/blob/v1.12.1/tf2onnx/convert.py#L427) tensors with external files, however by adding the flag to the ONNX exporter of transformers, it doesn't work correctly for the moment:\r\n```\r\n File \"/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/transformers/onnx/convert.py\", line 338, in export\r\n return export_tensorflow(preprocessor, model, config, opset, output, tokenizer=tokenizer)\r\n File \"/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/transformers/onnx/convert.py\", line 265, in export_tensorflow\r\n onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=opset, large_model=True)\r\n File \"/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/convert.py\", line 495, in from_keras\r\n model_proto, external_tensor_storage = _convert_common(\r\n File \"/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/convert.py\", line 165, in _convert_common\r\n g = process_tf_graph(tf_graph, const_node_values=const_node_values,\r\n File \"/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/tfonnx.py\", line 459, in process_tf_graph\r\n main_g, subgraphs = graphs_from_tf(tf_graph, input_names, output_names, shape_override, const_node_values,\r\n File \"/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/tfonnx.py\", line 499, in graphs_from_tf\r\n utils.check_io(input_names, output_names, output_shapes.keys())\r\n File \"/home/ubuntu/anaconda3/envs/venv_onnx_large/lib/python3.9/site-packages/tf2onnx/utils.py\", line 316, in 
check_io\r\n raise ValueError(\"Inputs/Outputs Not Found\")\r\nValueError: Inputs/Outputs Not Found\r\n```\r\nFurther investigation needs to be done from the TensorFlow side. And I will be happy to help with a PR to enable this in transformers' onnx tf exporter once we are sure that the large proto export features work correctly.", "> If there are no onnx-level solutions, it may be due to TF1 code (embeddings) in our models -- see [tensorflow/tensorflow#45041](https://github.com/tensorflow/tensorflow/issues/45041)\r\n> \r\n> Rewriting embeddings into TF2 code is in our to do list, which may fix this issue.\r\n\r\nDidn't know that, ok, it seems that it is not just a problem from the limit of protobuf size then.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,659
1,663
1,663
NONE
null
### System Info - `transformers` version: 4.21.1 - Platform: Linux-4.15.0-187-generic-x86_64-with-debian-buster-sid - Python version: 3.7.5 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run `python -m transformers.onnx --model=gpt2-large --framework=tf onnx/` See error like below: ``` Traceback (most recent call last): File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 221, in from_trackable frozen_graph = from_function(concrete_func, inputs, outputs, large_model) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 280, in from_function raise e File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 273, in from_function frozen_func = convert_variables_to_constants_v2(func, lower_control_flow=False, aggressive_inlining=True) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1156, in convert_variables_to_constants_v2 converted_input_indices) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1082, in _construct_concrete_function new_output_names) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 660, in function_from_graph_def wrapped_import = 
wrap_function(_imports_graph_def, []) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 631, in wrap_function collections={}), File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 1143, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 87, in __call__ return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 93, in wrapped return fn(*args, **kwargs) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 654, in _imports_graph_def importer.import_graph_def(graph_def, name="") File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 552, in new_func return func(*args, **kwargs) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 412, in import_graph_def producer_op_list=producer_op_list) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 501, in _import_graph_def_internal with c_api_util.tf_buffer(graph_def.SerializeToString()) as serialized: ValueError: Message tensorflow.GraphDef exceeds maximum protobuf size of 2GB: 3096993336 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/craig/.pyenv/versions/3.7.5/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/craig/.pyenv/versions/3.7.5/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File 
"/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/transformers/onnx/__main__.py", line 107, in <module> main() File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/transformers/onnx/__main__.py", line 94, in main args.output, File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/transformers/onnx/convert.py", line 338, in export return export_tensorflow(preprocessor, model, config, opset, output, tokenizer=tokenizer) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/transformers/onnx/convert.py", line 265, in export_tensorflow onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=opset) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/convert.py", line 493, in from_keras tf_loader.from_trackable(model, concrete_func, input_names, output_names, large_model) File "/home/craig/.pyenv/versions/tf-hf-test/lib/python3.7/site-packages/tf2onnx/tf_loader.py", line 224, in from_trackable raise ValueError(err_large_model) ValueError: model exceeds maximum protobuf size of 2GB. Try setting large_model. ``` ### Expected behavior Export should still be successful for large TF models. `tf2onnx` expects `large_model` to be passed in should the protobuf exceed 2 GB. Not sure if `tf2onnx` behavior will be changed, but maybe `transformers` can account for this before using `tf2onnx`?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18496/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18495
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18495/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18495/comments
https://api.github.com/repos/huggingface/transformers/issues/18495/events
https://github.com/huggingface/transformers/issues/18495
1,330,187,700
I_kwDOCUB6oc5PSQ20
18,495
TF to ONNX export fails with CLI using example from docs
{ "login": "cchan-lm", "id": 88676609, "node_id": "MDQ6VXNlcjg4Njc2NjA5", "avatar_url": "https://avatars.githubusercontent.com/u/88676609?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cchan-lm", "html_url": "https://github.com/cchan-lm", "followers_url": "https://api.github.com/users/cchan-lm/followers", "following_url": "https://api.github.com/users/cchan-lm/following{/other_user}", "gists_url": "https://api.github.com/users/cchan-lm/gists{/gist_id}", "starred_url": "https://api.github.com/users/cchan-lm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cchan-lm/subscriptions", "organizations_url": "https://api.github.com/users/cchan-lm/orgs", "repos_url": "https://api.github.com/users/cchan-lm/repos", "events_url": "https://api.github.com/users/cchan-lm/events{/privacy}", "received_events_url": "https://api.github.com/users/cchan-lm/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hmmm that's interesting, indeed!\r\n\r\nThe docs should be updated, but it would be nice to also support this out of the box. Would you like to try your hand at a PR? \r\n\r\ncc @lewtun @michaelbenayoun @JingyaHuang for knowledge", "Sure, I can try making a PR for it! Will be doing so from my personal account, @rachthree.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Because PR https://github.com/huggingface/transformers/pull/18615 has been merged, I'm considering this closed." ]
1,659
1,662
1,662
NONE
null
### System Info - `transformers` version: 4.21.1 - Platform: Linux-4.15.0-187-generic-x86_64-with-debian-buster-sid - Python version: 3.7.5 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Save a TF transformers model (from example at https://huggingface.co/docs/transformers/serialization) ``` from transformers import AutoTokenizer, TFAutoModelForSequenceClassification # Load tokenizer and TensorFlow weights from the Hub tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") # Save to disk tokenizer.save_pretrained("local-tf-checkpoint") tf_model.save_pretrained("local-tf-checkpoint") ``` 2. Use CLI to export to ONNX to see failure: `python -m transformers.onnx --model=local-tf-checkpoint onnx/` 3. Use `--framework` to export successfully: `python -m transformers.onnx --model=local-tf-checkpoint --framework=tf onnx/` ### Expected behavior Once the model directory has been provided, the export should know that a TF model is being used. There should be no dependency on PyTorch (there is also no PyTorch in this environment). Instead, I get this error: `RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found.` Either `transformers` should be updated or the docs at https://huggingface.co/docs/transformers/serialization should be updated to say that `--framework=tf` for TensorFlow models is required.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18495/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18494
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18494/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18494/comments
https://api.github.com/repos/huggingface/transformers/issues/18494/events
https://github.com/huggingface/transformers/pull/18494
1,330,137,780
PR_kwDOCUB6oc48upSX
18,494
`pipeline` support for `device="mps"` (or any other string)
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @julien-c . am getting a new error\r\nMy guess this is on the pytorch op.\r\n> NotImplementedError: The operator 'aten::unique_consecutive' is not current implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.\r\n\r\n\r\n\r\nHere is the traceback\r\n\r\n> preds = pipe('i am feeling awesome', ['positive', 'negative'])\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/zero_shot_classification.py\", line 182, in __call__\r\n> return super().__call__(sequences, **kwargs)\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 1071, in __call__\r\n> return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 1093, in run_single\r\n> model_outputs = self.forward(model_inputs, **forward_params)\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 987, in forward\r\n> model_outputs = self._forward(model_inputs, **forward_params)\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/pipelines/zero_shot_classification.py\", line 201, in _forward\r\n> outputs = self.model(**model_inputs)\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1186, in _call_impl\r\n> return forward_call(*input, **kwargs)\r\n> File 
\"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py\", line 1516, in forward\r\n> if len(torch.unique_consecutive(eos_mask.sum(1))) > 1:\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/_jit_internal.py\", line 447, in fn\r\n> return if_false(*args, **kwargs)\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/_jit_internal.py\", line 447, in fn\r\n> return if_false(*args, **kwargs)\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/functional.py\", line 913, in _consecutive_return_output\r\n> output, _, _ = _unique_consecutive_impl(input, return_inverse, return_counts, dim)\r\n> File \"/Volumes/training/torch-gpu/env/lib/python3.8/site-packages/torch/functional.py\", line 830, in _unique_consecutive_impl\r\n> output, inverse_indices, counts = _VF.unique_consecutive( # type: ignore[attr-defined]", "Yes @AsaKal, please comment on https://github.com/pytorch/pytorch/issues/77764", "⚠️ You might have to `os.environ[\"PYTORCH_ENABLE_MPS_FALLBACK\"] = \"1\"` because many operations are still not implemented\r\n\r\nalso cc @pacman100 for visibility" ]
1,659
1,660
1,660
MEMBER
null
No tests yet given we don't officially support `torch==1.12` yet (and M1-based CI is still a work in progress)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18494/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18494/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18494", "html_url": "https://github.com/huggingface/transformers/pull/18494", "diff_url": "https://github.com/huggingface/transformers/pull/18494.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18494.patch", "merged_at": 1660150336000 }
https://api.github.com/repos/huggingface/transformers/issues/18493
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18493/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18493/comments
https://api.github.com/repos/huggingface/transformers/issues/18493/events
https://github.com/huggingface/transformers/pull/18493
1,330,136,063
PR_kwDOCUB6oc48uo6_
18,493
Typo reported by Joel Grus on TWTR
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18493/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18493", "html_url": "https://github.com/huggingface/transformers/pull/18493", "diff_url": "https://github.com/huggingface/transformers/pull/18493.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18493.patch", "merged_at": 1659720579000 }
https://api.github.com/repos/huggingface/transformers/issues/18492
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18492/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18492/comments
https://api.github.com/repos/huggingface/transformers/issues/18492/events
https://github.com/huggingface/transformers/pull/18492
1,330,127,591
PR_kwDOCUB6oc48unGq
18,492
Move cache folder to huggingface/hub for consistency with hf_hub
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
COLLABORATOR
null
# What does this PR do? This PR relocates the cache to just `~/.cache/huggingface/` when no environment variable has been set. Users having pulled between #18348 and now will need to move their cache manually.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18492/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18492", "html_url": "https://github.com/huggingface/transformers/pull/18492", "diff_url": "https://github.com/huggingface/transformers/pull/18492.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18492.patch", "merged_at": 1659719641000 }
https://api.github.com/repos/huggingface/transformers/issues/18491
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18491/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18491/comments
https://api.github.com/repos/huggingface/transformers/issues/18491/events
https://github.com/huggingface/transformers/pull/18491
1,330,124,887
PR_kwDOCUB6oc48umh9
18,491
Update the original mapping in _LazyConfigMapping to fix AutoTokenizer registration
{ "login": "lolipopshock", "id": 22512825, "node_id": "MDQ6VXNlcjIyNTEyODI1", "avatar_url": "https://avatars.githubusercontent.com/u/22512825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lolipopshock", "html_url": "https://github.com/lolipopshock", "followers_url": "https://api.github.com/users/lolipopshock/followers", "following_url": "https://api.github.com/users/lolipopshock/following{/other_user}", "gists_url": "https://api.github.com/users/lolipopshock/gists{/gist_id}", "starred_url": "https://api.github.com/users/lolipopshock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lolipopshock/subscriptions", "organizations_url": "https://api.github.com/users/lolipopshock/orgs", "repos_url": "https://api.github.com/users/lolipopshock/repos", "events_url": "https://api.github.com/users/lolipopshock/events{/privacy}", "received_events_url": "https://api.github.com/users/lolipopshock/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18491). All of your documentation changes will be reflected on that endpoint.", "cc @sgugger ", "Do you have a full example of the error you are reporting I could run? I am unable to reproduce it. Something like the [test](https://github.com/huggingface/transformers/blob/ab2006e3d6db88654526a4169e65d4bfc52da2e3/tests/models/auto/test_tokenization_auto.py#L234) of this feature we could investigate more.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,659
1,663
1,663
NONE
null
# What does this PR do? Currently when we want to register a new config+tokenizer+model, per [the instructions](https://huggingface.co/docs/transformers/model_doc/auto), it seems we should do the following: ``` from transformers import AutoConfig, AutoModel AutoConfig.register("new-model", NewModelConfig) AutoTokenizer.register(NewModelConfig, TokenizerSlow, TokenizerFast) AutoModel.register(NewModelConfig, NewModel) AutoTokenizer.from_pretrained("xxx") # <--- error `Unrecognized configuration class <xxx> to build an AutoTokenizer.` ``` However, there is one potential bug in the current AutoTokenizer registration code: - In https://github.com/huggingface/transformers/blob/280db2e39c1e586389df4e46f2b895fc092911bb/src/transformers/models/auto/tokenization_auto.py#L605, `AutoTokenizer` will `config_class_to_model_type` to determine whether the corresponding config is registered in the input config. - The `config_class_to_model_type` function checks the `CONFIG_MAPPING_NAMES ` to find the newly register config class. https://github.com/huggingface/transformers/blob/280db2e39c1e586389df4e46f2b895fc092911bb/src/transformers/models/auto/configuration_auto.py#L438 - However, according to https://github.com/huggingface/transformers/blob/280db2e39c1e586389df4e46f2b895fc092911bb/src/transformers/models/auto/configuration_auto.py#L781 , after registering a config, the `CONFIG_MAPPING ` only updates the `_extra_content ` but not the original mapping or `CONFIG_MAPPING_NAMES` in this case https://github.com/huggingface/transformers/blob/280db2e39c1e586389df4e46f2b895fc092911bb/src/transformers/models/auto/configuration_auto.py#L492 . 
That is to say, the `config_class_to_model_type` cannot find the newly registered config in this case, and will throw an error `Unrecognized configuration class <xxx> to build an AutoTokenizer.` A temporary local hot fix can be: ``` from transformers import AutoConfig, AutoModel from transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES AutoConfig.register("new-model", NewModelConfig) CONFIG_MAPPING_NAMES["new-model"] = NewModelConfig.__name__ AutoTokenizer.register(NewModelConfig, TokenizerSlow, TokenizerFast) AutoModel.register(NewModelConfig, NewModel) ``` But thought it would be better to fix it upstream. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @n1t0, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18491/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18491", "html_url": "https://github.com/huggingface/transformers/pull/18491", "diff_url": "https://github.com/huggingface/transformers/pull/18491.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18491.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18490
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18490/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18490/comments
https://api.github.com/repos/huggingface/transformers/issues/18490/events
https://github.com/huggingface/transformers/pull/18490
1,330,083,680
PR_kwDOCUB6oc48udvP
18,490
`transformers-cli login` => `huggingface-cli login`
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Failures are due to `ERROR tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py` trying to be run in a non-torch setup and without the appropriate decorator setup; Fine to ignore the failure for me." ]
1,659
1,659
1,659
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18490/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18490", "html_url": "https://github.com/huggingface/transformers/pull/18490", "diff_url": "https://github.com/huggingface/transformers/pull/18490.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18490.patch", "merged_at": 1659771776000 }
https://api.github.com/repos/huggingface/transformers/issues/18489
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18489/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18489/comments
https://api.github.com/repos/huggingface/transformers/issues/18489/events
https://github.com/huggingface/transformers/pull/18489
1,330,063,670
PR_kwDOCUB6oc48uZb5
18,489
Just re-reading the whole doc every couple of months 😬
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18489/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18489", "html_url": "https://github.com/huggingface/transformers/pull/18489", "diff_url": "https://github.com/huggingface/transformers/pull/18489.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18489.patch", "merged_at": 1659771536000 }
https://api.github.com/repos/huggingface/transformers/issues/18488
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18488/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18488/comments
https://api.github.com/repos/huggingface/transformers/issues/18488/events
https://github.com/huggingface/transformers/pull/18488
1,329,934,135
PR_kwDOCUB6oc48t9cO
18,488
Add Donut
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I have implemented a new `DonutSwinModel`, that copies everything of `SwinModel`, except the final layer norm. I've added it in a file called `modeling_donut_swin.py` (and implemented a corresponding `DonutSwinConfig` in `configuration_donut_swin.py`). \r\n\r\nI went with `modeling_donut_swin.py` (and `configuration_donut_swin.py`) in the \"donut\" folder rather than `modeling_donut.py` (and `configuration_donut.py`) since it only implements the model and configuration of the encoder part (Swin Transformer). For the decoder, BART is leveraged. Let me know if this is ok.", "Hi @NielsRogge , do you plan on supporting: `Document Parsing` modality? " ]
1,659
1,660
1,660
CONTRIBUTOR
null
# What does this PR do? This PR adds Donut to the library. Donut is to LayoutLM what T5 is to BERT. :D The model is implemented as an instance of our existing `VisionEncoderDecoderModel`. See also https://github.com/clovaai/donut/issues/10#issue-1324734927 To do: - [x] move repos to appropriate organization in - [x] update niels to appropriate organization in `test_modeling_vision_encoder_decoder.py`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18488/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18488", "html_url": "https://github.com/huggingface/transformers/pull/18488", "diff_url": "https://github.com/huggingface/transformers/pull/18488.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18488.patch", "merged_at": 1660315259000 }
https://api.github.com/repos/huggingface/transformers/issues/18487
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18487/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18487/comments
https://api.github.com/repos/huggingface/transformers/issues/18487/events
https://github.com/huggingface/transformers/pull/18487
1,329,905,157
PR_kwDOCUB6oc48t3EL
18,487
Fix pipeline tests
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Why weren't these run on the original PR ?\r\n\r\nBecause the common test file only contains the tester for each pipeline, not tests of its own, so it is not triggered by the `tests_fetcher`. Will fix in this PR as well.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
COLLABORATOR
null
# What does this PR do? Changes in #18392 broke tests in the pipelines, this PR fixes them.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18487/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18487", "html_url": "https://github.com/huggingface/transformers/pull/18487", "diff_url": "https://github.com/huggingface/transformers/pull/18487.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18487.patch", "merged_at": 1659705291000 }
https://api.github.com/repos/huggingface/transformers/issues/18486
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18486/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18486/comments
https://api.github.com/repos/huggingface/transformers/issues/18486/events
https://github.com/huggingface/transformers/pull/18486
1,329,903,927
PR_kwDOCUB6oc48t2zG
18,486
Change BartLearnedPositionalEmbedding's forward method signature to support Opacus training
{ "login": "donebydan", "id": 15520428, "node_id": "MDQ6VXNlcjE1NTIwNDI4", "avatar_url": "https://avatars.githubusercontent.com/u/15520428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donebydan", "html_url": "https://github.com/donebydan", "followers_url": "https://api.github.com/users/donebydan/followers", "following_url": "https://api.github.com/users/donebydan/following{/other_user}", "gists_url": "https://api.github.com/users/donebydan/gists{/gist_id}", "starred_url": "https://api.github.com/users/donebydan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donebydan/subscriptions", "organizations_url": "https://api.github.com/users/donebydan/orgs", "repos_url": "https://api.github.com/users/donebydan/repos", "events_url": "https://api.github.com/users/donebydan/events{/privacy}", "received_events_url": "https://api.github.com/users/donebydan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "You will also need to apply the same changes to all the models that are touched by the change in embedding (mBART, plBARt etc) to have the tests passing.", "@sgugger I am iterating through. 👍 thanks for the heads up, though!", "Ah, this also looks like it's breaking the conversion to `torch.fx`. Let's see if @michaelbenayoun can think of an easy solution to that.", "Thanks, I was about to ask.. any thoughts? I am not well-versed in torch's symbolic tracer (or FX generally). I'm happy to do the work if you can point me somewhere useful 😄 ", "Current offending line is line 985 in `src/transformers/utils/fx.py` (`HFTracer.trace()`):\r\n`self.graph = super().trace(root, concrete_args=concrete_args)` \r\n\r\nwhere `root` is \r\n```\r\nPLBartModel(\r\n (shared): Embedding(99, 16, padding_idx=1)\r\n (encoder): PLBartEncoder(\r\n (embed_tokens): Embedding(99, 16, padding_idx=1)\r\n (embed_positions): PLBartLearnedPositionalEmbedding(102, 16)\r\n (layers): ModuleList(\r\n (0): PLBartEncoderLayer(\r\n (self_attn): PLBartAttention(\r\n (k_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (v_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (q_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (out_proj): Linear(in_features=16, out_features=16, bias=True)\r\n )\r\n (self_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n (activation_fn): GELUActivation()\r\n (fc1): Linear(in_features=16, out_features=4, bias=True)\r\n (fc2): Linear(in_features=4, out_features=16, bias=True)\r\n (final_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n )\r\n (1): PLBartEncoderLayer(\r\n (self_attn): PLBartAttention(\r\n (k_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (v_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (q_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (out_proj): 
Linear(in_features=16, out_features=16, bias=True)\r\n )\r\n (self_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n (activation_fn): GELUActivation()\r\n (fc1): Linear(in_features=16, out_features=4, bias=True)\r\n (fc2): Linear(in_features=4, out_features=16, bias=True)\r\n (final_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n )\r\n )\r\n (layernorm_embedding): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n )\r\n (decoder): PLBartDecoder(\r\n (embed_tokens): Embedding(99, 16, padding_idx=1)\r\n (embed_positions): PLBartLearnedPositionalEmbedding(102, 16)\r\n (layers): ModuleList(\r\n (0): PLBartDecoderLayer(\r\n (self_attn): PLBartAttention(\r\n (k_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (v_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (q_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (out_proj): Linear(in_features=16, out_features=16, bias=True)\r\n )\r\n (activation_fn): GELUActivation()\r\n (self_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n (encoder_attn): PLBartAttention(\r\n (k_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (v_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (q_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (out_proj): Linear(in_features=16, out_features=16, bias=True)\r\n )\r\n (encoder_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n (fc1): Linear(in_features=16, out_features=4, bias=True)\r\n (fc2): Linear(in_features=4, out_features=16, bias=True)\r\n (final_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n )\r\n (1): PLBartDecoderLayer(\r\n (self_attn): PLBartAttention(\r\n (k_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (v_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (q_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (out_proj): Linear(in_features=16, 
out_features=16, bias=True)\r\n )\r\n (activation_fn): GELUActivation()\r\n (self_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n (encoder_attn): PLBartAttention(\r\n (k_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (v_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (q_proj): Linear(in_features=16, out_features=16, bias=True)\r\n (out_proj): Linear(in_features=16, out_features=16, bias=True)\r\n )\r\n (encoder_attn_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n (fc1): Linear(in_features=16, out_features=4, bias=True)\r\n (fc2): Linear(in_features=4, out_features=16, bias=True)\r\n (final_layer_norm): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n )\r\n )\r\n (layernorm_embedding): LayerNorm((16,), eps=1e-05, elementwise_affine=True)\r\n )\r\n) \r\n```\r\nand `concrete_args` is:\r\n```\r\n{'head_mask': None, 'decoder_head_mask': None, 'cross_attn_head_mask': None, 'encoder_outputs': None, 'past_key_values': None, 'inputs_embeds': None, 'decoder_inputs_embeds': None, 'use_cache': None, 'output_attentions': None, 'output_hidden_states': None, 'return_dict': None}\r\n```", "I will check on Monday and come back to you, it should be easily fixable I think.", "Hey @michaelbenayoun, let me know if you have any thoughts to resolve the tracer issue :)", "@sgugger all resolved now. Would you mind giving the PR another look?", "Thanks a lot for working on this!", "No problem, thank you for your support 👍 " ]
1,659
1,660
1,660
CONTRIBUTOR
null
As outlined in #18425, this PR changes the signature of `BartLearnedPositionalEmbedding`'s forward method signature to take the `input_ids` tensor (and not just its shape). This is needed to enable private training of BART via DP-SGD in Opacus. PR welcomed by @sgugger in linked issue. Fixes #18425.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18486/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18486", "html_url": "https://github.com/huggingface/transformers/pull/18486", "diff_url": "https://github.com/huggingface/transformers/pull/18486.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18486.patch", "merged_at": 1660225504000 }
https://api.github.com/repos/huggingface/transformers/issues/18485
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18485/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18485/comments
https://api.github.com/repos/huggingface/transformers/issues/18485/events
https://github.com/huggingface/transformers/pull/18485
1,329,874,929
PR_kwDOCUB6oc48twdZ
18,485
Remove py.typed
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,659
1,659
COLLABORATOR
null
The experiment was fun (not). As we are getting a rise in issues of users complaining typecheckers are not happy with our annotations, which we have chosen to keep simple for the sake of documentation, this PR removes the `py.typed` file indicating to type-checkers that Transformers is properly typed. It is not, and it won't be in the near future because static type-checking things in Python requires sacrificing too much in term of clarity of code (at least in my opinion). This PR just makes it "official".
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18485/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18485/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18485", "html_url": "https://github.com/huggingface/transformers/pull/18485", "diff_url": "https://github.com/huggingface/transformers/pull/18485.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18485.patch", "merged_at": 1659705140000 }
https://api.github.com/repos/huggingface/transformers/issues/18484
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18484/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18484/comments
https://api.github.com/repos/huggingface/transformers/issues/18484/events
https://github.com/huggingface/transformers/pull/18484
1,329,872,343
PR_kwDOCUB6oc48tv4d
18,484
Update some expected values in `quicktour.mdx` for `resampy 0.3.0`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,662
1,659
COLLABORATOR
null
# What does this PR do? It took me some time to figure out the test failure is due to different `resampy` versions. Some current values work with `0.2.2`, but not with `0.3.0`. [current failed job](https://github.com/huggingface/transformers/runs/7682932969?check_suite_focus=true)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18484/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18484", "html_url": "https://github.com/huggingface/transformers/pull/18484", "diff_url": "https://github.com/huggingface/transformers/pull/18484.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18484.patch", "merged_at": 1659719872000 }
https://api.github.com/repos/huggingface/transformers/issues/18483
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18483/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18483/comments
https://api.github.com/repos/huggingface/transformers/issues/18483/events
https://github.com/huggingface/transformers/issues/18483
1,329,732,057
I_kwDOCUB6oc5PQhnZ
18,483
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): initialization failed
{ "login": "danielbellhv", "id": 84714841, "node_id": "MDQ6VXNlcjg0NzE0ODQx", "avatar_url": "https://avatars.githubusercontent.com/u/84714841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danielbellhv", "html_url": "https://github.com/danielbellhv", "followers_url": "https://api.github.com/users/danielbellhv/followers", "following_url": "https://api.github.com/users/danielbellhv/following{/other_user}", "gists_url": "https://api.github.com/users/danielbellhv/gists{/gist_id}", "starred_url": "https://api.github.com/users/danielbellhv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danielbellhv/subscriptions", "organizations_url": "https://api.github.com/users/danielbellhv/orgs", "repos_url": "https://api.github.com/users/danielbellhv/repos", "events_url": "https://api.github.com/users/danielbellhv/events{/privacy}", "received_events_url": "https://api.github.com/users/danielbellhv/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Changed Kernel: `conda_tensorflow2_p38`" ]
1,659
1,659
1,659
NONE
null
### System Info Goal: Run a **GPT-2** model instance. I am using the latest Tensorflow and Hugging Face 🤗 Transformers. - Tensorflow - 2.9.1 - Transformers - 4.21.1 Notebook: ``` pip install tensorflow ``` ``` pip install transformers ``` ``` from transformers import pipeline, set_seed generator = pipeline('text-generation', model='gpt2') set_seed(42) ``` ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd --------------------------------------------------------------------------- ImportError Traceback (most recent call last) ImportError: numpy.core.multiarray failed to import The above exception was the direct cause of the following exception: SystemError Traceback (most recent call last) SystemError: <built-in method __contains__ of dict object at 0x7f5b58a64d00> returned a result with an error set The above exception was the direct cause of the following exception: ImportError Traceback (most recent call last) ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name) 1001 try: -> 1002 return importlib.import_module("." 
+ module_name, self.__name__) 1003 except Exception as e: ~/anaconda3/envs/python3/lib/python3.8/importlib/__init__.py in import_module(name, package) 126 level += 1 --> 127 return _bootstrap._gcd_import(name[level:], package, level) 128 ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _gcd_import(name, package, level) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _find_and_load(name, import_) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _load_unlocked(spec) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap_external.py in exec_module(self, module) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/pipelines/__init__.py in <module> 36 from ..utils import HUGGINGFACE_CO_RESOLVE_ENDPOINT, http_get, is_tf_available, is_torch_available, logging ---> 37 from .audio_classification import AudioClassificationPipeline 38 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/pipelines/audio_classification.py in <module> 19 from ..utils import add_end_docstrings, is_torch_available, logging ---> 20 from .base import PIPELINE_INIT_ARGS, Pipeline 21 ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/pipelines/base.py in <module> 33 from ..feature_extraction_utils import PreTrainedFeatureExtractor ---> 34 from ..modelcard import ModelCard 35 from ..models.auto.configuration_auto import AutoConfig ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/modelcard.py in <module> 43 ) ---> 44 from .training_args import ParallelMode 45 from .utils import ( ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/training_args.py in <module> 25 from .debug_utils import 
DebugOption ---> 26 from .trainer_utils import ( 27 EvaluationStrategy, ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/trainer_utils.py in <module> 46 if is_tf_available(): ---> 47 import tensorflow as tf 48 ~/anaconda3/envs/python3/lib/python3.8/site-packages/tensorflow/__init__.py in <module> 36 ---> 37 from tensorflow.python.tools import module_util as _module_util 38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader ~/anaconda3/envs/python3/lib/python3.8/site-packages/tensorflow/python/__init__.py in <module> 36 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow ---> 37 from tensorflow.python.eager import context 38 ~/anaconda3/envs/python3/lib/python3.8/site-packages/tensorflow/python/eager/context.py in <module> 34 from tensorflow.python import tf2 ---> 35 from tensorflow.python.client import pywrap_tf_session 36 from tensorflow.python.eager import executor ~/anaconda3/envs/python3/lib/python3.8/site-packages/tensorflow/python/client/pywrap_tf_session.py in <module> 18 from tensorflow.python import pywrap_tensorflow ---> 19 from tensorflow.python.client._pywrap_tf_session import * 20 from tensorflow.python.client._pywrap_tf_session import _TF_SetTarget ImportError: initialization failed The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) /tmp/ipykernel_4924/2487422996.py in <cell line: 1>() ----> 1 from transformers import pipeline, set_seed 2 3 generator = pipeline('text-generation', model='gpt2') 4 set_seed(42) ~/anaconda3/envs/python3/lib/python3.8/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive) ~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/utils/import_utils.py in __getattr__(self, name) 990 value = self._get_module(name) 991 elif name in self._class_to_module.keys(): --> 992 module = self._get_module(self._class_to_module[name]) 993 value = getattr(module, name) 994 else: 
~/anaconda3/envs/python3/lib/python3.8/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name) 1002 return importlib.import_module("." + module_name, self.__name__) 1003 except Exception as e: -> 1004 raise RuntimeError( 1005 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" 1006 f" traceback):\n{e}" RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): initialization failed ``` ``` def query(payload, multiple, min_tokens, max_tokens): nlp_setup() list_dict = generator(payload, min_length=min_tokens, max_new_tokens=max_tokens, num_return_sequences=multiple) return [d['generated_text'].split(payload)[1].strip() for d in list_dict ``` ``` output = query("Banking customer's needs:", 3000, 50, 50) ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. pip install tensorflow 2. pip install transformers 3. from transformers import pipeline, set_seed ### Expected behavior Should import and create instance of generator
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18483/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18482
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18482/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18482/comments
https://api.github.com/repos/huggingface/transformers/issues/18482/events
https://github.com/huggingface/transformers/pull/18482
1,329,654,129
PR_kwDOCUB6oc48tAW0
18,482
Fix `test_dbmdz_english` by updating expected values
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think we can update the results instead yes:\r\n\r\nThe original string didn't have the typo:\r\n\r\nhttps://github.com/huggingface/transformers/issues/5077#issuecomment-656398617\r\n\r\nSo probably my bad in putting it into a test (or I copied from somewhere else, I can't remember)", "(off topic) @Narsil How you are able to find that (very) old comment - I can't even find some comments that are just 2-3 months old 😢 ", "Voilà 🚀 ", "> you\r\n\r\nI copy pasted `Enzo works at the UN` in the GH search bar within `issues` tab.\r\nIt doesn't work all the time, but it does work better than searching in the top left bar.\r\n\r\nGH search is very hit and miss. Really depends how segregating your keywords are. and you HAVE to be word aligned (which is super annoying when looking for function/method names since I usually only know part of the name)" ]
1,659
1,662
1,659
COLLABORATOR
null
# What does this PR do? Fix #18405 I originally stated this is an expected value - it turns out to be an **input sentence**. But probably it is still intentional to have `the the`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18482/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18482", "html_url": "https://github.com/huggingface/transformers/pull/18482", "diff_url": "https://github.com/huggingface/transformers/pull/18482.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18482.patch", "merged_at": 1659710994000 }
https://api.github.com/repos/huggingface/transformers/issues/18481
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18481/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18481/comments
https://api.github.com/repos/huggingface/transformers/issues/18481/events
https://github.com/huggingface/transformers/pull/18481
1,329,570,749
PR_kwDOCUB6oc48sufR
18,481
Add TF prefix to TF-Res test class
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,659
1,662
1,659
COLLABORATOR
null
# What does this PR do? Let's give TF a bit more space.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18481/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 3, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/18481/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18481", "html_url": "https://github.com/huggingface/transformers/pull/18481", "diff_url": "https://github.com/huggingface/transformers/pull/18481.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18481.patch", "merged_at": 1659700795000 }
https://api.github.com/repos/huggingface/transformers/issues/18480
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18480/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18480/comments
https://api.github.com/repos/huggingface/transformers/issues/18480/events
https://github.com/huggingface/transformers/issues/18480
1,329,505,600
I_kwDOCUB6oc5PPqVA
18,480
Not able to use DistilBERT in VisualTextDualEncoder
{ "login": "jagjeetsian", "id": 65284408, "node_id": "MDQ6VXNlcjY1Mjg0NDA4", "avatar_url": "https://avatars.githubusercontent.com/u/65284408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jagjeetsian", "html_url": "https://github.com/jagjeetsian", "followers_url": "https://api.github.com/users/jagjeetsian/followers", "following_url": "https://api.github.com/users/jagjeetsian/following{/other_user}", "gists_url": "https://api.github.com/users/jagjeetsian/gists{/gist_id}", "starred_url": "https://api.github.com/users/jagjeetsian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jagjeetsian/subscriptions", "organizations_url": "https://api.github.com/users/jagjeetsian/orgs", "repos_url": "https://api.github.com/users/jagjeetsian/repos", "events_url": "https://api.github.com/users/jagjeetsian/events{/privacy}", "received_events_url": "https://api.github.com/users/jagjeetsian/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I believe this is because [this line in `modeling_vision_text_dual_encoder.py`](https://github.com/huggingface/transformers/blob/14928921e2f6d5b049d8dcfa07982e9ca351a402/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L368) passes a keyword argument `token_type_ids` to the `text_model` which is not supported by the `distilbert-base-uncased` model.", "Hi,\r\n\r\nThe scripts are meant as examples, and you can easily tweak them for your use case. So it's advised to fork the library and tweak it to your liking.\r\n\r\nIf you can come up with a fix that makes the scrpit more general, feel free to open a PR.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,659
1,663
1,663
NONE
null
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): 2.6.4 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.2 (gpu) - Jax version: 0.3.14 - JaxLib version: 0.3.14 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from PIL import Image import requests from transformers import ( VisionTextDualEncoderModel, VisionTextDualEncoderProcessor, AutoFeatureExtractor, AutoTokenizer, ) tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224") processor = VisionTextDualEncoderProcessor(feature_extractor, tokenizer) model = VisionTextDualEncoderModel.from_vision_text_pretrained( "google/vit-base-patch16-224", "distilbert-base-uncased" ) # contrastive training urls = [ "http://images.cocodataset.org/val2017/000000039769.jpg", "https://farm3.staticflickr.com/2674/5850229113_4fe05d5265_z.jpg", ] images = [Image.open(requests.get(url, stream=True).raw) for url in urls] inputs = processor( text=["a photo of a cat", "a photo of a dog"], images=images, return_tensors="pt", padding=True ) outputs = model( input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, pixel_values=inputs.pixel_values, return_loss=True, ) loss, logits_per_image = outputs.loss, outputs.logits_per_image # this is the image-text similarity score # save and load from pretrained model.save_pretrained("vit-bert") model = VisionTextDualEncoderModel.from_pretrained("vit-bert") # inference outputs = model(**inputs) 
logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` TypeError: forward() got an unexpected keyword argument 'token_type_ids' ### Expected behavior `CLIPOutput' object with these components: ['loss', 'logits_per_image', 'logits_per_text', 'text_embeds', 'image_embeds', 'text_model_output', 'vision_model_output']
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18480/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18479
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18479/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18479/comments
https://api.github.com/repos/huggingface/transformers/issues/18479/events
https://github.com/huggingface/transformers/pull/18479
1,329,406,393
PR_kwDOCUB6oc48sLVn
18,479
join last hidden states of layers for BertForSequenceClassification
{ "login": "dantruonghtno1", "id": 55061612, "node_id": "MDQ6VXNlcjU1MDYxNjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55061612?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dantruonghtno1", "html_url": "https://github.com/dantruonghtno1", "followers_url": "https://api.github.com/users/dantruonghtno1/followers", "following_url": "https://api.github.com/users/dantruonghtno1/following{/other_user}", "gists_url": "https://api.github.com/users/dantruonghtno1/gists{/gist_id}", "starred_url": "https://api.github.com/users/dantruonghtno1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dantruonghtno1/subscriptions", "organizations_url": "https://api.github.com/users/dantruonghtno1/orgs", "repos_url": "https://api.github.com/users/dantruonghtno1/repos", "events_url": "https://api.github.com/users/dantruonghtno1/events{/privacy}", "received_events_url": "https://api.github.com/users/dantruonghtno1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18479). All of your documentation changes will be reflected on that endpoint.", "Hey @dantruonghtno1, was this discussed in an issue somewhere? We're very unlikely to merge this as it seems like a niche use-case that adds a number of statements to the code for that specific use-case which could be handled outside of the model." ]
1,659
1,663
1,663
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18479/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18479", "html_url": "https://github.com/huggingface/transformers/pull/18479", "diff_url": "https://github.com/huggingface/transformers/pull/18479.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18479.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/18478
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18478/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18478/comments
https://api.github.com/repos/huggingface/transformers/issues/18478/events
https://github.com/huggingface/transformers/issues/18478
1,329,385,185
I_kwDOCUB6oc5PPM7h
18,478
How to do batch inference in GPT-J
{ "login": "ZeyiLiao", "id": 97815464, "node_id": "U_kgDOBdSLqA", "avatar_url": "https://avatars.githubusercontent.com/u/97815464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZeyiLiao", "html_url": "https://github.com/ZeyiLiao", "followers_url": "https://api.github.com/users/ZeyiLiao/followers", "following_url": "https://api.github.com/users/ZeyiLiao/following{/other_user}", "gists_url": "https://api.github.com/users/ZeyiLiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZeyiLiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZeyiLiao/subscriptions", "organizations_url": "https://api.github.com/users/ZeyiLiao/orgs", "repos_url": "https://api.github.com/users/ZeyiLiao/repos", "events_url": "https://api.github.com/users/ZeyiLiao/events{/privacy}", "received_events_url": "https://api.github.com/users/ZeyiLiao/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Maybe @gante can help out regarding generate", "EDIT: removed this comment, a correct example is given below.\r\n", "@gante ,Thanks for your reply! \r\nActually, your reply just made me a bit confused about the padding things. Why do we need to do the left padding? Typically, I think we are doing the RHS padding, right? And I tried that no matter what kind of padding or wherever the padding is inserted, having the corresponding attention mask(i.e. 0 for the position needs to be masked) would be enough.", "Hey @ZeyiLiao 👋 \r\n\r\nYeah, left padding matters! Although tokens with the attention mask set to `0` are numerically masked and the position IDs are correctly identified from the attention mask, models like GPT-2 or GPT-J generate a new token at a time from the previous token. As such, if your last input token is not part of your prompt (e.g. it is padding), your output will be drastically different!\r\n\r\nCheck this colab with examples: https://colab.research.google.com/drive/1i0g18lUNZ2cYRms0E-gE1KCf6N4mZRwy?usp=sharing", "@ZeyiLiao I realized one incorrect detail from the example I gave above (setting the padding token), GPT-J is working for batched generation :)\r\n\r\nHere's the working example:\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-j-6B\", padding_side=\"left\")\r\ntokenizer.add_special_tokens({'pad_token': tokenizer.eos_token})\r\nmodel = AutoModelForCausalLM.from_pretrained(\"EleutherAI/gpt-j-6B\")\r\n\r\nsens = [\r\n \"I am a random example\",\r\n \"This is the\"\r\n]\r\n\r\nprompts = tokenizer(sens, return_tensors='pt', padding=True, truncation=True)\r\nprint(prompts[\"input_ids\"])\r\nprint(prompts[\"attention_mask\"])\r\n\r\nwith torch.no_grad():\r\n gen_tokens = model.generate(\r\n **prompts,\r\n do_sample=False,\r\n max_new_tokens=20,\r\n )\r\ngen_text = tokenizer.batch_decode(gen_tokens, 
skip_special_tokens=True)\r\nprint(gen_text)\r\n```\r\n\r\nThis means that this issue is now sorted -- closing it. Let us know if you have further questions or run into more issues!", "> Hey @ZeyiLiao 👋\r\n> \r\n> Yeah, left padding matters! Although tokens with the attention mask set to `0` are numerically masked and the position IDs are correctly identified from the attention mask, models like GPT-2 or GPT-J generate a new token at a time from the previous token. As such, if your last input token is not part of your prompt (e.g. it is padding), your output will be drastically different!\r\n> \r\n> Check this colab with examples: https://colab.research.google.com/drive/1i0g18lUNZ2cYRms0E-gE1KCf6N4mZRwy?usp=sharing\r\n\r\nThanks! That's a really great example to elaborate it.", "Hi @gante , I did some review on this issue and read the tips from huggingface\r\n<img width=\"1452\" alt=\"image\" src=\"https://user-images.githubusercontent.com/97815464/190067034-79000b94-2e0a-4a44-a181-d2da9a0aff23.png\">\r\nWhy it said that it's recommended to pad on the right sides?\r\n\r\nOr does it means that when do inference instead of generation, we should use right padding? But for generation, we should use left padding? ", "BTW, do you know how to change the loss function of a model like GPT2. Like, I wanna set ```CrossEntropyLoss(reduction = 'sum')```, do I need to change the internal code or there is a way to deal with it.\r\nThanks!!!!!", "@ZeyiLiao \r\n\r\n> Or does it means that when do inference instead of generation, we should use right padding? But for generation, we should use left padding?\r\n\r\nIt depends on whether you pass the [`position_ids`](https://huggingface.co/docs/transformers/main/en/glossary#position-ids) argument to the model or not. At generation time, we hand-craft it according to the attention mask (remember: padded tokens get `0` in the attention mask), at inference time we do not. 
As such, if you run inference with left padding, unless you build `position_ids` correctly and pass it to the model, you will get a slightly different output. Hence the suggestion to not use left padding at inference time :)\r\n\r\n> BTW, do you know how to change the loss function of a model like GPT2\r\n\r\nPT training questions are best left to our [forum](https://discuss.huggingface.co/) :D ", "@gante , Thanks :D,\r\nAnd I checked the position_ids and wonder why the padded parts are **_1_**(left-side)? I think _**1**_ here of the position_ids can not achieve what the padded token(0) at attention mask does(totally ignore them)?\r\n```\r\nattention mask:\r\n[[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]\r\n\r\nposition ids:\r\n[[ 1, 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,14, 15, 16, 17],\r\n[ 1, 1, 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,13, 14, 15, 16],\r\n[ 1, 1, 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,13, 14, 15, 16],\r\n[ 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,17, 18, 19, 20],\r\n[ 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,15, 16, 17, 18],\r\n[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,18, 19, 20, 21],\r\n[ 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,15, 16, 17, 18],\r\n[ 1, 1, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,15, 16, 17, 18]]\r\n```", "The code that creates the position ids at generation time for GPT-2 is defined 
[here](https://github.com/huggingface/transformers/blob/30a28f5227d541ae6b0a287ae345dfae687f21da/src/transformers/models/gpt2/modeling_gpt2.py#L986)", "@gante ,yeah, I know that. I wanna ask why the position ids of padded tokens are 1?\r\n\r\nLike for the attention mask, it would set the padded tokens to 0 to make sure the score (query * key)for the padded one is zero.\r\nSo what's the point of setting position ids of padded tokens to 1?", "There is no point, but it also doesn't make a difference :) Because of the attention mask, the signal at the start of the sentence will be almost inexistent regardless of what's in the input.\r\n\r\nBTW, we reserve this GH issues space for bugs in transformers. These sort of questions are best left to our [forum](https://discuss.huggingface.co/)", "@gante Hi gante, when I checked the course [here](https://huggingface.co/course/chapter7/6?fw=pt), I note that when we do training on gpt2 , we don't have left padding setting:? Don't we need that during training? I think it also would affect it?", "During training, the model doesn't generate text, it only predicts the next token (for each position in the sequence, given all prior tokens). Being left or right padded doesn't make a difference, the text has no discontinuities :) \r\n\r\nThat is opposed to generate, where if left padding is NOT applied there will be a gap between the input text and the start of generation, which causes the problems.", "Hi there!\r\n\r\nIs possible to do batch with different parameters in each prompt?\r\n", "Hi @carlose2108 👋 That is impossible with our `.generate()` function. But it should be possible to build under certain assumptions, if you'd like to build it for your project.", "Hi @gante thanks a lot for your response!" ]
1,659
1,669
1,659
NONE
null
### System Info - `transformers` version: 4.21.1 - Platform: Linux-4.15.0-189-generic-x86_64-with-debian-buster-sid - Python version: 3.7.3 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <False> - Using distributed or parallel set-up in script?: <False> ### Who can help? @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") sens = [ "In a shocking finding, scientists discovered a herd of unicorns living in a remote", "previously unexplored valley, in the Andes Mountains. Even more surprising to the", "researchers was the fact that the unicorns spoke perfect English." ] token_ids = [torch.squeeze(tokenizer(sen,return_tensors='pt',truncation=True)['input_ids'],0) for sen in sens] sens = pad_sequence(token_ids, batch_first=True, padding_value=-1) attention_mask = (sens != -1).long() print(sens) print(attention_mask) gen_tokens = model.generate( sens, attention_mask = attention_mask, do_sample=True, temperature=0.9, max_length=100, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(gen_text) ``` ### Expected behavior It should work well.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18478/timeline
completed
null
null
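The thread above turns on how position IDs are derived from the attention mask during generation: padded positions get a dummy value of 1, while real tokens are numbered from 0 by cumulatively counting unmasked positions. A minimal NumPy sketch of that derivation — an illustration of the pattern shown in the comment's tables, not the transformers implementation itself:

```python
import numpy as np

def position_ids_from_attention_mask(attention_mask):
    """Derive position IDs from a 0/1 attention mask (left or right padding)."""
    # Cumulative count of real tokens, shifted so the first real token is position 0
    position_ids = np.cumsum(attention_mask, axis=-1) - 1
    # Padded positions get a dummy value of 1; they are ignored via the attention mask anyway
    position_ids[attention_mask == 0] = 1
    return position_ids

mask = np.array([[0, 0, 1, 1, 1],   # left-padded example
                 [1, 1, 1, 1, 1]])  # unpadded example
print(position_ids_from_attention_mask(mask))
# left-padded row -> [1 1 0 1 2]; unpadded row -> [0 1 2 3 4]
```

This reproduces the shape of the position-ID table quoted in the discussion: the dummy 1s on padded slots are harmless because the attention mask already zeroes out their contribution.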
https://api.github.com/repos/huggingface/transformers/issues/18477
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18477/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18477/comments
https://api.github.com/repos/huggingface/transformers/issues/18477/events
https://github.com/huggingface/transformers/pull/18477
1,329,326,701
PR_kwDOCUB6oc48r64K
18,477
Spanish translation of summarization.mdx (#15947)
{ "login": "AguilaCudicio", "id": 11493956, "node_id": "MDQ6VXNlcjExNDkzOTU2", "avatar_url": "https://avatars.githubusercontent.com/u/11493956?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AguilaCudicio", "html_url": "https://github.com/AguilaCudicio", "followers_url": "https://api.github.com/users/AguilaCudicio/followers", "following_url": "https://api.github.com/users/AguilaCudicio/following{/other_user}", "gists_url": "https://api.github.com/users/AguilaCudicio/gists{/gist_id}", "starred_url": "https://api.github.com/users/AguilaCudicio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AguilaCudicio/subscriptions", "organizations_url": "https://api.github.com/users/AguilaCudicio/orgs", "repos_url": "https://api.github.com/users/AguilaCudicio/repos", "events_url": "https://api.github.com/users/AguilaCudicio/events{/privacy}", "received_events_url": "https://api.github.com/users/AguilaCudicio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you very much @AguilaCudicio for the translation! I added some comments in the review.", "Thanks @AguilaCudicio for the translation! 🚀\r\n\r\n@sgugger LGTM :)" ]
1,659
1,659
1,659
CONTRIBUTOR
null
Spanish translation of summarization.mdx (#15947) <!-- Remove if not applicable --> Fixes #15947 (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? @omarespejel @osanseviero @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18477/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18477", "html_url": "https://github.com/huggingface/transformers/pull/18477", "diff_url": "https://github.com/huggingface/transformers/pull/18477.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18477.patch", "merged_at": 1659988452000 }
https://api.github.com/repos/huggingface/transformers/issues/18476
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18476/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18476/comments
https://api.github.com/repos/huggingface/transformers/issues/18476/events
https://github.com/huggingface/transformers/issues/18476
1,329,105,139
I_kwDOCUB6oc5POIjz
18,476
Fine tuning TensorFlow DeBERTa fails on TPU
{ "login": "tmoroder", "id": 48967773, "node_id": "MDQ6VXNlcjQ4OTY3Nzcz", "avatar_url": "https://avatars.githubusercontent.com/u/48967773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tmoroder", "html_url": "https://github.com/tmoroder", "followers_url": "https://api.github.com/users/tmoroder/followers", "following_url": "https://api.github.com/users/tmoroder/following{/other_user}", "gists_url": "https://api.github.com/users/tmoroder/gists{/gist_id}", "starred_url": "https://api.github.com/users/tmoroder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tmoroder/subscriptions", "organizations_url": "https://api.github.com/users/tmoroder/orgs", "repos_url": "https://api.github.com/users/tmoroder/repos", "events_url": "https://api.github.com/users/tmoroder/events{/privacy}", "received_events_url": "https://api.github.com/users/tmoroder/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @tmoroder 👋 Thank you for adding all that information to the issue <3 \r\n\r\nIf I got it right, the second notebook replaces the `take_along_axis` function, and the third notebook also replaces the custom dropout. Still, there are XLA exceptions.\r\n\r\nBefore diving into debugging, two questions:\r\n1. Does it return the same error on a GPU?\r\n2. I see that you prepare a dataset with static batch size and that the input is padded. Do you think that there is any additional source of shape variability in the inputs? (I don't think so, but asking doesn't hurt :D )", "Hi @gante.\r\n\r\n> If I got it right, the second notebook replaces the ``take_along_axis function``, and the third notebook also replaces the custom dropout. Still, there are XLA exceptions.\r\n\r\nCorrect. I think the XLA exceptions occur during gradient computation at these dynamic/computed tensor shape sizes. \r\nThe first collection seems to me being triggered within the ``TFDebertaV2DisentangledSelfAttention.disentangled_att_bias`` method, like at [L735](https://github.com/huggingface/transformers/blob/v4.21.0/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L735). I am not about other position ``TFDebertaV2DisentangledSelfAttention.call`` like [L704](https://github.com/huggingface/transformers/blob/v4.21.0/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L704).\r\n\r\n\r\n> 1. Does it return the same error on a GPU?\r\n\r\nIt runs on GPU without errors if I use ``transformers==4.20.1``, see [FineTuning_TF_DeBERTa _GPU](https://colab.research.google.com/drive/1KduvPzwXbDee3sR4DR4-woehg7v-9c8I?usp=sharing). With version ``4.21.0`` I get the same error ValueError.\r\n\r\n> 2. I see that you prepare a dataset with static batch size and that the input is padded. Do you think that there is any additional source of shape variability in the inputs? 
(I don't think so, but asking doesn't hurt :D )\r\n\r\nNo further shape variability as fas as I can judge.\r\n", "Hi @tmoroder, can you try on GPU with `jit_compile=True` in both 4.20 and 4.21? I believe the code had issues with XLA before 4.21, and TPU code is always compiled to XLA.", "Interesting. Since `transformers==4.20.1`, there are only two DeBERTa PRs:\r\n1. https://github.com/huggingface/transformers/pull/17940 (should have no impact at all here)\r\n2. https://github.com/huggingface/transformers/pull/18256 (what should have been a TPU-friendly `take_along_axis`)\r\n\r\nAs @Rocketknight1 said, that data would be interesting. If v4.21 works on GPU but not on TPU, we are up for an interesting challenge :D ", "> Hi @tmoroder, can you try on GPU with jit_compile=True in both 4.20 and 4.21?\r\n\r\nUsing ``jit_compile=True`` while compiling the model gives an error for both 4.20.1 and 4.21, e.g., [FineTuning_TF_DeBERTa _GPU_Tests](https://colab.research.google.com/drive/1VMPD2k5WuiHzT5ESdIXs0PuESvg82Ju3?usp=sharing) for 4.21; with 4.20.1 it crashes in the last command.\r\n \r\n> As @Rocketknight1 said, that data would be interesting. If v4.21 works on GPU but not on TPU, we are up for an interesting challenge :D\r\n\r\nWithout ``jit_compile=True`` it also fails on GPU with 4.21; with 4.20.1 it works.", "That makes sense - we made changes to the model to make it XLA-compatible in 4.21. XLA compatibility is necessary for TPU support, so the 4.20 model would never have run on TPU. However, we seem to have some other TPU-specific issues now - in my testing I was able to get DeBERTa to work with XLA on GPU in 4.21.", "Weird! 
\r\n\r\nDuring my TPU and GPU tests, i was using a custom training loop instead of keras's `.fit()`, which I'm not sure if it actually matters.\r\n\r\nIn my custom training code, I got deberta to train in an electra style training, with XLA enabled with `jit_compile=True` with non of the issues mentioned above.\r\n\r\nI will be sharing my code asap once I finish the pretraining and validate the results. It is based on Nvidia BERT and Electra TF2 training code https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/LanguageModeling/", "@tmoroder I can confirm that I can run your example with `jit_compile=True` (i.e. XLA compilation) on `model.compile()`, using a GPU, if the two changes you made in your [third TPU notebook](https://colab.research.google.com/drive/1L6cCdYCf3R5l90TK-Hs5dv85O6qL5vrR?usp=sharing):\r\n- replace `take_along_axis` by the `tf.einsum` version\r\n- replace dropout by the standard dropout\r\n\r\nIf XLA compilation works, then it should run on TPU. I noticed that in [your notebook](https://colab.research.google.com/drive/1L6cCdYCf3R5l90TK-Hs5dv85O6qL5vrR?usp=sharing) you were using TF 2.6, which may explain the XLA failure. Are you able to bump your TPU TF version (to as high as possible)?\r\n\r\nMeanwhile I'm opening a PR to reflect those two changes :)", "@gante\r\n\r\nThanks a lot for your effort. Maybe I am doing something wrong... but using the code from your pull request it now runs on GPU (with ``jit_compile=True`` as additional argument during model compilation), while it still fails on TPU (without using ``jit_compile=True`` as an argument). I am using TF 2.8.2 in both cases which is the current default in the Colab environment. 
On TPU it seems again to have errors on the [tile operation](https://github.com/huggingface/transformers/blob/v4.21.0/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L733).\r\n\r\n- Working GPU version: [FineTuning_TF_DeBERTa_Propsed_Fix_GPU](https://colab.research.google.com/drive/1kF-I5Mb3eUyydl9681RXi4GfxDL2Ls4o?usp=sharing)\r\n- Failing TPU version: [FineTuning_TF_DeBERTa_Propsed_Fix_TPU](https://colab.research.google.com/drive/1BlYkfl0l5RZVhTXHDuHbLTsD1WlrzomT?usp=sharing)\r\n", "(linking issues -- the Tile issue is also present in the following unsolved issue: https://github.com/huggingface/transformers/issues/14058)", "The cause is trivial (the `multiple` argument of `tf.tile` can't have dynamic shapes), but the fix may be not :D Will look into it ", "@tmoroder the dynamic shape in question is the batch size. I may be able to get an alternative to `tf.tile`, but I highly doubt that it will make a difference -- it will be a dynamic shape no matter how I turn it around, as it is not set.\r\n\r\nAlternatively, could you try setting the `batch_size` argument in the `Input` layers? It will make that shape static, which should be enough to squash the problem you're seeing :)", "@gante \r\n\r\nGreat, setting the ``batch_size`` works 🥳. I only had to make sure that it divides the ``strategy.num_replicas_in_sync``, [FineTuning_TF_DeBERTa_Working_Fix_TPU](https://colab.research.google.com/drive/1wQ_shM9zigRzeATvcncTC4koFb2GkDgY?usp=sharing). Thanks a lot, I will test the procedure now on my real use case at hand.", "Wooo nice! 🎊 \r\n\r\nI'm closing this issue since the problem seems to be solved for now. Feel free to reopen if you run into new related issues. Also, if you have the authorization to, please share TPU-related findings -- I'm sure they will be useful for other users!", "@tmoroder Hey, can i ask about the training throughput/performance you got with the TPUs?", "@WissamAntoun \r\n\r\nHere some output that I get during the ``model.fit`` call. 
The model is very close to the one in the Colab notebooks, but the run is carried out on a Kaggle TPU. \r\n\r\nSome further specification:\r\n- model max length: 512\r\n- batch size: 128\r\n- 12800 training samples (or 100 steps per epoch)\r\n- about 7500 validation samples\r\n- smoothed cross-entropy loss\r\n- accuray and cross-entropy metric\r\n\r\nWhen calling ``model.fit`` the method prints, depending on the base model backbone the following times:\r\n- ``deberta-v3-base``: 540s (632s first epoch)\r\n- ``bert-base-uncased``: 29s (115s first epoch)\r\n\r\nHope it helps!", "Oh great! I mean not great in the sense that the model is super slow on TPUs, but great that `model.fit` and my custom training loop have the same issue. you are getting 512sentences*100batches/540s=~23sents/s, and I'm getting ~sents/s but for an electra style training.\r\n\r\nThank you for providing the numbers they really helped." ]
1,659
1,660
1,660
NONE
null
### System Info Latest version of transformers, Colab TPU, tensorflow 2. - Colab TPU - transformers: 4.21.0 - tensorflow: 2.8.2 / 2.6.2 - Python 3.7 ### Who can help? @LysandreJik, @Rocketknight1, @san ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am facing some issues while trying to fine-tune a TensorFlow DeBERTa model ``microsoft/deberta-v3-base`` on TPU. I have created some Colab notebooks showing the errors. Note, the second and third notebooks already include some measures to circumvent previous errors. - ValueError with partially known TensorShape with latest ``take_along_axis`` change: [FineTuning_TF_DeBERTa_TPU_1](https://colab.research.google.com/drive/1TN4Ro-U6a-7MypDN3AUoHFfEPnFErPBt?usp=sharing) - Output shape mismatch of branches with custom dropout: [FineTuning_TF_DeBERTa_TPU_2](https://colab.research.google.com/drive/1gubIwNKNFwexKcra37w9-CSzFJUDGm07?usp=sharing) - XLA compilation error because of dynamic/computed tensor shapes: [FineTuning_TF_DeBERTa_TPU_3](https://colab.research.google.com/drive/1L6cCdYCf3R5l90TK-Hs5dv85O6qL5vrR?usp=sharing) I have seen similar issues when using ``microsoft/deberta-base``. I believe the following issues are related: - [TF2 DeBERTaV2 runs super slow on TPUs #18239](https://github.com/huggingface/transformers/issues/18239) - [Debertav2 debertav3 TPU : socket closed #18276](https://github.com/huggingface/transformers/issues/18276). From this I used the fix on ``take_along_axis``. Thanks! ### Expected behavior Fine tuning is possible as it happens when using a GPU.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18476/timeline
completed
null
null
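The fix discussed above replaces `take_along_axis` with a `tf.einsum` formulation so the gather compiles under XLA without dynamic-shape ops. A NumPy sketch of the underlying one-hot/einsum gather trick — an assumption about the shape of the TF fix, not a copy of the actual PR code:

```python
import numpy as np

def take_along_axis_einsum(x, indices):
    """Gather x[..., indices] by contracting with one-hot selectors.

    Equivalent to np.take_along_axis(x, indices, axis=-1), but expressed as a
    dense matmul-style contraction, the kind of formulation XLA/TPU handles
    without dynamic-shape gather ops.
    """
    one_hot = np.eye(x.shape[-1])[indices]          # (..., k, d) one-hot selectors
    return np.einsum("...kd,...d->...k", one_hot, x)

x = np.array([[10.0, 20.0, 30.0]])
idx = np.array([[2, 0]])
print(take_along_axis_einsum(x, idx))  # [[30. 20.]]
```

The trade-off is extra FLOPs and memory for the one-hot tensor, which the thread's throughput numbers (DeBERTa at ~540s/epoch vs BERT at ~29s/epoch) suggest is significant on TPU.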
https://api.github.com/repos/huggingface/transformers/issues/18475
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18475/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18475/comments
https://api.github.com/repos/huggingface/transformers/issues/18475/events
https://github.com/huggingface/transformers/pull/18475
1,329,002,189
PR_kwDOCUB6oc48q299
18,475
Add type hints to XLM-Roberta-XL models
{ "login": "asofiaoliveira", "id": 74454835, "node_id": "MDQ6VXNlcjc0NDU0ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asofiaoliveira", "html_url": "https://github.com/asofiaoliveira", "followers_url": "https://api.github.com/users/asofiaoliveira/followers", "following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}", "gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}", "starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions", "organizations_url": "https://api.github.com/users/asofiaoliveira/orgs", "repos_url": "https://api.github.com/users/asofiaoliveira/repos", "events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}", "received_events_url": "https://api.github.com/users/asofiaoliveira/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hello @Rocketknight1 , just reminding you of this ", "Hi @asofiaoliveira, I'm extremely sorry for the delay here! The PR is perfect, and I'm merging now!" ]
1,659
1,662
1,662
CONTRIBUTOR
null
This PR adds type hints for the PyTorch XLM-Roberta-XL as mentioned in #16059 @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18475/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18475", "html_url": "https://github.com/huggingface/transformers/pull/18475", "diff_url": "https://github.com/huggingface/transformers/pull/18475.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18475.patch", "merged_at": 1662381488000 }