Schema of the rows below (one record per huggingface/transformers issue or PR; string columns list min-max lengths, numeric columns list value ranges as shown by the dataset viewer):

| Column | Type | Range / classes |
| --- | --- | --- |
| url | string | lengths 62-66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0-234k |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/22292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22292/comments
https://api.github.com/repos/huggingface/transformers/issues/22292/events
https://github.com/huggingface/transformers/pull/22292
1,634,068,024
PR_kwDOCUB6oc5Mjejj
22,292
fix more doctests
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do?

Add the missing `python` tag in docstrings, then add some more files to `documentation_tests.txt` - following #22268
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22292/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22292", "html_url": "https://github.com/huggingface/transformers/pull/22292", "diff_url": "https://github.com/huggingface/transformers/pull/22292.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22292.patch", "merged_at": 1679411778000 }
https://api.github.com/repos/huggingface/transformers/issues/22291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22291/comments
https://api.github.com/repos/huggingface/transformers/issues/22291/events
https://github.com/huggingface/transformers/pull/22291
1,633,966,979
PR_kwDOCUB6oc5MjIlB
22,291
Time to Say Goodbye, torch 1.7 and 1.8
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you @sgugger . I will try to make a clean breakup", "_The documentation is not available anymore as the PR was closed or merged._", "Hope I don't miss anything ", "ok, the deepspeed CI is running pt-1.8 - how do we solve that then?\r\n\r\nI have passed this change to the Deepspeed team let's see what they say.\r\n\r\nedit: they followed suit https://github.com/microsoft/DeepSpeed/pull/3082" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do?

We have been together for more than 2 years ❤️ (see this [discussion](https://github.com/huggingface/transformers/issues/18817))
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22291", "html_url": "https://github.com/huggingface/transformers/pull/22291", "diff_url": "https://github.com/huggingface/transformers/pull/22291.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22291.patch", "merged_at": 1679422921000 }
https://api.github.com/repos/huggingface/transformers/issues/22290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22290/comments
https://api.github.com/repos/huggingface/transformers/issues/22290/events
https://github.com/huggingface/transformers/issues/22290
1,633,619,328
I_kwDOCUB6oc5hXw2A
22,290
Native support of ChatGLM-6b
{ "login": "xianbaoqian", "id": 38108242, "node_id": "MDQ6VXNlcjM4MTA4MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/38108242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xianbaoqian", "html_url": "https://github.com/xianbaoqian", "followers_url": "https://api.github.com/users/xianbaoqian/followers", "following_url": "https://api.github.com/users/xianbaoqian/following{/other_user}", "gists_url": "https://api.github.com/users/xianbaoqian/gists{/gist_id}", "starred_url": "https://api.github.com/users/xianbaoqian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xianbaoqian/subscriptions", "organizations_url": "https://api.github.com/users/xianbaoqian/orgs", "repos_url": "https://api.github.com/users/xianbaoqian/repos", "events_url": "https://api.github.com/users/xianbaoqian/events{/privacy}", "received_events_url": "https://api.github.com/users/xianbaoqian/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Transformers does have native support for it even if it's not in the lib itself ;-) I see this as a chance to better support models with code on the Hub since that is the way the author chose, and since it will be more and more the norm as we cannot have the library grow exponentially.\r\n\r\nOf course, if the authors prefer to integrate the model in the library directly, we would be happy to look at the PR and help them merge it. We can also revisit if the issue gets a lot of traction and integrate it ourselves directly.", "I echo what Sylvain is saying above.\r\n\r\nAdditionally, for readers, if you would like this model to be integrated within the library nonetheless for it to be constantly tested and up-to-date with our API, please upvote the original post or add a comment mentioning so in this issue as this will help us identify models that should be more actively tested.\r\n\r\nThanks!", "Thanks for all great inputs! Let's see how much demand we gathered for this one. \r\n\r\nJust for your information ChatGLM-6b is the No. 1 model on the trending page now.\r\n\r\n<img width=\"288\" alt=\"image\" src=\"https://user-images.githubusercontent.com/38108242/226628729-b4cf69e8-8fe1-45bc-b03f-ebceb2bfce2c.png\">\r\n", "\r\n> This model performs really well (despite being a small model compared to large ones) and got a LOT of attention recently. It might be the SD moment for LLM IMO as it runs perfectly on consumer GPUs.\r\n\r\nIt does seem quite good but for it to be the true SD moment I think the license would have to allow commercial use, which it doesn't.\r\n" ]
1,679
1,679
null
NONE
null
### Feature request

Support https://huggingface.co/THUDM/chatglm-6b (and its int4 variants) in the Transformers library instead of relying on remote code execution.

### Motivation

This model performs really well (despite being a small model compared to large ones) and got a LOT of attention recently. It might be the SD moment for LLM IMO as it runs perfectly on consumer GPUs. It would be great if Transformers can have native support for this model, instead of relying on remote code execution. A native integration will also make it much easier to use the model on Inference API / Endpoints.

### Your contribution

cc @sgugger @osanseviero
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22290/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22290/timeline
null
null
null
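For readers, a short sketch of the "code on the Hub" route the maintainers describe above: the model repository ships its own modeling code, which transformers executes locally once you opt in. Only enable this for repositories you trust.

```python
from transformers import AutoModel, AutoTokenizer

repo = "THUDM/chatglm-6b"
# trust_remote_code=True downloads and runs the model authors' Python code
# from the Hub instead of using classes built into transformers.
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)
```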
https://api.github.com/repos/huggingface/transformers/issues/22289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22289/comments
https://api.github.com/repos/huggingface/transformers/issues/22289/events
https://github.com/huggingface/transformers/pull/22289
1,633,591,008
PR_kwDOCUB6oc5Mh2Ny
22,289
Add Beit3 model
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22289). All of your documentation changes will be reflected on that endpoint.", "I don't know much about the details of the transformers library but isn't it confusing that it refers to the model as microsoft/beit-base-patch16-224-pt22k etc which is the name for beit v1, not beit v3?", "> \r\n\r\nHi @MetaB0y , All I have done is to pull in different modules needed for beit3 into single file. I will start working on cleaning it up. ", "Hi @raghavanone, just wanted to know updates on this PR. If required, I would like to help.", "> Hi @raghavanone, just wanted to know updates on this PR. If required, I would like to help.\r\n\r\n@atharvakavitkar You can for sure contribute, I am out till mid of april, will pick this up if not done by anyone else by that time . Start a new PR, if you wanted to work on this .", "Hey @raghavanone , I was thinking about working on this this weekend and see that you have a WIP already and are probably back now. Would it help if I contributed on this PR or better to leave it to you?", "> Hey @raghavanone , I was thinking about working on this this weekend and see that you have a WIP already and are probably back now. Would it help if I contributed on this PR or better to leave it to you?\r\n\r\nHey @JonathanRayner , Thanks for asking, I have some changes locally, making changes to MOE layers are bit tricky. We can collaborate in this PR, But we have to find a way to communicate (use HF slack maybe) .", "cc @alaradirik so you can help when needed.", "> > Hey @raghavanone , I was thinking about working on this this weekend and see that you have a WIP already and are probably back now. Would it help if I contributed on this PR or better to leave it to you?\r\n> \r\n> Hey @JonathanRayner , Thanks for asking, I have some changes locally, making changes to MOE layers are bit tricky. We can collaborate in this PR, But we have to find a way to communicate (use HF slack maybe) .\r\n\r\nHi @raghavanone, thanks for working on this! The easiest way to collaborate would be to add @JonathanRayner as a collaborator to your forked transformers repo. I'd also be happy to create a Slack channel and invite you both to make it easier to communicate.\r\n\r\nI took a quick look at the PR and it'd be great if you could follow naming conventions in line with Beit such that the class and folder names are Beit3 and beit3 respectively. Other than this, BEiT-3 is a multi-modal model so you'll need to create `image_processing_beit3.py`, `tokenizer_beit3.py` and `processing_beit3.py` scripts, where the latter wraps the text and image preprocessor classes into a single class. BEiT-3 uses the transformers `XLMRobertaTokenizer` class to preprocess text so you can use that instead of creating `tokenizer_beit3.py`. 
You can refer to [OWL-ViT](https://github.com/huggingface/transformers/tree/v4.28.1/src/transformers/models/owlvit) to see an example of a multi-modal model that uses an existing tokenizer class (`CLIPTokenizer`).\r\n\r\nIn order to check if the PR passes the CI tests, you can run the following commands:\r\n```\r\nmake style\r\nmake quality\r\nmake repo-consistency\r\n\r\npytest tests/models/beit3/test_image_processor_beit3.py\r\npytest tests/models/beit3/test_processor_beit3.py\r\nRUN_SLOW=True pytest tests/models/beit3/test_modeling_beit3.py\r\n```\r\n\r\nHope this helps, I can add invite you to Slack if you send me your email addresses :)", "> > > Hey @raghavanone , I was thinking about working on this this weekend and see that you have a WIP already and are probably back now. Would it help if I contributed on this PR or better to leave it to you?\r\n> > \r\n> > \r\n> > Hey @JonathanRayner , Thanks for asking, I have some changes locally, making changes to MOE layers are bit tricky. We can collaborate in this PR, But we have to find a way to communicate (use HF slack maybe) .\r\n> \r\n> Hi @raghavanone, thanks for working on this! The easiest way to collaborate would be to add @JonathanRayner as a collaborator to your forked transformers repo. I'd also be happy to create a Slack channel and invite you both to make it easier to communicate.\r\n> \r\n> I took a quick look at the PR and it'd be great if you could follow naming conventions in line with Beit such that the class and folder names are Beit3 and beit3 respectively. Other than this, BEiT-3 is a multi-modal model so you'll need to create `image_processing_beit3.py`, `tokenizer_beit3.py` and `processing_beit3.py` scripts, where the latter wraps the text and image preprocessor classes into a single class. BEiT-3 uses the transformers `XLMRobertaTokenizer` class to preprocess text so you can use that instead of creating `tokenizer_beit3.py`. You can refer to [OWL-ViT](https://github.com/huggingface/transformers/tree/v4.28.1/src/transformers/models/owlvit) to see an example of a multi-modal model that uses an existing tokenizer class (`CLIPTokenizer`).\r\n> \r\n> In order to check if the PR passes the CI tests, you can run the following commands:\r\n> \r\n> ```\r\n> make style\r\n> make quality\r\n> make repo-consistency\r\n> \r\n> pytest tests/models/beit3/test_image_processor_beit3.py\r\n> pytest tests/models/beit3/test_processor_beit3.py\r\n> RUN_SLOW=True pytest tests/models/beit3/test_modeling_beit3.py\r\n> ```\r\n> \r\n> Hope this helps, I can add invite you to Slack if you send me your email addresses :)\r\n\r\nThanks @alaradirik , The PR is almost ready. I am already in HF slack under email oneraghavan@gmail.com / username Raghavan . I have some questions, please ping me in slack. ", "@alaradirik I am unable to understand the purpose of a processor, the Beit3 model takes in token ids and image as tensor. Please help me understand this more. \r\n", "> @alaradirik I am unable to understand the purpose of a processor, the Beit3 model takes in token ids and image as tensor. Please help me understand this more.\r\n\r\nHi @raghavanone, the models expect the input images to be preprocessed. Hence, the image_processing_beit3.py script should contain the `Beit3ImageProcessor` class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.). 
\r\n\r\nprocessing_beit3.py script should contain the `Beit3Processor` class that wraps this image processor class and tokenizer class into a single instance such that users can use it preprocess text or images or both. Please take a look at other multi-modal model processors such as OWL-ViT and CLIP to see how that works.", "> > @alaradirik I am unable to understand the purpose of a processor, the Beit3 model takes in token ids and image as tensor. Please help me understand this more.\r\n> \r\n> Hi @raghavanone, the models expect the input images to be preprocessed. Hence, the image_processing_beit3.py script should contain the `Beit3ImageProcessor` class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.).\r\n> \r\n> processing_beit3.py script should contain the `Beit3Processor` class that wraps this image processor class and tokenizer class into a single instance such that users can use it preprocess text or images or both. Please take a look at other multi-modal model processors such as OWL-ViT and CLIP to see how that works.\r\n\r\n@alaradirik Thanks, I have added both the class and added tests for them . Requesting you to review the PR.", "> \r\n\r\n@alaradirik All the PR feedbacks has been resolved. There are few open and I have put my questions in the comment. On the conversion script I have following questions :\r\n\r\n- There are 22 variations of model checkpoints released, should I test out for each of them ?\r\n- How to upload the checkpoints to hf ? ", "@NielsRogge Following are the open questions to be resolved :\r\n\r\nQ1. How should be the config uploaded ? \r\nQ2. How should be the checkpoints uploaded ? \r\nQ3. Comment from Alara : \"Passing a module to the class is not very optimal. I see that you are initializing and passing various modules in Beit3EncoderLayer and MultiheadAttention to MultiwayNetwork and creating deep copies.\r\n\r\nI think it'd make more sense to create separate classes (e.g. Beit3Dense, Beit3FeedForwardNetwork) as variable names such as first and second are confusing and make the code more difficult to be adapted for works that build upon this.\r\n\r\nI'm cc'ing @sgugger for his opinion.\"\r\n\r\nFor reference look at here https://github.com/huggingface/transformers/blob/1cda50bd12d7d454f56fbdd9f8fe32aee1eae5b3/src/transformers/models/beit3/modeling_beit3.py#L382", "cc @amyeroberts ", "Hi @raghavanone, \r\n\r\nI'll try to answer your questions as best as possible. For future Q's it's best to ping me rather than @NielsRogge or @sgugger.\r\n\r\n1. When you say 'uploaded' - are you referring to uploading onto the hub e.g. [like this for bert](https://huggingface.co/bert-base-uncased/blob/main/config.json)? If so, this should be uploaded alongside the model weights. When calling `model.push_to_hub(repo_path)`, both the model's checkpoint and configuration will be uploaded. You can look at some of the conversion scripts to see the weight loading / converting / uploading logic e.g. [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/convert_swin_simmim_to_pytorch.py). Whilst the PR is still underdevelopment, I suggest having the models under a personal repo. Then, once ready to merge, we can transfer the weights and configs to under the official orgs'.\r\n2. See above.\r\n3. What's the question? Are you asking what @alaradirik's comment means, or asking whether this is something that should be done? 
\r\n\r\n", "> Hi @raghavanone,\r\n> \r\n> I'll try to answer your questions as best as possible. For future Q's it's best to ping me rather than @NielsRogge or @sgugger.\r\n> \r\n> 1. When you say 'uploaded' - are you referring to uploading onto the hub e.g. [like this for bert](https://huggingface.co/bert-base-uncased/blob/main/config.json)? If so, this should be uploaded alongside the model weights. When calling `model.push_to_hub(repo_path)`, both the model's checkpoint and configuration will be uploaded. You can look at some of the conversion scripts to see the weight loading / converting / uploading logic e.g. [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/swin/convert_swin_simmim_to_pytorch.py). Whilst the PR is still underdevelopment, I suggest having the models under a personal repo. Then, once ready to merge, we can transfer the weights and configs to under the official orgs'.\r\n> 2. See above.\r\n> 3. What's the question? Are you asking what @alaradirik's comment means, or asking whether this is something that should be done?\r\n\r\nThanks for the answers, for Q3 , I am not sure how to incorporate the feedback, I need some support on what needs to be done .", "@raghavanone For 3. what I believe Alara was saying (and I agree with) is that the layer `Beit3MultiwayNetwork` is trying to do too much, resulting in patterns which don't match with the rest of the library and is less interpretable. We should instead implement individual blocks which avoids hacky tricks like copying then reseting layer parameters. \r\n\r\nTo be explicit, an example would be for `self.self_attn_layer_norm = Beit3MultiwayNetwork(LayerNorm(self.embed_dim, eps=config.layernorm_eps))` on L491. Instead of using `Beit3MultiwayNetwork` we could instead define a model specific layernorm layer:\r\n\r\n```python\r\nclass Beit3LayerNorm(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n self.layernorm_1 = nn.LayerNorm(config.embed_dim, eps=self.config.layernorm_eps)\r\n self.layernorm_2 = nn.LayerNorm(config.embed_dim, eps=self.config.layernorm_eps)\r\n\r\n def forward(self, hidden_states, split_position=-1):\r\n if split_position == -1:\r\n return self.layernorm_1(hidden_states)\r\n \r\n if split_position == 0:\r\n return self.layernorm_2(hidden_states)\r\n \r\n text_hidden, image_hidden = torch.split(\r\n hidden_states, [split_position, hidden_states.size(1) - split_position], dim=1,\r\n )\r\n text_hidden = self.layernorm_1(text_hidden)\r\n image_hidden = self.layernorm_2(image_hidden)\r\n hidden_states = torch.cat([text_hidden, image_hidden], dim=1)\r\n return hidden_states\r\n```\r\n\r\nAnd then L491 would become:\r\n\r\n```python\r\nself.self_attn_layer_norm = Beit3LayerNorm(config)\r\n```\r\n\r\nIt's OK for us to have some of the `if split_position` logic repeated if it means a clearer architecture and having layers take the config to instantiate themselves. \r\n\r\nA note regarding the general design of the layer above: \r\n* The `set_split_position` is very hacky and requires iterating over all of the layer of the model each time we do a forward pass with `multiway_split_position` set. Instead, lets pass this to the layers in the forward pass\r\n* The layers shouldn't accept arbitary *args or **kwargs in their methods\r\n* AFAICT `dim` was never changed or set, so we can remove this attribute. ", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@amyeroberts All the feedbacks have been incorporated.", "@raghavanone I'll be off for a few weeks from next week. If you need another review in that time please ask @rafaelpadilla. Once he's approved and all tests passing, then we can ask for a core maintainer review. ", "@rafaelpadilla All the PR feedbacks has been taken in. Request you to do a review .", "@rafaelpadilla All the PR feedbacks has been resolved.", "@ArthurZucker I have questions for some of the comments, request for more clarification .", "@NielsRogge @ArthurZucker All the feedbacks has been incorporated. ", "Thanks, I'm currently checking out your branch, will open a PR on your fork of things I'd like to see updated", "Hi @raghavanone I went over your PR, looks great already, however there are still various things which need to be addressed, for which I opened a PR here: https://github.com/raghavanone/transformers/pull/1.", "@amyeroberts All the comments have been addressed, The failing test are unrelated to this PR. Let me know if I need to anything to fix them. ", "Gently pinging @amyeroberts for approving this PR" ]
1,679
1,708
null
CONTRIBUTOR
null
Fixes #22178
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22289/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 5, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22289/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22289", "html_url": "https://github.com/huggingface/transformers/pull/22289", "diff_url": "https://github.com/huggingface/transformers/pull/22289.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22289.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22288/comments
https://api.github.com/repos/huggingface/transformers/issues/22288/events
https://github.com/huggingface/transformers/pull/22288
1,633,405,451
PR_kwDOCUB6oc5MhONF
22,288
add low_cpu_mem_usage option in run_clm.py example which will benefit…
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger please help review", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
… LLM loading

Add the `low_cpu_mem_usage` option in the run_clm example; setting it to True helps reduce peak memory and loading time when fine-tuning and running inference with LLMs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22288/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22288", "html_url": "https://github.com/huggingface/transformers/pull/22288", "diff_url": "https://github.com/huggingface/transformers/pull/22288.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22288.patch", "merged_at": 1679481760000 }
https://api.github.com/repos/huggingface/transformers/issues/22287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22287/comments
https://api.github.com/repos/huggingface/transformers/issues/22287/events
https://github.com/huggingface/transformers/issues/22287
1,633,314,014
I_kwDOCUB6oc5hWmTe
22,287
Line 32 in convert_llama_weights_to_hf is LlamaTokenizer, not LlamaForTokenizer
{ "login": "zhl5842", "id": 3888752, "node_id": "MDQ6VXNlcjM4ODg3NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3888752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhl5842", "html_url": "https://github.com/zhl5842", "followers_url": "https://api.github.com/users/zhl5842/followers", "following_url": "https://api.github.com/users/zhl5842/following{/other_user}", "gists_url": "https://api.github.com/users/zhl5842/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhl5842/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhl5842/subscriptions", "organizations_url": "https://api.github.com/users/zhl5842/orgs", "repos_url": "https://api.github.com/users/zhl5842/repos", "events_url": "https://api.github.com/users/zhl5842/events{/privacy}", "received_events_url": "https://api.github.com/users/zhl5842/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @zhl5842, thanks for raising this! Would you like to open a PR to fix this doc example? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
https://github.com/huggingface/transformers/blob/c07a02a4b7892edfee22cbe57d3cdd9e10ae7a4d/src/transformers/models/llama/convert_llama_weights_to_hf.py#L37

Change `from transformers import LlamaForCausalLM, LlamaForTokenizer` to `from transformers import LlamaForCausalLM, LlamaTokenizer`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22287/timeline
completed
null
null
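For reference, the corrected import the issue proposes (`LlamaForTokenizer` does not exist in transformers):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer
```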
https://api.github.com/repos/huggingface/transformers/issues/22286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22286/comments
https://api.github.com/repos/huggingface/transformers/issues/22286/events
https://github.com/huggingface/transformers/issues/22286
1,633,313,517
I_kwDOCUB6oc5hWmLt
22,286
Tokenizer class LLaMATokenizer does not exist or is not currently imported.
{ "login": "ans92", "id": 51845963, "node_id": "MDQ6VXNlcjUxODQ1OTYz", "avatar_url": "https://avatars.githubusercontent.com/u/51845963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ans92", "html_url": "https://github.com/ans92", "followers_url": "https://api.github.com/users/ans92/followers", "following_url": "https://api.github.com/users/ans92/following{/other_user}", "gists_url": "https://api.github.com/users/ans92/gists{/gist_id}", "starred_url": "https://api.github.com/users/ans92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ans92/subscriptions", "organizations_url": "https://api.github.com/users/ans92/orgs", "repos_url": "https://api.github.com/users/ans92/repos", "events_url": "https://api.github.com/users/ans92/events{/privacy}", "received_events_url": "https://api.github.com/users/ans92/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have found the following answer that solved my issue:\r\nhttps://github.com/huggingface/transformers/issues/22222#issuecomment-1477171703" ]
1,679
1,679
1,679
NONE
null
Hi, I want to run the LLaMA model, but I am facing issues with AutoTokenizer. I am running the following command:

```
tokenizer = AutoTokenizer.from_pretrained(Path(f"/root/text-generation-webui/models/{shared.model_name}/"))
```

But it is giving me the following error:

```
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.
```

My transformers version is 4.28.0.dev0 and my Python version is 3.10.9. Could you please help me in this regard?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22286/timeline
completed
null
null
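The linked workaround is not quoted in this dump. A commonly cited fix for this error, stated here as an assumption to verify against the linked comment, is that older conversion scripts wrote `LLaMATokenizer` into `tokenizer_config.json`, while transformers registers the class as `LlamaTokenizer`; patching the casing lets `AutoTokenizer` resolve it.

```python
# Hedged sketch; the path is hypothetical and the fix should be checked
# against the comment linked above.
import json
from pathlib import Path

cfg_path = Path("/root/text-generation-webui/models/llama-7b/tokenizer_config.json")
cfg = json.loads(cfg_path.read_text())
cfg["tokenizer_class"] = "LlamaTokenizer"  # was "LLaMATokenizer"
cfg_path.write_text(json.dumps(cfg, indent=2))
```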
https://api.github.com/repos/huggingface/transformers/issues/22285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22285/comments
https://api.github.com/repos/huggingface/transformers/issues/22285/events
https://github.com/huggingface/transformers/pull/22285
1,633,270,723
PR_kwDOCUB6oc5MgxRZ
22,285
Guard imports of PreTrainedTokenizerFast on is_tokenizers_available
{ "login": "hvaara", "id": 1535968, "node_id": "MDQ6VXNlcjE1MzU5Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hvaara", "html_url": "https://github.com/hvaara", "followers_url": "https://api.github.com/users/hvaara/followers", "following_url": "https://api.github.com/users/hvaara/following{/other_user}", "gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}", "starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hvaara/subscriptions", "organizations_url": "https://api.github.com/users/hvaara/orgs", "repos_url": "https://api.github.com/users/hvaara/repos", "events_url": "https://api.github.com/users/hvaara/events{/privacy}", "received_events_url": "https://api.github.com/users/hvaara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @hvaara, thanks for creating this PR!\r\n\r\nAm I correct in saying this PR is to address an error when running: `from transformers import AutoTokenizer` if `tokenizers` isn't installed? \r\n\r\nI can see there's a similar import of `PreTrainedTokenizerFast` in `pipelines/__init__.py`. Could you a safe import there as well? It seems it's just for types so can probably be placed in the `if TYPE_CHECKING` logic. ", "Hi @amyeroberts!\r\n\r\n> Am I correct in saying this PR is to address an error when running: from transformers import AutoTokenizer if tokenizers isn't installed?\r\n\r\nYes, it will attempt to import `tokenizers` when it is not installed and raise an error.\r\n\r\n> I can see there's a similar import of PreTrainedTokenizerFast in `pipelines/__init__.py`. Could you a safe import there as well?\r\n\r\nSG! I created this PR in WIP mode mainly to see if the tests would pass. Handling this in `pipelines/__init__.py` was on my TODO list assuming this PR passed tests.\r\n\r\n> It seems it's just for types so can probably be placed in the if TYPE_CHECKING logic.\r\n\r\nGood feedback. I'll look into updating the logic to reflect this and also handle the case in `pipelines/__init__.py`.\r\n\r\nPerhaps I should also create an additional (set of) test(s) to verify the test suite can be run when certain dependencies (like `tokenizers`) does not exist? Are there similar tests like this already? If so, can you please point me to them, and if not, how do you propose I create a test like that?", "@hvaara Great! Looking forward to having this added and safe imports.\r\n\r\nFor the test questions, I'll hand this over to our expert @ydshieh who will know :) ", "Hi @hvaara \r\n\r\nThank you for the PR 🚀 \r\n\r\n> Perhaps I should also create an additional (set of) test(s) to verify the test suite can be run when certain dependencies (like tokenizers) does not exist? Are there similar tests like this already? If so, can you please point me to them, and if not, how do you propose I create a test like that?\r\n\r\nRegarding the testing, did you see the test suite failed to collect or run some tests when `tokenizers` is not installed? (rather than being skipped).\r\n", "Hi @hvaara Regarding \r\n\r\n> Perhaps I should also create an additional (set of) test(s) to verify the test suite can be run when certain dependencies (like tokenizers) does not exist? Are there similar tests like this already? If so, can you please point me to them, and if not, how do you propose I create a test like that?\r\n\r\nI think we can keep the PR simple as it is.\r\n\r\nOn our CI, we make sure the required dependencies are installed. And if we see some errors caused by this issue, we will fix by installing the dependencies.\r\n\r\nFor community contributors, it's still much better for them to install the dependencies if they want/need to run the tests locally. Otherwise, the test results may be all green (i.e. pass), but (a lot) of the test methods are actually skipped. This may lead to a gap in the communication.\r\n\r\nThank you for the PR again!.", "> > Perhaps I should also create an additional (set of) test(s) to verify the test suite can be run when certain dependencies (like tokenizers) does not exist? Are there similar tests like this already? 
If so, can you please point me to them, and if not, how do you propose I create a test like that?\r\n> \r\n> I think we can keep the PR simple as it is.\r\n\r\nSGTM.\r\n\r\nI'll update the commit one more time, but that should be the last one assuming the tests pass and it LGTY. Sorry for the spam.", "Thanks a lot for the help, and for merging the PR!" ]
1,679
1,680
1,680
CONTRIBUTOR
null
# What does this PR do?

This PR guards the imports of `PreTrainedTokenizerFast`, which depends on [huggingface/tokenizers](https://github.com/huggingface/tokenizers). This class could be imported when the `tokenizers` library isn't installed, in which case an exception is raised.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22285/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22285", "html_url": "https://github.com/huggingface/transformers/pull/22285", "diff_url": "https://github.com/huggingface/transformers/pull/22285.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22285.patch", "merged_at": 1680182164000 }
https://api.github.com/repos/huggingface/transformers/issues/22284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22284/comments
https://api.github.com/repos/huggingface/transformers/issues/22284/events
https://github.com/huggingface/transformers/issues/22284
1,633,184,799
I_kwDOCUB6oc5hWGwf
22,284
RuntimeError: "topk_cpu" not implemented for 'Half'
{ "login": "MarvinLong", "id": 15308801, "node_id": "MDQ6VXNlcjE1MzA4ODAx", "avatar_url": "https://avatars.githubusercontent.com/u/15308801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarvinLong", "html_url": "https://github.com/MarvinLong", "followers_url": "https://api.github.com/users/MarvinLong/followers", "following_url": "https://api.github.com/users/MarvinLong/following{/other_user}", "gists_url": "https://api.github.com/users/MarvinLong/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarvinLong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarvinLong/subscriptions", "organizations_url": "https://api.github.com/users/MarvinLong/orgs", "repos_url": "https://api.github.com/users/MarvinLong/repos", "events_url": "https://api.github.com/users/MarvinLong/events{/privacy}", "received_events_url": "https://api.github.com/users/MarvinLong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hello @MarvinLong \r\nThe reason behind that issue is that you forgot to pass `input_ids` on the same device as the model (here GPU)\r\nThe script:\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_path, device_map = \"auto\", load_in_8bit=True)\r\ntokenizer = AutoTokenizer.from_pretrained(model_path)\r\ninput_ids = tokenizer(inputs, return_tensors=\"pt\").input_ids\r\noutputs = model.generate(input_ids.to(0), max_new_tokens=300, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2)\r\n```\r\nShould work\r\nAlso, a warning has been trigged to warn you about this !: \r\n```\r\n/home/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py:1374: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.\r\n```\r\nThanks! ", "For more context (in case you are interested):\r\nThis is because `device_map=\"auto\"` will load the model using `accelerate`. Loading a model with `accelerate` will place several \"forward hooks\" to it. That will apply some post-processing to the input such as placing the output of the model on the same device as the input.\r\nHere on the snippet you have shared, the input is placed on CPU, and you are loading the model in 8bit that will produce half-precision logits under the hood. In addition to that you are calling a sampling strategy that will involve calling some functions from pytorch such as [topk](https://pytorch.org/docs/stable/generated/torch.topk.html) on these logits that are not supported on CPU in half-precision, hence the error.", "@younesbelkada \r\nThanks a lot, this works" ]
1,679
1,679
1,679
NONE
null
### System Info

transformers 4.27.1
WSL2 with Ubuntu 20.04
GPU: 4090
CUDA VERSION: 11.8

### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction

**When I infer the model with 8bit for bloom 7b1**

```
model = AutoModelForCausalLM.from_pretrained(model_path, device_map = "auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)
input_ids = tokenizer(inputs, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=300, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2)
```

### Expected behavior

```
/home/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py:1374: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
  warnings.warn(
Traceback (most recent call last):
  File "/mnt/d/project/aigc/belle/test_infer_int8.py", line 13, in <module>
    outputs = model.generate(input_ids, max_new_tokens=300, do_sample = True, top_k = 30, top_p = 0.85, temperature = 0.35, repetition_penalty=1.2)
  File "/home/miniconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py", line 1452, in generate
    return self.sample(
  File "/home/miniconda3/lib/python3.10/site-packages/transformers/generation/utils.py", line 2482, in sample
    next_token_scores = logits_warper(input_ids, next_token_scores)
  File "/home/miniconda3/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 92, in __call__
    scores = processor(input_ids, scores)
  File "/home/miniconda3/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 302, in __call__
    indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None]
RuntimeError: "topk_cpu" not implemented for 'Half'
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22284/timeline
completed
null
null
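Restating the resolution above as a runnable sketch (requires a CUDA GPU and the `bitsandbytes` package for 8-bit loading): tokenizer outputs start on CPU, so move them to the model's device before calling `generate()`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-7b1"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Moving the whole BatchEncoding to the model's device avoids the
# half-precision top-k being dispatched on CPU.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_k=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```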
https://api.github.com/repos/huggingface/transformers/issues/22283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22283/comments
https://api.github.com/repos/huggingface/transformers/issues/22283/events
https://github.com/huggingface/transformers/issues/22283
1,633,175,922
I_kwDOCUB6oc5hWEly
22,283
Is biogpt's tokenizer bugged?
{ "login": "fedshyvana", "id": 39780468, "node_id": "MDQ6VXNlcjM5NzgwNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/39780468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fedshyvana", "html_url": "https://github.com/fedshyvana", "followers_url": "https://api.github.com/users/fedshyvana/followers", "following_url": "https://api.github.com/users/fedshyvana/following{/other_user}", "gists_url": "https://api.github.com/users/fedshyvana/gists{/gist_id}", "starred_url": "https://api.github.com/users/fedshyvana/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fedshyvana/subscriptions", "organizations_url": "https://api.github.com/users/fedshyvana/orgs", "repos_url": "https://api.github.com/users/fedshyvana/repos", "events_url": "https://api.github.com/users/fedshyvana/events{/privacy}", "received_events_url": "https://api.github.com/users/fedshyvana/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@fedshyvana I believe this is how biogpt is trained on fairseq . For more information , you check into official repo of BioGpt.", "@upjabir thanks for pointing it out! I am looking at https://github.com/microsoft/BioGPT/blob/main/src/language_model_prompt_dataset.py which I believe is the code you're referring to. If I understand correctly, they use:\r\n[EOS] token_1, ..., token_n as input\r\nand \r\ntoken_1, ..., token_n [EOS] as target\r\n\r\ni.e. it seems like they just don't use a separate BOS token at all. \r\nBut in the HF BioGPT model config it says: \r\n \"bos_token_id\": 0\r\n \"eos_token_id\": 2\r\n \r\n Should we change it to:\r\n \"bos_token_id\": 2\r\n \"eos_token_id\": 2\r\n \r\n Or would it not make any difference at all? Thank you!", "@fedshyvana bos_token_id , eos_token_id is added to vocabulary as we always do for every tokenizer.But during building inputs with special tokens, we are only considering eos_token_id . Although we are not using bos_token during handling special token , i believe it will helpful in some rare case " ]
1,679
1,679
1,679
NONE
null
### System Info

- `transformers` version: 4.27.1
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)

### Who can help?

@ArthurZucker and @younesbelkada could you please confirm this behavior is intended? Sorry if I mistagged. Thanks!

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
from transformers import AutoTokenizer

tokenizer_name = "microsoft/BioGPT-Large"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
print('bos token: ', tokenizer.bos_token, 'id: ', tokenizer.bos_token_id)
print('eos token: ', tokenizer.eos_token, 'id: ', tokenizer.eos_token_id)
print('token ids: ', tokenizer("this is a test")['input_ids'])
print('tokens: ', tokenizer.decode(tokenizer("this is a test")['input_ids']))
```

Output:

```
bos token: <s> id: 0
eos token: </s> id: 2
token ids: [2, 54, 34, 21, 229]
tokens: </s>this is a test
```

### Expected behavior

I would expect the tokenizer to prepend the BOS token (i.e. 0) and append the EOS token (i.e. 2), while currently the tokenizer prepends the EOS token and does not add a special token to the end of the sequence of tokens.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22283/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22282/comments
https://api.github.com/repos/huggingface/transformers/issues/22282/events
https://github.com/huggingface/transformers/issues/22282
1,632,944,331
I_kwDOCUB6oc5hVMDL
22,282
Getting exception to trace t5 model in torchScript
{ "login": "dhrubo-os", "id": 109556906, "node_id": "U_kgDOBoe0qg", "avatar_url": "https://avatars.githubusercontent.com/u/109556906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhrubo-os", "html_url": "https://github.com/dhrubo-os", "followers_url": "https://api.github.com/users/dhrubo-os/followers", "following_url": "https://api.github.com/users/dhrubo-os/following{/other_user}", "gists_url": "https://api.github.com/users/dhrubo-os/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhrubo-os/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhrubo-os/subscriptions", "organizations_url": "https://api.github.com/users/dhrubo-os/orgs", "repos_url": "https://api.github.com/users/dhrubo-os/repos", "events_url": "https://api.github.com/users/dhrubo-os/events{/privacy}", "received_events_url": "https://api.github.com/users/dhrubo-os/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! As the error mentions, you have to provided `decoder_input_ids` . \r\nThe following works:\r\n```python \r\ninput_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'], torch.Tensor([[2]]).long())\r\n\r\ntraced_model = torch.jit.trace(model, input_tuple)\r\ntorch.jit.save(traced_model, \"flan-t5-small.pt\")\r\n```", "Hi @ArthurZucker ,\r\n\r\nThanks for your reply. I'm able to trace the model now. But how can I load the model back and get the prediction from the traced model?\r\n\r\nI tried to use this:\r\n\r\n```\r\nimport torch\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\n\r\nmodel = torch.jit.load(\"flan-t5-large.pt\")\r\n\r\nmodel.eval()\r\n\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"google/flan-t5-large\")\r\nt_input = \"translate English to French: The universe is a dark forest.\"\r\ntoken = tokenizer(t_input, return_tensors=\"pt\")\r\n\r\ntokens = model.generate(\r\n input_ids=token[\"input_ids\"],\r\n attention_mask=token[\"attention_mask\"],\r\n decoder_input_ids=token[\"input_ids\"],\r\n)\r\n\r\nprint(tokens)\r\n\r\noutput = tokenizer.decode(tokens[0].squeeze(), skip_special_tokens=True)\r\nprint(output)\r\n```\r\n\r\nThen I'm seeing error like:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Volumes/workplace/opensearch-py-ml/src/opensearch-py-ml/test1.py\", line 13, in <module>\r\n tokens = model.generate(\r\n File \"/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/torch/jit/_script.py\", line 785, in __getattr__\r\n return super(RecursiveScriptModule, self).__getattr__(attr)\r\n File \"/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/torch/jit/_script.py\", line 502, in __getattr__\r\n return super(ScriptModule, self).__getattr__(attr)\r\n File \"/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py\", line 1269, in __getattr__\r\n raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\nAttributeError: 'RecursiveScriptModule' object has no attribute 'generate'\r\n```\r\n\r\nI tried with the `forward` method too:\r\n\r\n```\r\nimport torch\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\n\r\nmodel = torch.jit.load(\"flan-t5-large.pt\")\r\n\r\nmodel.eval()\r\n\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"google/flan-t5-large\")\r\nt_input = \"translate English to French: The universe is a dark forest.\"\r\ntoken = tokenizer(t_input, return_tensors=\"pt\")\r\n\r\ntokens = model.forward(\r\n input_ids=token[\"input_ids\"],\r\n attention_mask=token[\"attention_mask\"],\r\n decoder_input_ids=token[\"input_ids\"],\r\n)\r\n\r\nprint(tokens)\r\n\r\noutput = tokenizer.decode(tokens[0].squeeze(), skip_special_tokens=True)\r\nprint(output)\r\n```\r\n\r\nThen I'm facing error like:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Volumes/workplace/opensearch-py-ml/src/opensearch-py-ml/test1.py\", line 21, in <module>\r\n output = tokenizer.decode(tokens[0].squeeze(), skip_special_tokens=True)\r\n File \"/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/transformers/tokenization_utils_base.py\", line 3471, in decode\r\n return self._decode(\r\n File \"/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/transformers/tokenization_utils.py\", line 931, in _decode\r\n filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)\r\n File \"/Users/dhrubo/Library/Python/3.9/lib/python/site-packages/transformers/tokenization_utils.py\", line 906, in convert_ids_to_tokens\r\n index 
= int(index)\r\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'list'\r\n```\r\n\r\n", "1. Not entirely sure that the way to script the generate function is correct as the error mentions. I suspect that only the forward path is supported. I am guessing that you should try something like `torch.jit.trace(model.generate,...)` but pinging @gante here as I am not very familiar with our current jit support\r\n2. You are not decoding correctly. The output of the model are not individual tokens but `logits` which is a distribution of probability over what the next token is. This means that you first have to extract the argmax, and then decode the index.", "Hey @dhrubo-os 👋 \r\n\r\n`model.generate` is not fully exportable with `torch.jit`, but the model forward pass is. We have just added it to our examples, the workaround is to create an ad hoc model class with the jitted model + the generate function -- see [here](https://github.com/huggingface/transformers/blob/5fd4e3c87c685fba2dd9615be62131748a8b5ee3/examples/pytorch/text-generation/run_generation.py#L384)\r\n\r\nI hope this helps 🤗 ", "@ArthurZucker Could you kindly give an example of how to correctly decode?\r\n\r\n> You are not decoding correctly. The output of the model are not individual tokens but logits which is a distribution of probability over what the next token is. This means that you first have to extract the argmax, and then decode the index.", "@gante Hi, I tried out the ad-hoc model class with the jitted model you mentioned [here](https://github.com/huggingface/transformers/blob/5fd4e3c87c685fba2dd9615be62131748a8b5ee3/examples/pytorch/text-generation/run_generation.py#L384) on a `AutoModelForSeq2SeqLM` for FLAN-T5. But I am getting an error. Is there any adjustment needed to the script you linked to be compatible with T5 models?\r\n\r\nEnv:\r\n```\r\nOptimum - 1.8.8\r\nTransformers - 4.29.2\r\nTorch - 1.13.1\r\nonnxruntime-gpu - 1.15.1\r\nonnx - 1.14.0\r\nLinux\r\nPython 3.8.16\r\n```\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/dir/ai/models/py/summarization/modeling/flan/torchscript.py\", line 79, in <module>\r\n outputs = fallback_model.generate(**tokenized_dict)\r\n File \"/home/.virtualenvs/ai/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/.virtualenvs/ai/lib/python3.8/site-packages/transformers/generation/utils.py\", line 1515, in generate\r\n return self.greedy_search(\r\n File \"/home/.virtualenvs/ai/lib/python3.8/site-packages/transformers/generation/utils.py\", line 2332, in greedy_search\r\n outputs = self(\r\n File \"/home/dir/ai/models/py/summarization/modeling/flan/torchscript.py\", line 26, in __call__\r\n outputs = self._optimized(*trace_graph_inputs)\r\n File \"/home/.virtualenvs/ai/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nRuntimeError: forward() expected at most 4 argument(s) but received 5 argument(s). 
Declaration: forward(__torch__.transformers.models.t5.modeling_t5.___torch_mangle_2224.T5ForConditionalGeneration self, Tensor input_ids, Tensor attention_mask, Tensor decoder_input_ids) -> ((Tensor, ((Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor), (Tensor, Tensor, Tensor, Tensor)), Tensor))\r\n```", "Hi @shannonphu 👋 \r\n\r\nOur support for torchscript is mainly hands-off, I'm afraid I don't have the bandwidth to dive deeper on bugs :) Looking at the trace, it seems like there is something wrong with the input preprocessing.", "@shannonphu I'm running into something similar, what did you do to address the mismatch in number of arguments?", "[Update]\r\nJust making sure that I was clear, the class that is defined in the notebook ([readable format](https://github.com/pytorch/text/blob/bd3481896b7ac7cfcbeba43336158f841734aa70/notebooks/torchscriptable_t5_with_torchtext.ipynb)) allows to save and load a T5 model directly with `torch.jit.save` so no need for `torch.jit.trace`:\r\n```\r\nfrom torchtext.models import T5_LARGE_GENERATION\r\n\r\nt5_large = get_jit_from_bundle(T5_LARGE_GENERATION)\r\nmodel_filename = 'flan_t5_large_generation.pt'\r\ntorch.jit.save(t5_large, model_filename)\r\n```\r\n\r\n[Previous comment]\r\nI have created a [notebook](https://github.com/rbahumi/text/blob/torchscriptable_t5/notebooks/torchscriptable_t5_with_torchtext.ipynb) that defines a Torchscriptable T5 class which works with torch.jit.script() directly instead of trace. \r\n\r\nI have a pending pull request here:\r\nhttps://github.com/pytorch/text/pull/2122/commits/bd3481896b7ac7cfcbeba43336158f841734aa70\r\n\r\nHope that helps\r\n@dhrubo-os @wumadeline @gante @ArthurZucker @shannonphu \r\n" ]
1,679
1,704
1,679
NONE
null
### System Info ``` import torch from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large", torchscript=True) model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True) tokenized_dict = tokenizer( ["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",], return_tensors="pt" ) input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask']) traced_model = torch.jit.trace(model, input_tuple) torch.jit.save(traced_model, "flan-t5-large.pt") ``` I was trying to trace `google/flan-t5-large` model in torchScript. But I'm facing following exception: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [29], in <cell line: 13>() 7 tokenized_dict = tokenizer( 8 ["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",], 9 return_tensors="pt" 10 ) 11 input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask']) ---> 13 traced_model = torch.jit.trace(model, input_tuple) 14 torch.jit.save(traced_model, "flan-t5-large.pt") File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:759, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 756 return func 758 if isinstance(func, torch.nn.Module): --> 759 return trace_module( 760 func, 761 {"forward": example_inputs}, 762 None, 763 check_trace, 764 wrap_check_inputs(check_inputs), 765 check_tolerance, 766 strict, 767 _force_outplace, 768 _module_class, 769 ) 771 if ( 772 hasattr(func, "__self__") 773 and isinstance(func.__self__, torch.nn.Module) 774 and func.__name__ == "forward" 775 ): 776 return trace_module( 777 func.__self__, 778 {"forward": example_inputs}, (...) 785 _module_class, 786 ) File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:976, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 972 argument_names = get_callable_argument_names(func) 974 example_inputs = make_tuple(example_inputs) --> 976 module._c._create_method_from_trace( 977 method_name, 978 func, 979 example_inputs, 980 var_lookup_fn, 981 strict, 982 _force_outplace, 983 argument_names, 984 ) 985 check_trace_method = module._c._get_method(method_name) 987 # Check the trace against new traces created from user-specified inputs File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs) 1180 recording_scopes = False 1181 try: -> 1182 result = self.forward(*input, **kwargs) 1183 finally: 1184 if recording_scopes: File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:1660, in T5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1657 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device) 1659 # Decode -> 1660 decoder_outputs = self.decoder( 1661 input_ids=decoder_input_ids, 1662 attention_mask=decoder_attention_mask, 1663 inputs_embeds=decoder_inputs_embeds, 1664 past_key_values=past_key_values, 1665 encoder_hidden_states=hidden_states, 1666 encoder_attention_mask=attention_mask, 1667 head_mask=decoder_head_mask, 1668 cross_attn_head_mask=cross_attn_head_mask, 1669 use_cache=use_cache, 1670 output_attentions=output_attentions, 1671 output_hidden_states=output_hidden_states, 1672 return_dict=return_dict, 1673 ) 1675 sequence_output = decoder_outputs[0] 1677 # Set device for model parallelism File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs) 1180 recording_scopes = False 1181 try: -> 1182 result = self.forward(*input, **kwargs) 1183 finally: 1184 if recording_scopes: File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:949, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 947 else: 948 err_msg_prefix = "decoder_" if self.is_decoder else "" --> 949 raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds") 951 if inputs_embeds is None: 952 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings" ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ``` I also tried following way: ``` from transformers import T5ForConditionalGeneration import torch tokens_tensor = torch.ones(1, 10, dtype=torch.long) model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True) model.eval() scripted_model = torch.jit.trace(model, (tokens_tensor, tokens_tensor)) torch.jit.save(traced_model, "flan-t5-large.pt") ``` But this giving me following error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [34], in <cell line: 7>() 5 model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True) 6 model.eval() ----> 7 scripted_model = torch.jit.trace(model, (tokens_tensor, tokens_tensor)) 8 torch.jit.save(traced_model, "flan-t5-large.pt") File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:759, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 756 return func 758 if isinstance(func, torch.nn.Module): --> 759 return trace_module( 760 func, 761 {"forward": example_inputs}, 762 None, 763 check_trace, 764 wrap_check_inputs(check_inputs), 765 check_tolerance, 766 strict, 767 _force_outplace, 768 _module_class, 769 ) 771 if ( 772 hasattr(func, "__self__") 773 and isinstance(func.__self__, torch.nn.Module) 774 and func.__name__ == "forward" 775 ): 776 return trace_module( 777 func.__self__, 778 {"forward": example_inputs}, (...) 
785 _module_class, 786 ) File ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:976, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 972 argument_names = get_callable_argument_names(func) 974 example_inputs = make_tuple(example_inputs) --> 976 module._c._create_method_from_trace( 977 method_name, 978 func, 979 example_inputs, 980 var_lookup_fn, 981 strict, 982 _force_outplace, 983 argument_names, 984 ) 985 check_trace_method = module._c._get_method(method_name) 987 # Check the trace against new traces created from user-specified inputs File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs) 1180 recording_scopes = False 1181 try: -> 1182 result = self.forward(*input, **kwargs) 1183 finally: 1184 if recording_scopes: File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:1660, in T5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1657 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device) 1659 # Decode -> 1660 decoder_outputs = self.decoder( 1661 input_ids=decoder_input_ids, 1662 attention_mask=decoder_attention_mask, 1663 inputs_embeds=decoder_inputs_embeds, 1664 past_key_values=past_key_values, 1665 encoder_hidden_states=hidden_states, 1666 encoder_attention_mask=attention_mask, 1667 head_mask=decoder_head_mask, 1668 cross_attn_head_mask=cross_attn_head_mask, 1669 use_cache=use_cache, 1670 output_attentions=output_attentions, 1671 output_hidden_states=output_hidden_states, 1672 return_dict=return_dict, 1673 ) 1675 sequence_output = decoder_outputs[0] 1677 # Set device for model parallelism File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs) 1190 # If we don't have any hooks, we want to skip the rest of the logic in 1191 # this function, and just call forward. 
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs) 1180 recording_scopes = False 1181 try: -> 1182 result = self.forward(*input, **kwargs) 1183 finally: 1184 if recording_scopes: File ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:949, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 947 else: 948 err_msg_prefix = "decoder_" if self.is_decoder else "" --> 949 raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds") 951 if inputs_embeds is None: 952 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings" ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ``` How should I trace t5 model? Can you provide any example? Thanks ### Who can help? @ArthurZucker @patric ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large", torchscript=True) model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", torchscript=True) tokenized_dict = tokenizer( ["please answer the following question: what is the boiling point of nitrogen",], ["-320.4F",], return_tensors="pt" ) input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask']) traced_model = torch.jit.trace(model, input_tuple) torch.jit.save(traced_model, "flan-t5-large.pt") ``` ### Expected behavior I should be able to trace the model in torchscript file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22282/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22281/comments
https://api.github.com/repos/huggingface/transformers/issues/22281/events
https://github.com/huggingface/transformers/pull/22281
1,632,692,121
PR_kwDOCUB6oc5Me0sX
22,281
Fix various imports
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? While building the v2 of the test fetcher, I discovered (I mean the util discovered) that some imports in the source code are wrong. This PR fixes all of them.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22281", "html_url": "https://github.com/huggingface/transformers/pull/22281", "diff_url": "https://github.com/huggingface/transformers/pull/22281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22281.patch", "merged_at": 1679582058000 }
https://api.github.com/repos/huggingface/transformers/issues/22280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22280/comments
https://api.github.com/repos/huggingface/transformers/issues/22280/events
https://github.com/huggingface/transformers/pull/22280
1,632,559,218
PR_kwDOCUB6oc5MeYFM
22,280
fix: Text splitting in the BasicTokenizer
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think this was still a fix we were interesting in having for users who don't have `ftfy` installed.", "> I think this was still a fix we were interesting in having for users who don't have `ftfy` installed.\r\n\r\nOh ok reopening in that case", "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your comments Arthur! Looking back at it to make your changes I realized two things:\r\n- we actually don't need to add the pattern splitting to the BasicTokenizer, we just need to *not* split on punctuation (just need to get out of the way of the byte pair encoding happening later)\r\n- the already written [test_check_encoding_slow_fast](https://github.com/huggingface/transformers/blob/b29fd6971d9cd6ba2a824628effe243f543b8f61/tests/models/clip/test_tokenization_clip.py#L78) test is a bit more comprehensive, so I used that and it passes locally now without ftfy, I'll call out the additional edits I had to make below for this\r\n\r\nWill wait for your review again before replicating it in the other BasicTokenizers. \r\n\r\nAlso, would you like me to add additional testing? I.e., should tokenizers that use BasicTokenizer each have a test that checks slow and fast match like the comprehensive check like the one mentioned above that's in CLIP, or should they have a simple truncated version of it", "Okay! I'll review again, can you make sure `make quality` and `make repo-consistency` both pass? \r\n", "Hey @ArthurZucker just checking in, anything else wanted here?", "Nope thanks for the ping, it is just that it is a lot of changes on a lot of models (a lot of old models too 😉 ). Getting to it! ", "Update: just keeping this PR to the punc splitting param, reasoning below. Lmk if you have other thoughts!\r\n\r\nWrote a [script](https://colab.research.google.com/drive/1tz4yZ_tHsGFvMlhQoihW4kCbCsxAGXco#scrollTo=bIaelmkHsWie) I ran locally to get a directional sense of how much of a difference each of these 3 changes (punctuation split, remove control chars, normalizing) was having to help choose how to address the above. It appears the punc split edit this PR was primarily addressing does help a fair bit, seems to increase compatibility with the CLIP fast tokenizer ~20% to near 100% for the languages tested. The control chars and normalizing edits don’t appear to make much of a difference at all (0% and ~0.1% improvement, respectively). Again this analysis was imperfect but I figure from this the cost-benefit for this PR suggests just keeping it to the punctuation splitting.\r\n\r\nAlso, I misspoke earlier saying control chars are in the CLIP vocab, they aren’t. Instead the discrepancy between the basic tokenizer and the fast one I was addressing was that the former strips them and the latter marks them as UNK. I don’t believe having this is likely to make much of a difference for inference or tokenization, as the script run suggests, since control chars are rare and UNK tokens don’t provide much info to the model.", "Looking at the output for `ar` it seems NEW + normalize is the best match isn't it ?\r\n\r\nI think this proves that `NFC` is indeed a good addition which was previously missing ! \r\n\r\nThanks a lot for this testing script, this adds a lot of value for future potential changes !", ">I think this proves that NFC is indeed a good addition which was previously missing !\r\n\r\nThanks and sounds good, I'll put it back in. 
I had removed it only because I couldn't test all languages due to the time it would take so I wasn't certain if there could be other issues with it, and the improvements were somewhat modest.", "Hey @Narsil just checking in, anything else wanted here?", "No we just need a core maintainer's approval.\r\n\r\nSorry I forgot about this PR.\r\n\r\n@sgugger for final review.", "Noticed the linked issue was marked stale, this PR probably will be soon too. Any other action wanted here? \r\n\r\nI think as the script shows this will significantly improve how well the Basic Tokenizer matches up with the fast one for CLIP, the lingering question was just around whether the NFC normalizing change was approved or whether that part should be removed. @Narsil @sgugger ", "Just waited for a clarification on whether @Narsil was fine with merging this or not. :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Since @Narsil is not commenting, I'm guessing we can merge :man_shrugging: ", "Oh sorry ! Missed this one it's OK !" ]
1,679
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Note: The tests are failing because of repo-consistency, but I intentionally haven't made the change across the repo until we confirm what change we want to make. Will remove [Don't merge] tag once done. This fixes an issue related to the BasicTokenizer. Initially looked to fix #22166 by updating the _run_split_on_punc method in the BasicTokenizer to split apostrophes without starting a new word. In the issue, it was noted that apostrophes weren't being split properly as `should've` was being converted to `should`, `'`, and `ve` instead of `should` and `'ve`. However, when adding testing it became apparent there are other cases where the BasicTokenizer is failing too, such as capturing '!!' as separate tokens with id 256 as opposed to one 748 token, which is in the [vocab](https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/vocab.json). To address these I modified `_run_split_on_punc` in the BasicTokenizer to also split on passed patterns and renamed it `_split_on_punc_or_pattern` and added tests for them. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22280/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22280", "html_url": "https://github.com/huggingface/transformers/pull/22280", "diff_url": "https://github.com/huggingface/transformers/pull/22280.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22280.patch", "merged_at": 1689088079000 }
https://api.github.com/repos/huggingface/transformers/issues/22279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22279/comments
https://api.github.com/repos/huggingface/transformers/issues/22279/events
https://github.com/huggingface/transformers/pull/22279
1,632,520,272
PR_kwDOCUB6oc5MePws
22,279
Move torch.compile() wrapping after DDP/FSDP wrapping to ensure correct graph breaks during training
{ "login": "ani300", "id": 919977, "node_id": "MDQ6VXNlcjkxOTk3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/919977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ani300", "html_url": "https://github.com/ani300", "followers_url": "https://api.github.com/users/ani300/followers", "following_url": "https://api.github.com/users/ani300/following{/other_user}", "gists_url": "https://api.github.com/users/ani300/gists{/gist_id}", "starred_url": "https://api.github.com/users/ani300/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ani300/subscriptions", "organizations_url": "https://api.github.com/users/ani300/orgs", "repos_url": "https://api.github.com/users/ani300/repos", "events_url": "https://api.github.com/users/ani300/events{/privacy}", "received_events_url": "https://api.github.com/users/ani300/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I don't have permissions to merge the PR, so I don't know what the process looks like from here", "I was just waiting for the tests to complete ;-)" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? This is a simple PR that moves the wrapper for torch.compile() after those for DDP and FSDP, given that the order is important for those two pieces to work together during training. Fixes #22215 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22279", "html_url": "https://github.com/huggingface/transformers/pull/22279", "diff_url": "https://github.com/huggingface/transformers/pull/22279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22279.patch", "merged_at": 1679334841000 }
https://api.github.com/repos/huggingface/transformers/issues/22278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22278/comments
https://api.github.com/repos/huggingface/transformers/issues/22278/events
https://github.com/huggingface/transformers/pull/22278
1,632,410,463
PR_kwDOCUB6oc5Md4C-
22,278
Example of pad_to_multiple_of for padding and truncation guide & docstring update
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for iterating!" ]
1,679
1,699
1,679
CONTRIBUTOR
null
This PR adds a minor update to the docs as previously it was not clear that `pad_to_multiple_of` has to be used with `padding=True`. Based on https://huggingface.slack.com/archives/C027NLU6CE9/p1679325764920509
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22278/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22278", "html_url": "https://github.com/huggingface/transformers/pull/22278", "diff_url": "https://github.com/huggingface/transformers/pull/22278.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22278.patch", "merged_at": 1679336335000 }
https://api.github.com/repos/huggingface/transformers/issues/22277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22277/comments
https://api.github.com/repos/huggingface/transformers/issues/22277/events
https://github.com/huggingface/transformers/issues/22277
1,632,410,242
I_kwDOCUB6oc5hTJqC
22,277
deploy whisper by passing last transcribed sentences to decoder's past_key values
{ "login": "hannan72", "id": 8229163, "node_id": "MDQ6VXNlcjgyMjkxNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hannan72", "html_url": "https://github.com/hannan72", "followers_url": "https://api.github.com/users/hannan72/followers", "following_url": "https://api.github.com/users/hannan72/following{/other_user}", "gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}", "starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hannan72/subscriptions", "organizations_url": "https://api.github.com/users/hannan72/orgs", "repos_url": "https://api.github.com/users/hannan72/repos", "events_url": "https://api.github.com/users/hannan72/events{/privacy}", "received_events_url": "https://api.github.com/users/hannan72/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @hannan72, thanks for raising an issue!\r\n\r\nQuestions like this should be asked in the [forum](https://discuss.huggingface.co/) as we try to reserve github issues for bugs and specific feature requests. ", "> Hi @hannan72, thanks for raising an issue!\r\n> \r\n> Questions like this should be asked in the [forum](https://discuss.huggingface.co/) as we try to reserve github issues for bugs and specific feature requests.\r\n\r\nThanks to inform me. I posted there. ", "Hey @hannan72 - do you have the link to the forum post? I can reply directly there :) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
I'm working on using the Whisper model for real-time live transcription. I have to run the model on audio chunks to get a sense of real-time transcription, i.e. every 1 second I feed it the audio of the last 5 seconds. For such a task, I have to merge the transcribed text, since the audio has 4 seconds of overlap with previous samples. Thus, the output transcription at each 1-second time-step has some words in common with previous ones. There are several solutions for merging such transcribed texts, such as using a language model or dynamic programming. But I have an idea to use the Whisper model itself for merging text, since it has a language model. I want to pass the previous transcriptions to its decoder's past key values and do the generation based on the text generated at previous time-steps. Do you have any idea how I could implement this? @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22277/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22276/comments
https://api.github.com/repos/huggingface/transformers/issues/22276/events
https://github.com/huggingface/transformers/issues/22276
1,632,404,504
I_kwDOCUB6oc5hTIQY
22,276
run_summarization requires a dataset_name or train_file or validation_file in all cases
{ "login": "coreyfournier", "id": 1676610, "node_id": "MDQ6VXNlcjE2NzY2MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1676610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/coreyfournier", "html_url": "https://github.com/coreyfournier", "followers_url": "https://api.github.com/users/coreyfournier/followers", "following_url": "https://api.github.com/users/coreyfournier/following{/other_user}", "gists_url": "https://api.github.com/users/coreyfournier/gists{/gist_id}", "starred_url": "https://api.github.com/users/coreyfournier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/coreyfournier/subscriptions", "organizations_url": "https://api.github.com/users/coreyfournier/orgs", "repos_url": "https://api.github.com/users/coreyfournier/repos", "events_url": "https://api.github.com/users/coreyfournier/events{/privacy}", "received_events_url": "https://api.github.com/users/coreyfournier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed. Would you like to open a PR to fix this?" ]
1,679
1,679
1,679
NONE
null
### System Info In the latest version of run_summarization on line 264 it requires dataset_name or train_file or validation_file. I was trying to perform a "do_predict" with the parameter "test_file" set, but the validation will not let me proceed. Looking at the code, do_predict only uses the test dataset anyway, so this appears to be a bug. See the specific file https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py ### Who can help? @sgugger Unable to "do_predict" due to validation not checking for "train_file". As a workaround I set "validation_file". ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction in a Colab notebook: %%shell /content/transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path /content/gs/models/Clinical-T5-Large/ \ --do_predict \ --test_file "/content/gs/models/Clinical-T5-Large/validation.csv" \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /content/gs/models/Clinical-T5-Large \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate \ --max_source_length=1024 You get the error "Need either a dataset name or a training/validation file." ### Expected behavior Expected normal prediction output like below: Running tokenizer on prediction dataset: 0% 0/5040 [00:00<?, ? examples/s]03/20/2023 15:40:20 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/csv/default-73bac40c02a24f34/0.0.0/6b34fb8fcf56f7c8ba51dc895bfa2bfbe43546f190a60fcf74bb5e8afdcc2317/cache-1e88c7213311bd88.arrow Downloading builder script: 100% 6.27k/6.27k [00:00<00:00, 3.59MB/s] 03/20/2023 15:40:52 - INFO - __main__ - *** Predict *** [INFO|trainer.py:3066] 2023-03-20 15:40:52,611 >> ***** Running Prediction ***** [INFO|trainer.py:3068] 2023-03-20 15:40:52,611 >> Num examples = 5040 [INFO|trainer.py:3071] 2023-03-20 15:40:52,611 >> Batch size = 4 [WARNING|logging.py:280] 2023-03-20 15:40:52,624 >> You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. [INFO|configuration_utils.py:575] 2023-03-20 15:40:52,636 >> Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.28.0.dev0" }
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22276/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22275/comments
https://api.github.com/repos/huggingface/transformers/issues/22275/events
https://github.com/huggingface/transformers/pull/22275
1,632,356,205
PR_kwDOCUB6oc5MdsYz
22,275
[New GitHub Action Job] Automatically create/update tiny models
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Just a minor update, including the revision number. Will merge once the CI being green. Thank you for the reviews 🚀 " ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? [New GitHub Action Job] Automatically create/update tiny models ## Goal **A scheduled job that create/update tiny models periodically** - so **we will have tiny versions for newly added models in `transformers` as soon as possible** ### Some properties - For a new model type: The Hub repo. is created - For a new framework implementation of an existing model type: A Hub repo. PR is opened - We keep track of the commit hash information for tiny models on the Hub - The pipeline tests will use the commit hash information stored in `tiny_model_summary.json` file - To avoid sudden CI failures due to new commits - The CI job will produce a file `updated_tiny_model_summary.json` - We should open a PR in `transformers` to update `tiny_model_summary.json` - If all pipeline tests pass, we are good to merge and use the new/updated tiny models on the Hub.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22275/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22275", "html_url": "https://github.com/huggingface/transformers/pull/22275", "diff_url": "https://github.com/huggingface/transformers/pull/22275.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22275.patch", "merged_at": 1679595258000 }
https://api.github.com/repos/huggingface/transformers/issues/22274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22274/comments
https://api.github.com/repos/huggingface/transformers/issues/22274/events
https://github.com/huggingface/transformers/pull/22274
1,632,318,760
PR_kwDOCUB6oc5MdkNY
22,274
Fix doc links
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? Resolves issue with some dead links in the documentation resulting from relative paths. Equivalent links were searched for in the translated docs but were not found. Hence only changes in files in `docs/source/en` Fixes # 21596 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22274", "html_url": "https://github.com/huggingface/transformers/pull/22274", "diff_url": "https://github.com/huggingface/transformers/pull/22274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22274.patch", "merged_at": 1679332052000 }
https://api.github.com/repos/huggingface/transformers/issues/22273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22273/comments
https://api.github.com/repos/huggingface/transformers/issues/22273/events
https://github.com/huggingface/transformers/pull/22273
1,632,290,110
PR_kwDOCUB6oc5Mdd5r
22,273
Proper map location for optimizer load
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? I have been thinking more about #22159 and now remember why it might be better to load the optimizer state on the device directly: in multi-GPU training, the optimizer state is loaded in each process, so that would load it num_processes times on the CPU and risk a CPU RAM OOM. Therefore, this adjusts #22159 to load the optimizer state: - on CPU when there is only one process - on each device directly when there are multiple.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22273/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22273", "html_url": "https://github.com/huggingface/transformers/pull/22273", "diff_url": "https://github.com/huggingface/transformers/pull/22273.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22273.patch", "merged_at": 1679326247000 }
https://api.github.com/repos/huggingface/transformers/issues/22272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22272/comments
https://api.github.com/repos/huggingface/transformers/issues/22272/events
https://github.com/huggingface/transformers/pull/22272
1,632,285,094
PR_kwDOCUB6oc5MdczG
22,272
Fixed gradient checkpoint bug for TimeSeriesTransformer
{ "login": "mollerup23", "id": 69806327, "node_id": "MDQ6VXNlcjY5ODA2MzI3", "avatar_url": "https://avatars.githubusercontent.com/u/69806327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mollerup23", "html_url": "https://github.com/mollerup23", "followers_url": "https://api.github.com/users/mollerup23/followers", "following_url": "https://api.github.com/users/mollerup23/following{/other_user}", "gists_url": "https://api.github.com/users/mollerup23/gists{/gist_id}", "starred_url": "https://api.github.com/users/mollerup23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mollerup23/subscriptions", "organizations_url": "https://api.github.com/users/mollerup23/orgs", "repos_url": "https://api.github.com/users/mollerup23/repos", "events_url": "https://api.github.com/users/mollerup23/events{/privacy}", "received_events_url": "https://api.github.com/users/mollerup23/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Strange, I will see if I can install Python for my environment then. Thank you for all your help, and thanks for running make for me! " ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Moved the gradient checkpointing clause above the decoder layer implementation; this should fix the bug described in the linked issue. Fixes #21737 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [GitHub Issue](https://github.com/huggingface/transformers/pull/21733) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @younesbelkada
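For context, a schematic version of the corrected placement (simplified; real decoder layers take more arguments and return tuples): ```python from torch.utils.checkpoint import checkpoint def run_decoder(layers, hidden_states, attention_mask, gradient_checkpointing: bool, training: bool): for layer in layers: if gradient_checkpointing and training: # The checkpointed call replaces the regular forward pass; # it must not sit before or after it. hidden_states = checkpoint(layer, hidden_states, attention_mask) else: hidden_states = layer(hidden_states, attention_mask) return hidden_states ```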
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22272/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22272", "html_url": "https://github.com/huggingface/transformers/pull/22272", "diff_url": "https://github.com/huggingface/transformers/pull/22272.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22272.patch", "merged_at": 1679575513000 }
https://api.github.com/repos/huggingface/transformers/issues/22271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22271/comments
https://api.github.com/repos/huggingface/transformers/issues/22271/events
https://github.com/huggingface/transformers/pull/22271
1,632,276,455
PR_kwDOCUB6oc5Mda91
22,271
Fix balanced and auto device_map
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? In #22095, some of the arguments passed to `infer_auto_device_map` were grouped into kwargs. The problem is that one of them (`max_memory`) is no longer forwarded after being recomputed (when device_map is `"auto"`, `"balanced"` or `"balanced_low_0"`). This PR fixes that. Note: this is a regression, so this will need to go in a patch.
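As a hedged illustration of the call path involved (the `model` and `no_split_module_classes` values below are placeholders): ```python from accelerate import infer_auto_device_map from accelerate.utils import get_balanced_memory # For "balanced"-style maps, max_memory is recomputed first; the updated # value must then be forwarded to infer_auto_device_map. Dropping the # updated value is the regression this PR fixes. max_memory = get_balanced_memory(model, no_split_module_classes=["Block"]) device_map = infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=["Block"]) ```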
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22271/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22271", "html_url": "https://github.com/huggingface/transformers/pull/22271", "diff_url": "https://github.com/huggingface/transformers/pull/22271.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22271.patch", "merged_at": 1679325858000 }
https://api.github.com/repos/huggingface/transformers/issues/22270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22270/comments
https://api.github.com/repos/huggingface/transformers/issues/22270/events
https://github.com/huggingface/transformers/pull/22270
1,632,181,352
PR_kwDOCUB6oc5MdGrP
22,270
Fix the gradient checkpointing bug of the llama model
{ "login": "yqy2001", "id": 55196500, "node_id": "MDQ6VXNlcjU1MTk2NTAw", "avatar_url": "https://avatars.githubusercontent.com/u/55196500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yqy2001", "html_url": "https://github.com/yqy2001", "followers_url": "https://api.github.com/users/yqy2001/followers", "following_url": "https://api.github.com/users/yqy2001/following{/other_user}", "gists_url": "https://api.github.com/users/yqy2001/gists{/gist_id}", "starred_url": "https://api.github.com/users/yqy2001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yqy2001/subscriptions", "organizations_url": "https://api.github.com/users/yqy2001/orgs", "repos_url": "https://api.github.com/users/yqy2001/repos", "events_url": "https://api.github.com/users/yqy2001/events{/privacy}", "received_events_url": "https://api.github.com/users/yqy2001/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Gradient checkpointing for the LLaMA model does not work. This PR fixes it following the [GPT-2 model's gradient checkpointing implementation](https://github.com/huggingface/transformers/blob/cf0af9a31beb84e8feec77af51f72d063ba905aa/src/transformers/models/gpt2/modeling_gpt2.py#L482). ### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 2.1.0.dev20230317+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Reproducing the Bug The bug is tested on 4 A100 40GB GPUs. Please first clone Stanford Alpaca's repo, which finetunes the LLaMA model: ```shell git clone git@github.com:tatsu-lab/stanford_alpaca.git cd stanford_alpaca pip install -r requirements.txt ``` A CUDA out-of-memory error is raised if we run the training script with `per_device_train_batch_size=1`: ```sh torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \ --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \ --data_path ./alpaca_data.json \ --bf16 True \ --output_dir <your_output_dir> \ --num_train_epochs 3 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 8 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 2000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \ --tf32 True --gradient_checkpointing ``` ### Test after this PR: After this PR, we can successfully train the LLaMA-7B model on 4 40GB GPUs with `per_device_train_batch_size=8`, using gradient checkpointing: ```sh torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \ --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \ --data_path ./alpaca_data.json \ --bf16 True \ --output_dir <your_output_dir> \ --num_train_epochs 3 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 8 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 2000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \ --tf32 True --gradient_checkpointing ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? 
@ArthurZucker @zphang @sgugger
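For completeness, a minimal usage sketch of the feature this PR repairs (the checkpoint path is a placeholder): ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("path/to/hf-converted-llama-7b") model.gradient_checkpointing_enable() # with the fix, this actually reduces activation memory model.train() ```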
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22270/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22270/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22270", "html_url": "https://github.com/huggingface/transformers/pull/22270", "diff_url": "https://github.com/huggingface/transformers/pull/22270.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22270.patch", "merged_at": 1679322410000 }
https://api.github.com/repos/huggingface/transformers/issues/22269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22269/comments
https://api.github.com/repos/huggingface/transformers/issues/22269/events
https://github.com/huggingface/transformers/issues/22269
1,632,138,314
I_kwDOCUB6oc5hSHRK
22,269
Batch elements interfere with each other with int8
{ "login": "leonweber", "id": 6436442, "node_id": "MDQ6VXNlcjY0MzY0NDI=", "avatar_url": "https://avatars.githubusercontent.com/u/6436442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leonweber", "html_url": "https://github.com/leonweber", "followers_url": "https://api.github.com/users/leonweber/followers", "following_url": "https://api.github.com/users/leonweber/following{/other_user}", "gists_url": "https://api.github.com/users/leonweber/gists{/gist_id}", "starred_url": "https://api.github.com/users/leonweber/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leonweber/subscriptions", "organizations_url": "https://api.github.com/users/leonweber/orgs", "repos_url": "https://api.github.com/users/leonweber/repos", "events_url": "https://api.github.com/users/leonweber/events{/privacy}", "received_events_url": "https://api.github.com/users/leonweber/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,687
1,687
NONE
null
### System Info - `transformers` version: [cf0af9a31beb84e8feec77af51f72d063ba905aa](https://github.com/huggingface/transformers/commit/cf0af9a31beb84e8feec77af51f72d063ba905aa) - `bitsandbytes` version: 0.37.1 - Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Using GPU in script?: yes: A100 in MIG mode - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @muell ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The outputs of a model for a given batch element depend on the other elements in the batch when using int8 inference. See the minimal example below. I'm not sure whether this is expected. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", load_in_8bit=True, device_map="auto") tokenizer = transformers.AutoTokenizer.from_pretrained("bigscience/bloom-560m") out1 = model(**tokenizer(["A"], return_tensors="pt").to("cuda")) out2 = model(**tokenizer(["A", "B"], return_tensors="pt").to("cuda")) print(out1['logits'][0][0]) print(out2['logits'][0][0]) print(out1['logits'][0][0] == out2['logits'][0][0]) > tensor([345.0000, 348.2500, 354.2500, ..., 206.2500, 206.2500, 206.2500], device='cuda:0', dtype=torch.float16, grad_fn=<SelectBackward0>) > tensor([344.7500, 347.7500, 353.7500, ..., 206.0000, 206.0000, 206.0000], device='cuda:0', dtype=torch.float16, grad_fn=<SelectBackward0>) > tensor([False, False, False, ..., False, False, False], device='cuda:0') ``` ### Expected behavior The computation should be independent of the other batch elements, as for fp32 (see below): ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", load_in_8bit=False, device_map="auto").to("cuda") tokenizer = transformers.AutoTokenizer.from_pretrained("bigscience/bloom-560m") out1 = model(**tokenizer(["A"], return_tensors="pt").to("cuda")) out2 = model(**tokenizer(["A", "B"], return_tensors="pt").to("cuda")) print(out1['logits'][0][0]) print(out2['logits'][0][0]) print(out1['logits'][0][0] == out2['logits'][0][0]) > tensor([343.6242, 346.4580, 352.7924, ..., 205.3806, 205.3800, 205.3746], grad_fn=<SelectBackward0>) > tensor([343.6242, 346.4580, 352.7924, ..., 205.3806, 205.3800, 205.3746], grad_fn=<SelectBackward0>) > tensor([ True, True, True, ..., True, True, False]) ``` *Edit 2023/03/22 Corrected the code for FP32.*
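Since exact logit equality is a very strict bar for int8 kernels, one hedged way to quantify the interference (the tolerance is illustrative, not a recommended value) is: ```python import torch a = out1["logits"][0][0].float() b = out2["logits"][0][0].float() # Report both a tolerance check and the worst-case deviation. print(torch.allclose(a, b, atol=1.0), (a - b).abs().max()) ```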
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22269/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22268/comments
https://api.github.com/repos/huggingface/transformers/issues/22268/events
https://github.com/huggingface/transformers/pull/22268
1,632,113,123
PR_kwDOCUB6oc5Mc3yM
22,268
More doctests
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Currently 46 failed tests - error message to be sent to Slack is too long and failed the report sending.\r\nSo I remove them for the list to be tested. Let's deal with them and add them back step by step, @amyeroberts \r\n\r\nFor reference, here [the job run page](https://github.com/huggingface/transformers/actions/runs/4468541471/jobs/7849422399)\r\n\r\n```bash\r\nsrc/transformers/models/auto/tokenization_auto.py\r\nsrc/transformers/models/bart/tokenization_bart.py\r\nsrc/transformers/models/bart/tokenization_bart_fast.py\r\nsrc/transformers/models/bertweet/tokenization_bertweet.py\r\nsrc/transformers/models/blenderbot/tokenization_blenderbot.py\r\nsrc/transformers/models/blenderbot/tokenization_blenderbot_fast.py\r\nsrc/transformers/models/bloom/tokenization_bloom_fast.py\r\nsrc/transformers/models/codegen/tokenization_codegen.py\r\nsrc/transformers/models/codegen/tokenization_codegen_fast.py\r\nsrc/transformers/models/deberta/tokenization_deberta.py\r\nsrc/transformers/models/deberta/tokenization_deberta_fast.py\r\nsrc/transformers/models/dpr/tokenization_dpr.py\r\nsrc/transformers/models/dpr/tokenization_dpr_fast.py\r\nsrc/transformers/models/gpt2/tokenization_gpt2.py\r\nsrc/transformers/models/gpt2/tokenization_gpt2_fast.py\r\nsrc/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py\r\nsrc/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py\r\nsrc/transformers/models/gpt_neox/tokenization_gpt_neox_fast.py\r\nsrc/transformers/models/gpt_sw3/tokenization_gpt_sw3.py\r\nsrc/transformers/models/led/tokenization_led.py\r\nsrc/transformers/models/led/tokenization_led_fast.py\r\nsrc/transformers/models/longformer/tokenization_longformer.py\r\nsrc/transformers/models/longformer/tokenization_longformer_fast.py\r\nsrc/transformers/models/luke/tokenization_luke.py\r\nsrc/transformers/models/m2m_100/tokenization_m2m_100.py\r\nsrc/transformers/models/marian/tokenization_marian.py\r\nsrc/transformers/models/mvp/tokenization_mvp.py\r\nsrc/transformers/models/mvp/tokenization_mvp_fast.py\r\nsrc/transformers/models/roberta/tokenization_roberta.py\r\nsrc/transformers/models/roberta/tokenization_roberta_fast.py\r\nsrc/transformers/models/roformer/tokenization_roformer.py\r\nsrc/transformers/models/roformer/tokenization_roformer_fast.py\r\nsrc/transformers/models/transfo_xl/tokenization_transfo_xl.py\r\nsrc/transformers/models/transfo_xl/tokenization_transfo_xl.py\r\nsrc/transformers/models/auto/image_processing_auto.py\r\nsrc/transformers/models/auto/feature_extraction_auto.py\r\nsrc/transformers/models/markuplm/feature_extraction_markuplm.py\r\nsrc/transformers/models/auto/processing_auto.py\r\n\r\n```" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? Adds all files (tokenization / image processor / feature extractor / processor) to the doctests. Currently the list is not sorted; it might be better to add a check (in `utils/check_doctest_list.py`) that enforces sorted order.
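A minimal sketch of such a sortedness check (the file path is taken from the description above; the eventual implementation may differ): ```python # Read the doctest file list and fail if it is not alphabetically sorted. with open("utils/documentation_tests.txt") as f: files = [line.strip() for line in f if line.strip()] if files != sorted(files): raise ValueError("utils/documentation_tests.txt is not sorted alphabetically.") ```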
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22268/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22268", "html_url": "https://github.com/huggingface/transformers/pull/22268", "diff_url": "https://github.com/huggingface/transformers/pull/22268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22268.patch", "merged_at": 1679401650000 }
https://api.github.com/repos/huggingface/transformers/issues/22267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22267/comments
https://api.github.com/repos/huggingface/transformers/issues/22267/events
https://github.com/huggingface/transformers/pull/22267
1,632,096,616
PR_kwDOCUB6oc5Mc0OC
22,267
Fix error in mixed precision training of `TFCvtModel`
{ "login": "gcuder", "id": 60609608, "node_id": "MDQ6VXNlcjYwNjA5NjA4", "avatar_url": "https://avatars.githubusercontent.com/u/60609608?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gcuder", "html_url": "https://github.com/gcuder", "followers_url": "https://api.github.com/users/gcuder/followers", "following_url": "https://api.github.com/users/gcuder/following{/other_user}", "gists_url": "https://api.github.com/users/gcuder/gists{/gist_id}", "starred_url": "https://api.github.com/users/gcuder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gcuder/subscriptions", "organizations_url": "https://api.github.com/users/gcuder/orgs", "repos_url": "https://api.github.com/users/gcuder/repos", "events_url": "https://api.github.com/users/gcuder/events{/privacy}", "received_events_url": "https://api.github.com/users/gcuder/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? This PR fixes an issue where `TFCvtModel` could not be trained with `keras.fit` under `mixed-precision`. The issue was in this [line](https://github.com/huggingface/transformers/blob/c4bf6f38bda1de3798095515875a119298bf0611/src/transformers/models/cvt/modeling_tf_cvt.py#L96), where a random tensor was initialized without specifying the correct `dtype`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts
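Schematically, the fix amounts to giving the random tensor the inputs' dtype (a simplified drop-path sketch, not the exact modeling code): ```python import tensorflow as tf def drop_path(x: tf.Tensor, keep_prob: float, training: bool) -> tf.Tensor: if not training: return x shape = [tf.shape(x)[0]] + [1] * (len(x.shape) - 1) # Under mixed precision x may be float16; the random tensor must match. random_tensor = keep_prob + tf.random.uniform(shape, 0, 1, dtype=x.dtype) return (x / keep_prob) * tf.floor(random_tensor) ```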
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22267", "html_url": "https://github.com/huggingface/transformers/pull/22267", "diff_url": "https://github.com/huggingface/transformers/pull/22267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22267.patch", "merged_at": 1679400777000 }
https://api.github.com/repos/huggingface/transformers/issues/22266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22266/comments
https://api.github.com/repos/huggingface/transformers/issues/22266/events
https://github.com/huggingface/transformers/pull/22266
1,631,881,246
PR_kwDOCUB6oc5McFzu
22,266
Update training_args.py -- a nightly install is not required anymore for torch.compile
{ "login": "pminervini", "id": 227357, "node_id": "MDQ6VXNlcjIyNzM1Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pminervini", "html_url": "https://github.com/pminervini", "followers_url": "https://api.github.com/users/pminervini/followers", "following_url": "https://api.github.com/users/pminervini/following{/other_user}", "gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}", "starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pminervini/subscriptions", "organizations_url": "https://api.github.com/users/pminervini/orgs", "repos_url": "https://api.github.com/users/pminervini/repos", "events_url": "https://api.github.com/users/pminervini/events{/privacy}", "received_events_url": "https://api.github.com/users/pminervini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
A nightly install is not required anymore for `torch.compile`. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
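For reference, with a stable PyTorch 2.0 install this now works out of the box (toy model used purely for illustration): ```python import torch import torch.nn as nn model = nn.Linear(8, 8) compiled = torch.compile(model) # no nightly build required anymore out = compiled(torch.randn(2, 8)) ```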
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22266/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22266", "html_url": "https://github.com/huggingface/transformers/pull/22266", "diff_url": "https://github.com/huggingface/transformers/pull/22266.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22266.patch", "merged_at": 1679313606000 }
https://api.github.com/repos/huggingface/transformers/issues/22265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22265/comments
https://api.github.com/repos/huggingface/transformers/issues/22265/events
https://github.com/huggingface/transformers/pull/22265
1,631,859,860
PR_kwDOCUB6oc5McBC0
22,265
Enable traced model for text-generation task
{ "login": "jiqing-feng", "id": 107918818, "node_id": "U_kgDOBm614g", "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiqing-feng", "html_url": "https://github.com/jiqing-feng", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger @gante", "@gante Thanks for your attention. Would you please help me to merge it? Thanks! I think the demand for `jit trace` will grow, and I hope we can keep on working on it so it will be adapted to all models and all tasks in the future." ]
1,679
1,679
1,679
CONTRIBUTOR
null
@gante Hi, Gante. Refer to: https://github.com/huggingface/transformers/pull/22072 Thanks for your advice. This PR only changes the example; would you please help review it? Thanks!
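A minimal sketch of the kind of tracing the example covers (the checkpoint name is a stand-in; the merged example may differ in detail): ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2", torchscript=True) model.eval() inputs = tokenizer("Hello, my dog is", return_tensors="pt") # Trace once with representative inputs, then reuse the traced module. traced = torch.jit.trace(model, (inputs["input_ids"],)) ```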
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22265/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22265", "html_url": "https://github.com/huggingface/transformers/pull/22265", "diff_url": "https://github.com/huggingface/transformers/pull/22265.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22265.patch", "merged_at": 1679480367000 }
https://api.github.com/repos/huggingface/transformers/issues/22264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22264/comments
https://api.github.com/repos/huggingface/transformers/issues/22264/events
https://github.com/huggingface/transformers/pull/22264
1,631,812,796
PR_kwDOCUB6oc5Mb20d
22,264
Adding Llama FastTokenizer support.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the ping. We'll need the actual fast tokenizer file to merge this though :sweat_smile: ", "True, I uncovered more issues around multiple space handling, I'm nailing down on the pre_tokenizer combo for it.", "More troublesome than anticipated.\r\n\r\nWhen encoding `\" Hello\"` from a pure BPE perspectivve, `tokenizers` does `[259, 10994]` (`\" \"` + `Hello`)\r\nwhereas spm does `[29871, 15043]` (`\" \"` + `\" Hello\"`) which from a pure ids & merges perspectives seems worse.\r\n\r\nI though of fixing that using a `pre_tokenizer` that splits words onto their own.\r\n\r\nHowever on encoding `\" ird\"` this time `spm` DOES do `[259, 1823]`.\r\nSeems this is where the score comes into play.", "What is the status of this PR?\r\n", "For the doc builder, we're going to need an update on the docker image so that it pulls 0.13.3 to generate the doc.", "Hi @Narsil ,\r\n\r\nthe `warning.warn` to `raise RuntimeError` change in `src/transformers/convert_slow_tokenizer.py` breaks a lot of things: I wanted to fine-tune a mT5 model and it is now no longer possible (I'm using the PyTorch example from [documentation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering#fine-tuning-t5-on-squad20).)\r\n\r\nHow is it possible to rubustify it -> also DeBERTa v3 has byte fallback vocab (but I didn't test it yet) :thinking: ", "> Hi @Narsil ,\r\n> \r\n> the `warning.warn` to `raise RuntimeError` change in `src/transformers/convert_slow_tokenizer.py` breaks a lot of things: I wanted to fine-tune a mT5 model and it is now no longer possible (I'm using the PyTorch example from \r\n\r\n How is it possible to rubustify it -> also DeBERTa v3 has byte fallback vocab (but I didn't test it yet) thinking\r\n\r\n\r\nFirst of all we could revert by all means, but since now `tokenizers` has `ByteFallback` we could make it 1-1 for those, that was the idea behind upping to an error.\r\n\r\nIt's a relatively sizeable issue if there are models deployed out there which have inconsistent behavior regarding this though (slow using byte fallback, fast not using it). I'm not sure why it was a warning in the first place.\r\n\r\n\r\n> DeBERTa v3 \r\n\r\nLet's have a look too.\r\n\r\nAs a user, what's your opinion here, should we just fix the various conversion scripts, or would you rather keep the warning with the previous pitfalls ?", "Both are using Unigram with ByteFallback which isn't supported yet. ", "@Narsil After this commit `AutoTokenizer.from_pretrained` is extremely slow, spending time in `convert_slow_tokenizer.py` at every call. Is it expected? Or I am doing something wrong?", "Which repo are you using? We need to create the fast files on the repo.\n\n\nConverting from slow is super slow and there's nothing to be done about it (tokenizers needs to recreate a structure by doing O(n2) search over the vocab because spm does not store this information. ", "@ArthurZucker ", "I see thanks!" ]
1,679
1,680
1,680
CONTRIBUTOR
null
- Requires the version from https://github.com/huggingface/tokenizers/pull/1183 - Only supports byte_fallback for llama, raises otherwise (safety net). - Lots of open questions around special tokens How to test: ```python #! pip install -e git+https://github.com/huggingface/tokenizers@byte_fallback#egg=tokenizers from transformers.convert_slow_tokenizer import convert_slow_tokenizer from transformers import AutoTokenizer from tokenizers import Tokenizer tokenizer = AutoTokenizer.from_pretrained("huggingface/llama-7b") if False: new_tokenizer = Tokenizer.from_file("tok.json") else: new_tokenizer = convert_slow_tokenizer(tokenizer) new_tokenizer.save("tok.json") strings = [ "This is a test", "生活的真谛是", "生活的真谛是[MASK]。", # XXX: This one is problematic because of special tokens # "<s> Something something", ] for string in strings: encoded = tokenizer(string)["input_ids"] encoded2 = new_tokenizer.encode(string).ids assert encoded == encoded2, f"{encoded} != {encoded2}" decoded = tokenizer.decode(encoded) decoded2 = new_tokenizer.decode(encoded2) assert decoded.strip() == decoded2, f"{repr(decoded)} != {repr(decoded2)}" ``` # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22264/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22264/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22264", "html_url": "https://github.com/huggingface/transformers/pull/22264", "diff_url": "https://github.com/huggingface/transformers/pull/22264.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22264.patch", "merged_at": 1680767583000 }
https://api.github.com/repos/huggingface/transformers/issues/22263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22263/comments
https://api.github.com/repos/huggingface/transformers/issues/22263/events
https://github.com/huggingface/transformers/issues/22263
1,631,755,468
I_kwDOCUB6oc5hQpzM
22,263
AdamW implementation
{ "login": "StrangeTcy", "id": 2532099, "node_id": "MDQ6VXNlcjI1MzIwOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2532099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StrangeTcy", "html_url": "https://github.com/StrangeTcy", "followers_url": "https://api.github.com/users/StrangeTcy/followers", "following_url": "https://api.github.com/users/StrangeTcy/following{/other_user}", "gists_url": "https://api.github.com/users/StrangeTcy/gists{/gist_id}", "starred_url": "https://api.github.com/users/StrangeTcy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StrangeTcy/subscriptions", "organizations_url": "https://api.github.com/users/StrangeTcy/orgs", "repos_url": "https://api.github.com/users/StrangeTcy/repos", "events_url": "https://api.github.com/users/StrangeTcy/events{/privacy}", "received_events_url": "https://api.github.com/users/StrangeTcy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @StrangeTcy, thanks for raising this issue! \r\n\r\nPlease don't be worried :) The warning is there so that there isn't any unexpected changes for users when `AdamW` is eventually removed from the library and is part of the deprecation cycle. We advise that the torch implementation is used instead of the one in the transformers library, and making this switch now in the relevant places in your code will ensure that nothing breaks when the time comes. Until then, the `AdamW` class will remain in transformers. \r\n\r\nOne thing this warning is missing is specific information about when i.e. which version, this will happen and should be added! ", "Great, thanks. Looking forward to the next versions" ]
1,679
1,679
1,679
NONE
null
### Feature request I'm getting the warning from optimization (https://github.com/huggingface/transformers/blob/main/src/transformers/optimization.py, lines 391 and on): ``` "This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch" " implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this" " warning" ``` How worried should I really be? Are there plans to use the torch AdamW version and eventually discard your own implementation? ### Motivation Presumably the torch.optim.AdamW implementation is better, and using it would make the whole library a bit leaner. ### Your contribution Not sure
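For anyone switching early, the drop-in replacement suggested by the warning looks like this (hyperparameters are illustrative and `model` is a placeholder): ```python from torch.optim import AdamW optimizer = AdamW(model.parameters(), lr=5e-5, weight_decay=0.01) ```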
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22263/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22262/comments
https://api.github.com/repos/huggingface/transformers/issues/22262/events
https://github.com/huggingface/transformers/pull/22262
1,631,623,035
PR_kwDOCUB6oc5MbOIs
22,262
[WIP] Add H3
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22262). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @NielsRogge ✋,\r\n\r\nNice work! Do you know if this model would be integrated in the near future inside HuggingFace? Was this PR staled given complexities with custom ops?\r\n\r\nCould you give an overview of the missing steps needed in this PR to have a functional H3 model integrated into HF? 🙏 \r\nThanks for your work! 🙌 \r\n", "Hi @gaceladri the PR is actually totally ready, the only thing that needs to done is perhaps make [this function](https://github.com/NielsRogge/transformers/blob/5199d3d3a08264f1b17442504559c28304ce619c/src/transformers/models/h3/modeling_h3.py#L139) more like the other Attention classes in the library (like [this class](https://github.com/huggingface/transformers/blob/17a55534f5e5df10ac4804d4270bf6b8cc24998d/src/transformers/models/llama/modeling_llama.py#L158)).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? This PR adds the H3 model by Hazy Research (Stanford University). I've removed the Flash Attention dependency, and main author @DanFu09 has removed the einops dependency (🙏 ). I've kept an optional soft dependency on `pykeops`, to allow for speedups. The model runs fine if the user doesn't have this library installed.
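The optional-dependency pattern described here can be sketched generically (the function and flag names are illustrative, not the exact helpers in the PR): ```python try: import pykeops # noqa: F401 PYKEOPS_AVAILABLE = True except ImportError: PYKEOPS_AVAILABLE = False def kernel(x): # A pykeops-backed fast path could be dispatched here when # PYKEOPS_AVAILABLE is True; the pure-PyTorch fallback always works. return x @ x.transpose(-1, -2) ```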
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22262/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22262/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22262", "html_url": "https://github.com/huggingface/transformers/pull/22262", "diff_url": "https://github.com/huggingface/transformers/pull/22262.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22262.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22261/comments
https://api.github.com/repos/huggingface/transformers/issues/22261/events
https://github.com/huggingface/transformers/issues/22261
1,631,565,391
I_kwDOCUB6oc5hP7ZP
22,261
H
{ "login": "lil-fahad", "id": 73719703, "node_id": "MDQ6VXNlcjczNzE5NzAz", "avatar_url": "https://avatars.githubusercontent.com/u/73719703?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lil-fahad", "html_url": "https://github.com/lil-fahad", "followers_url": "https://api.github.com/users/lil-fahad/followers", "following_url": "https://api.github.com/users/lil-fahad/following{/other_user}", "gists_url": "https://api.github.com/users/lil-fahad/gists{/gist_id}", "starred_url": "https://api.github.com/users/lil-fahad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lil-fahad/subscriptions", "organizations_url": "https://api.github.com/users/lil-fahad/orgs", "repos_url": "https://api.github.com/users/lil-fahad/repos", "events_url": "https://api.github.com/users/lil-fahad/events{/privacy}", "received_events_url": "https://api.github.com/users/lil-fahad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lil-fahad, thanks for raising an issue. \r\n\r\nSo that we can best help you, could you fill in the issue template including information such as the environment (run `transformers-cli env` in the terminal), the issue being encountered, the expected behaviour and a full traceback please? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("icyGS/StockPredictor") model = AutoModelForSequenceClassification.from_pretrained("icyGS/StockPredictor")
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22261/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22260/comments
https://api.github.com/repos/huggingface/transformers/issues/22260/events
https://github.com/huggingface/transformers/issues/22260
1,631,359,151
I_kwDOCUB6oc5hPJCv
22,260
How to load local code for model with `trust_remote_code=True`?
{ "login": "LZY-the-boys", "id": 72137647, "node_id": "MDQ6VXNlcjcyMTM3NjQ3", "avatar_url": "https://avatars.githubusercontent.com/u/72137647?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LZY-the-boys", "html_url": "https://github.com/LZY-the-boys", "followers_url": "https://api.github.com/users/LZY-the-boys/followers", "following_url": "https://api.github.com/users/LZY-the-boys/following{/other_user}", "gists_url": "https://api.github.com/users/LZY-the-boys/gists{/gist_id}", "starred_url": "https://api.github.com/users/LZY-the-boys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LZY-the-boys/subscriptions", "organizations_url": "https://api.github.com/users/LZY-the-boys/orgs", "repos_url": "https://api.github.com/users/LZY-the-boys/repos", "events_url": "https://api.github.com/users/LZY-the-boys/events{/privacy}", "received_events_url": "https://api.github.com/users/LZY-the-boys/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @LZY-the-boys, thanks for raising this issue, \r\n\r\nIf I've understood correctly, the question being asked is how to load in a customized version of the model on the ['THUDM/glm-large-chinese' repo](https://huggingface.co/THUDM/glm-large-chinese). \r\n\r\nWhen running: \r\n```python\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('THUDM/glm-large-chinese', trust_remote_code=True)\r\n```\r\n\r\nThe model being downloaded will be the one defined in [THUDM/glm-large-chinese](https://huggingface.co/THUDM/glm-large-chinese). `trust_remote_code=True` is simply saying that it's OK for this model code to be downloaded and run from the hub. \r\n\r\nIf you wish to load a local model, then this model should be saved out to either the hub or locally and the path to its location passed to `from_pretrained` e.g.:\r\n\r\n```\r\nmodel.save_pretained('path/to/my/model') # Model with adapted methods\r\nmodel = ModelClass.from_pretrained('path/to/my/model', trust_remote_code=True)\r\n```\r\n\r\nThere's more information about using models with [custom code here](https://huggingface.co/docs/transformers/v4.27.1/en/custom_models#using-a-model-with-custom-code).", "OK, the `model.save_pretrained` indeed is a choice to custom the remote code in local folder, though it will copy these local files to a `transformers/local` dir and run . In early times I change the code in that temporary directory so cause the above doubt. " ]
1,679
1,679
1,679
NONE
null
### Feature request When I use a model with `trust_remote_code=True`, I cannot directly change the remote code because every time I load the model it fetches the code from the remote hub again. So how can I avoid that? Can I customize this code locally? Example: ``` model = AutoModelForSeq2SeqLM.from_pretrained('THUDM/glm-large-chinese', trust_remote_code=True) model.forward(...) # the code I want to change ``` ### Motivation The remote code does not always fit user needs, so users should have a way to change it. ### Your contribution If there is no other way, I can submit a PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22260/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/22260/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22259/comments
https://api.github.com/repos/huggingface/transformers/issues/22259/events
https://github.com/huggingface/transformers/issues/22259
1,631,313,243
I_kwDOCUB6oc5hO91b
22,259
Different outputs of the official LLaMA repo and transformers' implementation
{ "login": "yqy2001", "id": 55196500, "node_id": "MDQ6VXNlcjU1MTk2NTAw", "avatar_url": "https://avatars.githubusercontent.com/u/55196500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yqy2001", "html_url": "https://github.com/yqy2001", "followers_url": "https://api.github.com/users/yqy2001/followers", "following_url": "https://api.github.com/users/yqy2001/following{/other_user}", "gists_url": "https://api.github.com/users/yqy2001/gists{/gist_id}", "starred_url": "https://api.github.com/users/yqy2001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yqy2001/subscriptions", "organizations_url": "https://api.github.com/users/yqy2001/orgs", "repos_url": "https://api.github.com/users/yqy2001/repos", "events_url": "https://api.github.com/users/yqy2001/events{/privacy}", "received_events_url": "https://api.github.com/users/yqy2001/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I met the same problem.", "cc @gante ", "Hey @yqy2001 @TempleX98 👋 \r\n\r\nUnless the code is exactly the same, it is impossible to compare `sample` implementations based on a few examples. Small things like the order of operations will produce very small logits differences and, unless the logits are exactly the same, the sampling step will pick different tokens for the same seed. \r\n\r\nThe best way to compare implementations is with greedy approaches with long outputs (especially if the comparison is done at a logit level!). In `transformers`, that is done by passing `do_sample=False`, `return_dict=True`, and `output_scores=True`. \r\n\r\nEDIT: please note that since this issue was originally opened, a [few llama-specific fixes and performance improvements were merged](https://github.com/huggingface/transformers/commits/main/src/transformers/models/llama) :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "It looks like this behavior depends on what model you are using, try to change to chat model like Llama-2-7b-chat-hf will solve this issue. " ]
1,679
1,691
1,684
CONTRIBUTOR
null
### System Info - `transformers` version: main - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @zphang @ArthurZucker @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The official LLaMA repo generates a coherent and meaningful response to the below prompt, while the Huggingface LLaMA generates multiple responses that are not relevant to the prompt. ## Official LLaMA Outputs ```shell git clone git@github.com:facebookresearch/llama.git cd llama pip install -r requirements.txt pip install -e . ``` Please first substitute the [prompt](https://github.com/facebookresearch/llama/blob/57b0eb62de0636e75af471e49e2f1862d908d9d8/example.py#L82) as: ```python prompts = ["I believe the meaning of life is"] ``` Run for inference with the 13B model: ```sh torchrun --nproc_per_node 2 example.py --ckpt_dir $TARGET_FOLDER/13B --tokenizer_path $TARGET_FOLDER/tokenizer.model ``` The output is: ``` I believe the meaning of life is to love others, love ourselves, and love our God. The way we do that is by showing compassion and acceptance. We have to love the people around us even when they are struggling. We have to love ourselves even when we are failing. We have to love God even when we are not certain. This is the meaning of life.
``` ## Huggingface LLaMA The code to generate output with transformers' llama: ```py import transformers import torch torch.manual_seed(1) tokenizer = transformers.LlamaTokenizer.from_pretrained("$YOUR_CONVERTED_DIR/tokenizer/") model = transformers.LlamaForCausalLM.from_pretrained("$YOUR_CONVERTED_DIR/llama-13b/").half() model.cuda() prompt = "I believe the meaning of life is" inputs = tokenizer(prompt, return_tensors="pt") generated_ids = model.generate(inputs.input_ids.cuda(), max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.8) print(tokenizer.batch_decode(generated_ids)[0]) ``` The outputs seem to be more illogical (many sentences have nothing to do with `the meaning of life`): ``` I believe the meaning of life is to give life meaning I believe that we are here to be of service to others I believe that the purpose of life is to grow in wisdom and love I believe that life is not all about me I believe that what I give I receive and what I receive I give I believe that the journey is more important than the destination I believe that we have a gift to share and that that gift is not for ourselves I believe that I am the right person in the right place at the right time I believe that the only thing we have to be concerned about is the present moment I believe that God is in everyone and everything I believe that we are all connected I believe that we are all equal and unique I believe that we are all responsible for the world we live in I believe that we are all perfect and whole I believe that we are all worthy of love I believe that we are all on a journey of self-discovery I believe that we are all meant to do what we do I believe that we are all perfect in our own way I believe that we are all loved I believe that we are all loved by God I believe that there is only one God I believe that God ``` ## Analysis: In LLaMA's official repo, they set the [`temperature` to 0.8 and `top_p` to 0.95](https://github.com/facebookresearch/llama/blob/57b0eb62de0636e75af471e49e2f1862d908d9d8/example.py#L69) for generation. I have aligned this in the transformers' generation. One difference is that LLaMA's official repo uses FSDP and my transformers' code has no distributed set-up. But I think this will not affect the inference performance (not certain). ### Expected behavior A script to reproduce the official LLaMA repo's results is expected, which will be a great sanity check about the huggingface llama implementation. Thanks!
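For reference, a minimal sketch of a deterministic comparison, reusing the variable names from the script above (as noted in the comments, sampled outputs cannot be compared across implementations even with the same seed):

```python
# Greedy decoding removes the sampling step, so differences are attributable
# to the logits themselves and the output is reproducible across runs.
generated_ids = model.generate(
    inputs.input_ids.cuda(),
    max_new_tokens=256,
    do_sample=False,
)
print(tokenizer.batch_decode(generated_ids)[0])
```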
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22259/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22259/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22258/comments
https://api.github.com/repos/huggingface/transformers/issues/22258/events
https://github.com/huggingface/transformers/issues/22258
1,631,148,060
I_kwDOCUB6oc5hOVgc
22,258
HuggingFace Transformers Trainer._maybe_log_save_evaluate IndexError: invalid index to scalar variable
{ "login": "JeffMII", "id": 80857218, "node_id": "MDQ6VXNlcjgwODU3MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/80857218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JeffMII", "html_url": "https://github.com/JeffMII", "followers_url": "https://api.github.com/users/JeffMII/followers", "following_url": "https://api.github.com/users/JeffMII/following{/other_user}", "gists_url": "https://api.github.com/users/JeffMII/gists{/gist_id}", "starred_url": "https://api.github.com/users/JeffMII/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JeffMII/subscriptions", "organizations_url": "https://api.github.com/users/JeffMII/orgs", "repos_url": "https://api.github.com/users/JeffMII/repos", "events_url": "https://api.github.com/users/JeffMII/events{/privacy}", "received_events_url": "https://api.github.com/users/JeffMII/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I finally got an answer to my issue on StackOverflow. Here is the [link](https://stackoverflow.com/questions/75780103/huggingface-transformers-trainer-maybe-log-save-evaluate-indexerror-invalid-in/75792634#75792634) to the answer:\r\n\r\n> Your issue comes from your compute_metrics function as you're using a QA metric with a text-generation model.\r\n> \r\n> To fix it, replace metric = load(\"squad\") with a text-generation metric, for example bleu: metric = load(\"bleu\"). And adapt your compute_metrics function in consequence:\r\n> \r\n> ```py\r\n> def compute_metrics(eval_pred):\r\n> predictions, references = eval_pred\r\n> predictions = tokenizer.batch_decode(predictions)\r\n> references = tokenizer.batch_decode(references)\r\n> references = [[ref] for ref in references]\r\n> return metric.compute(predictions=predictions, references=references)\r\n> ```" ]
1,679
1,679
1,679
NONE
null
@sshleifer So, I'm working on fine tuning a BART model for question generation, and it seems to be going through training okay. Then all of a sudden, it stops at the end of the first validation with an `IndexError` which you can see below. The problem is occurring in the `Trainer._maybe_log_save_evaluate` method that is being called. ![IndexError: invalid index to scalar variable](https://user-images.githubusercontent.com/80857218/226214542-72cce7dd-6f09-4eaf-89e5-3d585fb07790.png) Here is my code for setting up the model, tokenizer, dataset, etc.: ```py from datasets import load_dataset from evaluate import load from accelerate import Accelerator from transformers import BartForConditionalGeneration, BartConfig, BartTokenizer from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer dataset = load_dataset("squad") metric = load("squad") accelerator = Accelerator() def model_init(): config = BartConfig() return accelerator.prepare(BartForConditionalGeneration(config).from_pretrained("facebook/bart-base").cuda()) tokenizer = accelerator.prepare(BartTokenizer.from_pretrained("facebook/bart-base")) def preprocess_function(data): inputs = tokenizer(data['context'], add_special_tokens=True, max_length=256, padding="max_length", truncation=True) targets = tokenizer(data['question'], add_special_tokens=True, max_length=32, padding="max_length", truncation=True) return {'input_ids': inputs['input_ids'], 'attention_mask': inputs['attention_mask'], 'labels': targets['input_ids']} dataset = dataset.map(preprocess_function, batched=True).shuffle(seed=777) training_args = Seq2SeqTrainingArguments( output_dir="./results", evaluation_strategy="steps", eval_steps=500, save_steps=50000, learning_rate=2e-5, per_device_train_batch_size=4, per_device_eval_batch_size=4, num_train_epochs=2, weight_decay=0.01, predict_with_generate=True, ) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) trainer = Seq2SeqTrainer( args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["validation"], tokenizer=tokenizer, model_init=model_init, compute_metrics=compute_metrics, ) trainer.train() ``` I can't seem to figure out why this is happening and nothing I've found online has helped.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22258/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22257/comments
https://api.github.com/repos/huggingface/transformers/issues/22257/events
https://github.com/huggingface/transformers/issues/22257
1,631,053,719
I_kwDOCUB6oc5hN-eX
22,257
Ernie-M for pretraining multilingual models
{ "login": "KnutJaegersberg", "id": 17965169, "node_id": "MDQ6VXNlcjE3OTY1MTY5", "avatar_url": "https://avatars.githubusercontent.com/u/17965169?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KnutJaegersberg", "html_url": "https://github.com/KnutJaegersberg", "followers_url": "https://api.github.com/users/KnutJaegersberg/followers", "following_url": "https://api.github.com/users/KnutJaegersberg/following{/other_user}", "gists_url": "https://api.github.com/users/KnutJaegersberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/KnutJaegersberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KnutJaegersberg/subscriptions", "organizations_url": "https://api.github.com/users/KnutJaegersberg/orgs", "repos_url": "https://api.github.com/users/KnutJaegersberg/repos", "events_url": "https://api.github.com/users/KnutJaegersberg/events{/privacy}", "received_events_url": "https://api.github.com/users/KnutJaegersberg/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi @KnutJaegersberg, thanks for making this suggestion! \r\n\r\nWould you like to try and open a PR to add the model? We have guidance written on adding [models here](https://huggingface.co/docs/transformers/v4.27.2/en/add_new_model). As the modeling file already exists - adding this component is even easier than a whole new model. For example, see [this PR](https://github.com/huggingface/transformers/pull/21754) for adding `WhisperForAudioClassification`.", "Currently in the middle of something, will try to look at it later! ", "I'd love to try this out! ", "Seems like this will require more than simply copy-pasting the BertForPretraining code, but actually implementing cross-attention Masked Language Modeling and Back-translation Masked Language Modeling. " ]
1,679
1,681
null
NONE
null
### Feature request Two things that might help in that regard: - To train TSDAE, one needs support via a class like ErnieMForPreTraining, just as for Ernie https://huggingface.co/docs/transformers/model_doc/ernie#transformers.ErnieForPreTraining - To train cross-encoders with contrastive loss, a bit like SimCSE, one needs standard support for getting the 'attention_mask' out of the tokenizer sbert uses. Sbert just expects those. Tried to hack it into sbert, but failed. ### Motivation Suspect getting Ernie-M-large for pretraining multilingual sentence embeddings will yield close to sota results. According to mSimCSE, we can get top multilingual embeddings just by training on their 300k dataset of English pairs alone (worked better than cross-lingual training). With a stronger base model (they used xlm-roberta), sota embeddings might just lie on the streets. https://github.com/yaushian/mSimCSE ### Your contribution Can't do it alone, please help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22257/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22256/comments
https://api.github.com/repos/huggingface/transformers/issues/22256/events
https://github.com/huggingface/transformers/pull/22256
1,630,947,351
PR_kwDOCUB6oc5MY_-2
22,256
[Docs] fix typos in some tokenizer docs
{ "login": "yesinkim", "id": 83568823, "node_id": "MDQ6VXNlcjgzNTY4ODIz", "avatar_url": "https://avatars.githubusercontent.com/u/83568823?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yesinkim", "html_url": "https://github.com/yesinkim", "followers_url": "https://api.github.com/users/yesinkim/followers", "following_url": "https://api.github.com/users/yesinkim/following{/other_user}", "gists_url": "https://api.github.com/users/yesinkim/gists{/gist_id}", "starred_url": "https://api.github.com/users/yesinkim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yesinkim/subscriptions", "organizations_url": "https://api.github.com/users/yesinkim/orgs", "repos_url": "https://api.github.com/users/yesinkim/repos", "events_url": "https://api.github.com/users/yesinkim/events{/privacy}", "received_events_url": "https://api.github.com/users/yesinkim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Note: the difference in documented output and true output was mentioned in a [previous LongFormer PR](https://github.com/huggingface/transformers/pull/19346/files#r988044763). " ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fix the typos in tokenizer examples. It would be 4 tokens. Thx ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22256/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22256", "html_url": "https://github.com/huggingface/transformers/pull/22256", "diff_url": "https://github.com/huggingface/transformers/pull/22256.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22256.patch", "merged_at": 1679314652000 }
https://api.github.com/repos/huggingface/transformers/issues/22255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22255/comments
https://api.github.com/repos/huggingface/transformers/issues/22255/events
https://github.com/huggingface/transformers/issues/22255
1,630,928,895
I_kwDOCUB6oc5hNf__
22,255
Re
{ "login": "aanonymousbeing5", "id": 126067694, "node_id": "U_kgDOB4Oj7g", "avatar_url": "https://avatars.githubusercontent.com/u/126067694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aanonymousbeing5", "html_url": "https://github.com/aanonymousbeing5", "followers_url": "https://api.github.com/users/aanonymousbeing5/followers", "following_url": "https://api.github.com/users/aanonymousbeing5/following{/other_user}", "gists_url": "https://api.github.com/users/aanonymousbeing5/gists{/gist_id}", "starred_url": "https://api.github.com/users/aanonymousbeing5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aanonymousbeing5/subscriptions", "organizations_url": "https://api.github.com/users/aanonymousbeing5/orgs", "repos_url": "https://api.github.com/users/aanonymousbeing5/repos", "events_url": "https://api.github.com/users/aanonymousbeing5/events{/privacy}", "received_events_url": "https://api.github.com/users/aanonymousbeing5/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,679
1,679
1,679
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22255/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22254/comments
https://api.github.com/repos/huggingface/transformers/issues/22254/events
https://github.com/huggingface/transformers/issues/22254
1,630,908,033
I_kwDOCUB6oc5hNa6B
22,254
Trying to save a model with TFT5ForConditionalGeneration
{ "login": "erlichsefisalesforce", "id": 59247215, "node_id": "MDQ6VXNlcjU5MjQ3MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/59247215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erlichsefisalesforce", "html_url": "https://github.com/erlichsefisalesforce", "followers_url": "https://api.github.com/users/erlichsefisalesforce/followers", "following_url": "https://api.github.com/users/erlichsefisalesforce/following{/other_user}", "gists_url": "https://api.github.com/users/erlichsefisalesforce/gists{/gist_id}", "starred_url": "https://api.github.com/users/erlichsefisalesforce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erlichsefisalesforce/subscriptions", "organizations_url": "https://api.github.com/users/erlichsefisalesforce/orgs", "repos_url": "https://api.github.com/users/erlichsefisalesforce/repos", "events_url": "https://api.github.com/users/erlichsefisalesforce/events{/privacy}", "received_events_url": "https://api.github.com/users/erlichsefisalesforce/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "cc @gante ", "Hey @erlichsefisalesforce 👋 looking at the stack trace, we see that `inputs`'s first dimension, the batch size, is unknown (shape = `[None, 1]`). It is possible that our generate function is not fully serializable with a dynamic batch size, and may need some tweaks.\r\n\r\nI'm not sure when I'll be able to fix this problem in particular (it may be complex to solve). However, meanwhile, can you try exporting with a fixed batch size? In other words, define the input as `inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name=\"inputs\", batch_size=<some integer>)`", "Hi @gante, I'm still getting \r\n\r\n```\r\n File \"/venv/lib/python3.9/site-packages/transformers/generation/tf_utils.py\", line 767, in generate\r\n return self.greedy_search(\r\n File \"venv/lib/python3.9/site-packages/transformers/generation/tf_utils.py\", line 1452, in greedy_search\r\n if greedy_search_cond_fn(generated, finished_sequences, cur_len, model_kwargs):\r\ntensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Exception encountered when calling layer 'summarizer' (type CompleteSentenceTransformer).\r\n\r\nUsing a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.\r\n\r\nCall arguments received by layer 'summarizer' (type CompleteSentenceTransformer):\r\n • args=('tf.Tensor(shape=(3, 1), dtype=string)',)\r\n • kwargs={'training': 'False'}\r\n```", "Hey @erlichsefisalesforce -- in that case, I will need a reproducible example to debug. The example you shared above contains references to local files :)", "@gante, the folder contains [flan-t5-large](https://huggingface.co/google/flan-t5-large), `save_dir` is can be populated with any path to your local machine, and I think, that it.", "It seems like the root issue persists -- `text.SentencepieceTokenizer().tokenize()` returns a tensor with an unknown batch size, regardless of the input batch size being defined, causing the same problem.\r\n\r\nThe fix should be straightforward, so I will have a go at it.\r\n\r\n_____________________________________________________________________\r\nScript to reproduce it:\r\n```py\r\n# run these commands in advance:\r\n# mkdir /tmp/test\r\n# cd /tmp/test\r\n# git clone https://huggingface.co/google/flan-t5-small\r\n\r\nfrom transformers import TFT5ForConditionalGeneration\r\nimport tensorflow as tf\r\nimport tensorflow_text as text\r\nfrom tensorflow.python.platform import gfile\r\n\r\nsave_dir = '/tmp/test/flan-t5-small'\r\nclass CompleteSentenceTransformer(tf.keras.layers.Layer):\r\n\r\n def __init__(self):\r\n super().__init__()\r\n self._pad_token = 1\r\n self.tokenizer = text.SentencepieceTokenizer(model=gfile.GFile('/tmp/test/flan-t5-small/spiece.model', 'rb').read())\r\n self.model = TFT5ForConditionalGeneration.from_pretrained('/tmp/test/flan-t5-small', from_pt=True)\r\n\r\n def call(self, inputs, *args, **kwargs):\r\n tokens = self.tokenizer.tokenize(inputs)\r\n breakpoint()\r\n input_ids, attention_mask = text.pad_model_inputs(tokens, max_seq_length=512, pad_value=self.model.config.pad_token_id)\r\n outputs = self.model.generate(input_ids=input_ids, attention_mask=attention_mask)\r\n return self.tokenizer.detokenize(outputs)\r\n\r\n\r\ncomplete_model = CompleteSentenceTransformer()\r\ninputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name=\"inputs\", batch_size=4)\r\noutputs = complete_model(inputs)\r\nkeras_model = tf.keras.Model(inputs, 
outputs)\r\nkeras_model.save(save_dir)\r\n```", "@erlichsefisalesforce after #22310 gets merged, you should be able to run it on your end :) (you will need to install `transformers` from `main`)", "Thank you @gante! will close the issue once I validate the solution on my end. :) ", "The solution was validated! " ]
1,679
1,679
1,679
NONE
null
### System Info transformers-cli env ouput: - `transformers` version: 4.28.0.dev0 - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.9.6 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no -------------- I've compiled a TensorFlow graph that uses a pre-trained [flan-t5-large](https://huggingface.co/google/flan-t5-large), which means one of the layers uses `TFT5ForConditionalGeneration` but there are more layers before and after and my goal is the export the graph for TF serving framework. When I'm trying to `.save` the model I get the following error from Tensorflow: ``` Traceback (most recent call last): Traceback (most recent call last): File "/Users/serlich/Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/223.8214.51/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec exec(exp, global_vars, local_vars) File "<string>", line 1, in <module> File "venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 955, in __bool__ self._disallow_bool_casting() File "venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 554, in _disallow_bool_casting self._disallow_when_autograph_enabled( File "venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 537, in _disallow_when_autograph_enabled raise errors.OperatorNotAllowedInGraphError( tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. Traceback (most recent call last): File "venv/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 124, in __exit__ next(self.gen) File "/Users/serlich/Documents/case-wrap-up/t5_tensorflow_code.py", line 58, in call outputs = self.model.generate(input_ids=input_ids, attention_mask=attention_mask) File "venv/lib/python3.9/site-packages/transformers/generation/tf_utils.py", line 925, in generate return self.greedy_search( File "venv/lib/python3.9/site-packages/transformers/generation/tf_utils.py", line 1728, in greedy_search if greedy_search_cond_fn(generated, finished_sequences, cur_len, model_kwargs): tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Exception encountered when calling layer 'complete_sentence_transformer' (type CompleteSentenceTransformer). Using a symbolic `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. 
Call arguments received by layer 'complete_sentence_transformer' (type CompleteSentenceTransformer): • args=('tf.Tensor(shape=(None, 1), dtype=string)',) • kwargs={'training': 'False'} Process finished with exit code 1 ``` The error originates from the following [if statement](https://github.com/huggingface/transformers/blob/60d51ef5123d949fd8c59cd4d3254e711541d278/src/transformers/generation/tf_utils.py#L1728): ``` # 1st generation step has to be run before to initialize `past_key_values` generated, finished_sequences, cur_len, model_kwargs = greedy_search_body_fn( generated, finished_sequences, cur_len, model_kwargs ) # 2-to-n generation steps can then be run in autoregressive fashion # only in case 1st generation step does NOT yield EOS token though if greedy_search_cond_fn(generated, finished_sequences, cur_len, model_kwargs): maximum_iterations = max_length - cur_len generated, _, cur_len, _ = tf.while_loop( greedy_search_cond_fn, greedy_search_body_fn, (generated, finished_sequences, cur_len, model_kwargs), maximum_iterations=maximum_iterations, ) ``` During saving, `finished_sequences` is a symbolic tensor, and TensorFlow prevents evaluating an if statement on a symbolic tensor. Commenting out `if greedy_search_cond_fn(generated, finished_sequences, cur_len, model_kwargs)` allows me to save the model and load it later; however, it removes the safeguard for when the model predicts EOS in the first generation step (which is very unlikely). @ArthurZucker @younesbelkada @gante ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import TFT5ForConditionalGeneration import tensorflow as tf import tensorflow_text as text from tensorflow.python.platform import gfile save_dir = '' class CompleteSentenceTransformer(tf.keras.layers.Layer): def __init__(self): super().__init__() self._pad_token = 1 self.tokenizer = text.SentencepieceTokenizer(model=gfile.GFile('test/flan-t5-large/spiece.model', 'rb').read()) self.model = TFT5ForConditionalGeneration.from_pretrained('test/flan-t5-large', from_pt=True) def call(self, inputs, *args, **kwargs): tokens = self.tokenizer.tokenize(inputs) input_ids, attention_mask = text.pad_model_inputs(tokens, max_seq_length=self._max_seq_length, pad_value=self._pad_token) outputs = self.model.generate(input_ids=input_ids, attention_mask=attention_mask) return self.tokenizer.detokenize(outputs) complete_model = CompleteSentenceTransformer() inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name="inputs") outputs = complete_model(inputs) keras_model = tf.keras.Model(inputs, outputs) keras_model.save(save_dir) ``` Python 3.9.6 tensorflow 2.11.0 tensorflow-text 2.11.0 transformers 4.28.0.dev0 (from master) ### Expected behavior save model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22254/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22253/comments
https://api.github.com/repos/huggingface/transformers/issues/22253/events
https://github.com/huggingface/transformers/pull/22253
1,630,903,505
PR_kwDOCUB6oc5MY3BL
22,253
Add `BioGPTForSequenceClassification`
{ "login": "awinml", "id": 97467100, "node_id": "U_kgDOBc863A", "avatar_url": "https://avatars.githubusercontent.com/u/97467100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awinml", "html_url": "https://github.com/awinml", "followers_url": "https://api.github.com/users/awinml/followers", "following_url": "https://api.github.com/users/awinml/following{/other_user}", "gists_url": "https://api.github.com/users/awinml/gists{/gist_id}", "starred_url": "https://api.github.com/users/awinml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awinml/subscriptions", "organizations_url": "https://api.github.com/users/awinml/orgs", "repos_url": "https://api.github.com/users/awinml/repos", "events_url": "https://api.github.com/users/awinml/events{/privacy}", "received_events_url": "https://api.github.com/users/awinml/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge @sgugger Is there a way to skip the check for specific lines when I run `make repo-consistency`.\r\n\r\nIt gives an error when I add this:\r\n`# Copied from transformers.models.opt.modeling_opt.OPTForSequenceClassification with OPT->BioGpt`.\r\nThere are some attributes like word_embed_proj_dim which do not exist for the BioGpt model.\r\nAlso it changes the case of the docstring variable, which leads to a variable not found error.\r\n\r\nShould I drop the copy attribution comment?\r\n", "If some attributes do not exist, let's just add the `# Adapted from` mention, and put the `# Copied from` only where it properly fits! ", "@younesbelkada You're right, I haven't figured out how to solve this failing test.", "@ArthurZucker Any suggestions as to how to fix this failing test?\r\n\r\nI went through #18123. The code is extremely similar, but I still don't get why the test is failing. Maybe I am missing something. I need help to fix it.\r\n\r\n\r\n```python\r\n_____________________________ BioGptModelTest.test_load_with_mismatched_shapes _____________________________\r\n\r\nself = <tests.models.biogpt.test_modeling_biogpt.BioGptModelTest testMethod=test_load_with_mismatched_shapes>\r\n\r\n def test_load_with_mismatched_shapes(self):\r\n if not self.test_mismatched_shapes:\r\n return\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n for model_class in self.all_model_classes:\r\n if model_class.__name__ not in get_values(MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES):\r\n continue\r\n \r\n with self.subTest(msg=f\"Testing {model_class}\"):\r\n with tempfile.TemporaryDirectory() as tmp_dir:\r\n model = model_class(config)\r\n model.save_pretrained(tmp_dir)\r\n \r\n # Fails when we don't set ignore_mismatched_sizes=True\r\n with self.assertRaises(RuntimeError):\r\n new_model = AutoModelForSequenceClassification.from_pretrained(tmp_dir, num_labels=42)\r\n with self.assertRaises(RuntimeError):\r\n> new_model_without_prefix = AutoModel.from_pretrained(tmp_dir, vocab_size=10)\r\nE AssertionError: RuntimeError not raised\r\n\r\ntests/test_modeling_common.py:2640: AssertionError\r\n\r\n```\r\n\r\n", "Hey! I'll try to have a look, it looks like setting the `vocab_size` does not change the shape of the model which means that it does not raise an error when it should! ", "@ArthurZucker Thanks! The `vocab_size` argument had my suspicion as well. Since we inherit from `BioGptModel`, I thought that already does the needful. I could not figure out what I was missing. Looking forward to your suggestions.", "@ArthurZucker Those changes seemed to do the trick, all the CI tests pass now. Thanks for your help!" ]
1,679
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? Add Sequence Classification support for BioGPT. Fixes #21530 Fixes #21535 This PR completes the stalled PR #21535. <!--- ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? --> ## Who can review? @ArthurZucker @younesbelkada @NielsRogge @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22253/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22253", "html_url": "https://github.com/huggingface/transformers/pull/22253", "diff_url": "https://github.com/huggingface/transformers/pull/22253.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22253.patch", "merged_at": 1682947047000 }
https://api.github.com/repos/huggingface/transformers/issues/22252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22252/comments
https://api.github.com/repos/huggingface/transformers/issues/22252/events
https://github.com/huggingface/transformers/issues/22252
1,630,881,221
I_kwDOCUB6oc5hNUXF
22,252
clip loss
{ "login": "hljjjmssyh", "id": 24326757, "node_id": "MDQ6VXNlcjI0MzI2NzU3", "avatar_url": "https://avatars.githubusercontent.com/u/24326757?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hljjjmssyh", "html_url": "https://github.com/hljjjmssyh", "followers_url": "https://api.github.com/users/hljjjmssyh/followers", "following_url": "https://api.github.com/users/hljjjmssyh/following{/other_user}", "gists_url": "https://api.github.com/users/hljjjmssyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/hljjjmssyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hljjjmssyh/subscriptions", "organizations_url": "https://api.github.com/users/hljjjmssyh/orgs", "repos_url": "https://api.github.com/users/hljjjmssyh/repos", "events_url": "https://api.github.com/users/hljjjmssyh/events{/privacy}", "received_events_url": "https://api.github.com/users/hljjjmssyh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @hljjjmssyh The loss computation management is in the `Trainer` class, see\r\n\r\nhttps://github.com/huggingface/transformers/blob/da005253b82395b6097623bcee44b819bfe72b87/src/transformers/trainer.py#L2649-L2650", "That's only for models wrapped in DataParallel @ydshieh \r\n\r\n@hljjjmssyh We don't include code requiring torch.distributed as it then fails when the script is used on one GPU. However we could use the Accelerate library to have something that works in both situation. If you want to explore this and open a PR, I'll be happy to review!", "I think I’m missing something, it looks like this could be done for CLIP today with accelerate’s implementation in `examples/pytorch/image-classification/run_image_classification_no_trainer.py` running it with the appropriate args? Or maybe accelerate would nonetheless be a welcome addition somewhere else for the above mentioned purpose? \r\n\r\nIt also looks [here](https://github.com/huggingface/transformers/blob/5990743fddb4780b15b8af2ed7ab55145ab40455/src/transformers/trainer.py#L1386-L1388) like the model would in fact be wrapped in DataParallel when training on multiple gpus.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
https://github.com/mlfoundations/open_clip/blob/37b729bc69068daa7e860fb7dbcf1ef1d03a4185/src/open_clip/loss.py#L49 In the implementation of open_clip, logits distributed across multiple gpus are gathered for calculating loss. However, I cannot find the code related to this feature in this repository. I think more negative samples are very important for contrastive learning. @younesbelkada @ydshieh
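For reference, a minimal sketch of the gather step the linked open_clip code performs (it assumes `torch.distributed` is already initialized; this is not part of the transformers CLIP implementation):

```python
import torch
import torch.distributed as dist

def gather_features(features: torch.Tensor) -> torch.Tensor:
    # Collect features from every rank so the contrastive loss sees
    # negatives from the whole global batch, not just the local one.
    world_size = dist.get_world_size()
    gathered = [torch.zeros_like(features) for _ in range(world_size)]
    dist.all_gather(gathered, features)  # no gradient flows into these copies
    gathered[dist.get_rank()] = features  # keep the local tensor's grad path
    return torch.cat(gathered, dim=0)
```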
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22252/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22251/comments
https://api.github.com/repos/huggingface/transformers/issues/22251/events
https://github.com/huggingface/transformers/issues/22251
1,630,670,976
I_kwDOCUB6oc5hMhCA
22,251
t5 mlm train example, label generation
{ "login": "GabPrato", "id": 25964820, "node_id": "MDQ6VXNlcjI1OTY0ODIw", "avatar_url": "https://avatars.githubusercontent.com/u/25964820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GabPrato", "html_url": "https://github.com/GabPrato", "followers_url": "https://api.github.com/users/GabPrato/followers", "following_url": "https://api.github.com/users/GabPrato/following{/other_user}", "gists_url": "https://api.github.com/users/GabPrato/gists{/gist_id}", "starred_url": "https://api.github.com/users/GabPrato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GabPrato/subscriptions", "organizations_url": "https://api.github.com/users/GabPrato/orgs", "repos_url": "https://api.github.com/users/GabPrato/repos", "events_url": "https://api.github.com/users/GabPrato/events{/privacy}", "received_events_url": "https://api.github.com/users/GabPrato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Issue was on my side, `~mask[None, :].astype(np.int8)` should be `(~mask[None, :]).astype(np.int8)`.\r\n\r\nBut the resulting labels are still missing the extra id at the end, `batch[\"labels\"]` will be equal to `\"<extra_id_0> .\"` instead of `\"<extra_id_0> . <extra_id_1>\"` and there are also no checks for if the number of sentinel tokens used is greater than the number of available sentinel tokens. Default is max 100 sentinel tokens if using pretrained T5 models." ]
1,679
1,679
1,679
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.12.1 ### Who can help? @sanchit-gandhi @sgugger @stevhliu @MKhalusova ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction In the [T5 doc](https://huggingface.co/docs/transformers/v4.27.1/en/model_doc/t5#training), there is the following example describing the input_ids and labels format to train a T5 model: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids # the forward function automatically creates the correct decoder_input_ids loss = model(input_ids=input_ids, labels=labels).loss loss.item() ``` And right after this piece of code, the following: > If you’re interested in pre-training T5 on a new corpus, check out the [run_t5_mlm_flax.py](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling) script in the Examples directory. So I looked at the example code on line [330](https://github.com/huggingface/transformers/blob/60d51ef5123d949fd8c59cd4d3254e711541d278/examples/flax/language-modeling/run_t5_mlm_flax.py#L330) of that file, but the behavior is different than what is written in the doc. Indeed, the `batch["labels"]` has a different format. If for example the input string is `"Hello world."`, `batch["input_ids"]` is set to `"Hello world<extra_id_0>"` and `batch["labels"]` is set to `"<extra_id_-2><extra_id_-3><extra_id_-6>"`. According to the doc, shouldn't `batch["labels"]` be `"<extra_id_0> . <extra_id_1>"`? To reproduce, you can simply reuse the following 3 functions: `random_spans_noise_mask`, `create_sentinel_ids` and `filter_input_ids` that are right below the `__call__` function on line 330: ```python import numpy as np from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained('t5-small') tokenized_sample = tokenizer("Hello world.", add_special_tokens=False, return_tensors='pt').input_ids # we don't want </s> mask = random_spans_noise_mask(tokenized_sample.shape[1]) input_ids_sentinel = create_sentinel_ids(mask[None, :].astype(np.int8)) labels_sentinel = create_sentinel_ids(~mask[None, :].astype(np.int8)) input_ids = filter_input_ids(tokenized_sample, input_ids_sentinel) labels = filter_input_ids(tokenized_sample, labels_sentinel) print(tokenizer.batch_decode(input_ids, skip_special_tokens=False)[0]) print(tokenizer.batch_decode(labels, skip_special_tokens=False)[0]) ``` This code of course follows the same structure as the example on line 330. ### Expected behavior Just wondering if the described behavior in the `run_t5_mlm_flax.py` script is intended or not, since the doc describes a different behavior. It is confusing as the doc refers to this example, but the behaviors are different. Thanks.
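For reference, the fix from the follow-up comment spelled out: `~` binds more loosely than `.astype`, so the original line bitwise-negates int8 values (turning 0/1 into -1/-2, hence the negative sentinel ids) instead of negating the boolean mask.

```python
# broken: casts bool -> int8 first, then ~ maps 0/1 to -1/-2
labels_sentinel = create_sentinel_ids(~mask[None, :].astype(np.int8))
# fixed: negate the boolean mask first, then cast
labels_sentinel = create_sentinel_ids((~mask[None, :]).astype(np.int8))
```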
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22251/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22250/comments
https://api.github.com/repos/huggingface/transformers/issues/22250/events
https://github.com/huggingface/transformers/issues/22250
1,630,602,078
I_kwDOCUB6oc5hMQNe
22,250
Is there or will there be support for xformers?
{ "login": "ethansmith2000", "id": 98723285, "node_id": "U_kgDOBeJl1Q", "avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ethansmith2000", "html_url": "https://github.com/ethansmith2000", "followers_url": "https://api.github.com/users/ethansmith2000/followers", "following_url": "https://api.github.com/users/ethansmith2000/following{/other_user}", "gists_url": "https://api.github.com/users/ethansmith2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/ethansmith2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ethansmith2000/subscriptions", "organizations_url": "https://api.github.com/users/ethansmith2000/orgs", "repos_url": "https://api.github.com/users/ethansmith2000/repos", "events_url": "https://api.github.com/users/ethansmith2000/events{/privacy}", "received_events_url": "https://api.github.com/users/ethansmith2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### Feature request xformers (I couldn't find anything online or in the docs, but I suspect it's very likely I'm just missing something) ### Motivation Speed and memory improvement. ### Your contribution I am unsure, but willing to help.
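For context (an aside, not a transformers feature at the time of this issue): PyTorch 2.0 exposes a fused, memory-efficient attention kernel comparable to xformers'; a minimal sketch:

```python
import torch
import torch.nn.functional as F

# shapes: (batch, heads, seq_len, head_dim); a fused kernel is picked when available
q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)
out = F.scaled_dot_product_attention(q, k, v)
```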
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22250/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22250/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22249/comments
https://api.github.com/repos/huggingface/transformers/issues/22249/events
https://github.com/huggingface/transformers/issues/22249
1,630,557,021
I_kwDOCUB6oc5hMFNd
22,249
LLaMa tokenizer is labelled incorrectly when called.
{ "login": "GamerUntouch", "id": 34009512, "node_id": "MDQ6VXNlcjM0MDA5NTEy", "avatar_url": "https://avatars.githubusercontent.com/u/34009512?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GamerUntouch", "html_url": "https://github.com/GamerUntouch", "followers_url": "https://api.github.com/users/GamerUntouch/followers", "following_url": "https://api.github.com/users/GamerUntouch/following{/other_user}", "gists_url": "https://api.github.com/users/GamerUntouch/gists{/gist_id}", "starred_url": "https://api.github.com/users/GamerUntouch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GamerUntouch/subscriptions", "organizations_url": "https://api.github.com/users/GamerUntouch/orgs", "repos_url": "https://api.github.com/users/GamerUntouch/repos", "events_url": "https://api.github.com/users/GamerUntouch/events{/privacy}", "received_events_url": "https://api.github.com/users/GamerUntouch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,679
1,679
1,679
NONE
null
### System Info Most of the transformers functions call for "LlamaTokenizer", but the actual classes (found under transformers/models/llama) are labelled as "LLaMaTokenizer" ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction There are no steps; changing the class names fixes the loading. ### Expected behavior Either fix the class names or fix the functions that call for them.
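For reference, in released versions the canonical casing is `Llama*`; a quick check (the checkpoint path below is a placeholder):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# "path/to/converted" is a placeholder for a locally converted checkpoint
tokenizer = LlamaTokenizer.from_pretrained("path/to/converted")
model = LlamaForCausalLM.from_pretrained("path/to/converted")
```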
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22249/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22248/comments
https://api.github.com/repos/huggingface/transformers/issues/22248/events
https://github.com/huggingface/transformers/issues/22248
1,630,469,765
I_kwDOCUB6oc5hLv6F
22,248
[Trainer] Use of inspect for model.forward with torch.compile
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I realized that the proper way to use `torch.compile` with `Trainer` is through the `training_args.torch_compile` flag. Using the flag didn't cause the issue (I was manually compiling it outside the trainer). Closing, thanks!" ]
1,679
1,679
1,679
CONTRIBUTOR
null
## Issue In `trainer`, the `inspect` module is used to remove extraneous dataset columns. https://github.com/huggingface/transformers/blob/60d51ef5123d949fd8c59cd4d3254e711541d278/src/transformers/trainer.py#L722-L728 However, `torch.compile` modifies the signature of the forward function of the original model, so `inspect.signature` is unable to correctly identify input arguments. ## Possible Solution If there is a way to recover the original arguments, that would be the cleanest solution. Otherwise, we could check if the model is compiled and modify the logic of the `_set_signature_columns_if_needed` function appropriately, with perhaps added logging to the user that columns won't be dropped due to using `torch.compile`. ## System Information * Python 3.8 * PyTorch 2.0 * transformers 4.27.1 ### Who can help? @stas00 @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python >>> import inspect, torch; from transformers import AutoModel >>> model = AutoModel.from_pretrained("roberta-base") >>> inspect.signature(model.forward) <Signature (input_ids: Union[torch.Tensor, NoneType] = None, attention_mask: Union[torch.Tensor, NoneType] = None, token_type_ids: Union[torch.Tensor, NoneType] = None, position_ids: Union[torch.Tensor, NoneType] = None, head_mask: Union[torch.Tensor, NoneType] = None, inputs_embeds: Union[torch.Tensor, NoneType] = None, encoder_hidden_states: Union[torch.Tensor, NoneType] = None, encoder_attention_mask: Union[torch.Tensor, NoneType] = None, past_key_values: Union[List[torch.FloatTensor], NoneType] = None, use_cache: Union[bool, NoneType] = None, output_attentions: Union[bool, NoneType] = None, output_hidden_states: Union[bool, NoneType] = None, return_dict: Union[bool, NoneType] = None) -> Union[Tuple[torch.Tensor], transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions]> >>> opt_model = torch.compile(model) >>> inspect.signature(opt_model.forward) <Signature (*args, **kwargs)> ``` ### Expected behavior The trainer should only drop unused columns, not all of them (which is what happens when it incorrectly registers `args` and `kwargs` as input arguments).
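A minimal sketch of the "recover the original arguments" direction proposed above; `_orig_mod` is an internal attribute of PyTorch 2.0's `OptimizedModule` wrapper, so treat the fallback as an assumption rather than a stable API:

```python
import inspect

import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-base")
opt_model = torch.compile(model)

# The compiled wrapper keeps the original module around; falling back to it
# recovers the real forward signature instead of (*args, **kwargs).
unwrapped = getattr(opt_model, "_orig_mod", opt_model)
print(inspect.signature(unwrapped.forward))
```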
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22248/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22247/comments
https://api.github.com/repos/huggingface/transformers/issues/22247/events
https://github.com/huggingface/transformers/pull/22247
1,630,440,450
PR_kwDOCUB6oc5MXUWE
22,247
[Trainer] Add optional communication backends for torch.distributed when using GPU
{ "login": "heya5", "id": 27731754, "node_id": "MDQ6VXNlcjI3NzMxNzU0", "avatar_url": "https://avatars.githubusercontent.com/u/27731754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/heya5", "html_url": "https://github.com/heya5", "followers_url": "https://api.github.com/users/heya5/followers", "following_url": "https://api.github.com/users/heya5/following{/other_user}", "gists_url": "https://api.github.com/users/heya5/gists{/gist_id}", "starred_url": "https://api.github.com/users/heya5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/heya5/subscriptions", "organizations_url": "https://api.github.com/users/heya5/orgs", "repos_url": "https://api.github.com/users/heya5/repos", "events_url": "https://api.github.com/users/heya5/events{/privacy}", "received_events_url": "https://api.github.com/users/heya5/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Add optional backends for `torch.distributed` when using GPU. I want to use other communication backends according to the [pytorch_distribution_tutorial](https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends), but I found Trainer only uses nccl when `self.no_cuda` is `False`. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - trainer: @sgugger
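For context, a minimal sketch of what selecting a backend looks like at the raw `torch.distributed` level (the exact Trainer argument name introduced by this PR may differ from what is shown here):

```python
import torch.distributed as dist

# Launched via `torchrun --nproc_per_node=2 this_script.py`; torchrun sets
# RANK / WORLD_SIZE / MASTER_ADDR for us. "gloo" also works on CPU-only
# hosts, while "nccl" is the usual choice for CUDA GPUs.
dist.init_process_group(backend="gloo")
print(f"rank {dist.get_rank()} of {dist.get_world_size()} is up")
dist.destroy_process_group()
```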
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22247/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22247", "html_url": "https://github.com/huggingface/transformers/pull/22247", "diff_url": "https://github.com/huggingface/transformers/pull/22247.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22247.patch", "merged_at": 1679318255000 }
https://api.github.com/repos/huggingface/transformers/issues/22246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22246/comments
https://api.github.com/repos/huggingface/transformers/issues/22246/events
https://github.com/huggingface/transformers/issues/22246
1,630,386,203
I_kwDOCUB6oc5hLbgb
22,246
FlaxDataCollatorForT5MLM: ValueError: all input arrays must have the same shape
{ "login": "alexcpn", "id": 1157251, "node_id": "MDQ6VXNlcjExNTcyNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1157251?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexcpn", "html_url": "https://github.com/alexcpn", "followers_url": "https://api.github.com/users/alexcpn/followers", "following_url": "https://api.github.com/users/alexcpn/following{/other_user}", "gists_url": "https://api.github.com/users/alexcpn/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexcpn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexcpn/subscriptions", "organizations_url": "https://api.github.com/users/alexcpn/orgs", "repos_url": "https://api.github.com/users/alexcpn/repos", "events_url": "https://api.github.com/users/alexcpn/events{/privacy}", "received_events_url": "https://api.github.com/users/alexcpn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi @ArthurZucker maybe", "Hey @alexcpn - great job at digging into the issue and thanks for the gist! It does indeed look like the case that we're hitting this error based on how we compute the `num_noise_spans`:\r\nhttps://github.com/huggingface/transformers/blob/aec10d162f59d809ead3990ef78c51918b622f38/examples/flax/language-modeling/run_t5_mlm_flax.py#L274\r\n\r\nWould you like to open a PR to fix this so that it's robust for `mean_noise_span_length == 1`?\r\n\r\nThe code is largely ported from the original T5 pre-processing, which can be found here: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/preprocessors.py", "HI @sanchit-gandhi ; I have tried to demo the problem and the possible correction; Please find the pull request here https://github.com/huggingface/transformers/pull/22938" ]
1,679
1,683
1,683
CONTRIBUTOR
null
### System Info - transformers version: 4.27.1 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 2.0.0.dev20230202+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help? @patil-suraj @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am following the script to reproduce the above https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_t5_mlm_flax.py#L336-L346 If I give `mean_noise_span_length` > 1, for any value of noise_density, I get the output ``` prompt = "The cute dog walks in the green park" encoded = tokenizer(prompt, truncation=False, padding=False, return_tensors="pt").input_ids batch_size =1 input_length = encoded.shape[1] denoiser = FlaxDataCollatorForT5MLM(tokenizer,.35,3) mask_indices = np.asarray([denoiser.random_spans_noise_mask(input_length) for i in range(batch_size)]) labels_mask = ~mask_indices input_ids_sentinel = denoiser.create_sentinel_ids(mask_indices.astype(np.int8)) labels_sentinel = denoiser.create_sentinel_ids(labels_mask.astype(np.int8)) input_ids = denoiser.filter_input_ids(encoded, input_ids_sentinel) labels = denoiser.filter_input_ids(encoded, labels_sentinel) ``` If I give `mean_noise_span_length` == 1, for many values of noise_density, I get the error ``` Traceback (most recent call last): File "/home/alex/coding/tranformer_learn/t5_denoising.py", line 133, in <module> mask_indices = np.asarray([denoiser.random_spans_noise_mask(input_length) for i in range(batch_size)]) File "/home/alex/coding/tranformer_learn/t5_denoising.py", line 133, in <listcomp> mask_indices = np.asarray([denoiser.random_spans_noise_mask(input_length) for i in range(batch_size)]) File "/home/alex/coding/tranformer_learn/t5_denoising.py", line 94, in random_spans_noise_mask np.stack([nonnoise_span_lengths, noise_span_lengths], axis=1), [num_noise_spans * 2] File "<__array_function__ internals>", line 200, in stack File "/home/alex/.local/lib/python3.10/site-packages/numpy/core/shape_base.py", line 464, in stack raise ValueError('all input arrays must have the same shape') ValueError: all input arrays must have the same shape ``` Basically, the two arrays have different lengths in the numpy stack ``` interleaved_span_lengths = np.reshape( np.stack([nonnoise_span_lengths, noise_span_lengths], axis=1), [num_noise_spans * 2] ) ``` From what I could make out, this happens when `num_noise_spans` == `num_noise_tokens` when `mean_noise_span_length == 1` ``` num_noise_spans = int(np.round(num_noise_tokens / self.mean_noise_span_length)) ``` Code that can be run: https://gist.github.com/alexcpn/b9bb2b0f01833d1bb862502faf99bab8 ### Expected behavior There should not be an exception
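One possible guard, sketched from the bookkeeping quoted above; this is an assumption about a fix, not necessarily what was merged upstream:

```python
import numpy as np

def safe_span_counts(length, noise_density, mean_noise_span_length):
    # Hypothetical helper mirroring random_spans_noise_mask's arithmetic.
    num_noise_tokens = int(np.round(length * noise_density))
    num_noise_tokens = min(max(num_noise_tokens, 1), length - 1)
    num_nonnoise_tokens = length - num_noise_tokens
    num_noise_spans = int(np.round(num_noise_tokens / mean_noise_span_length))
    # A segmentation into n segments needs at least n items on *both* sides,
    # so cap the span count by both token counts before the two per-span
    # length arrays are stacked together.
    num_noise_spans = max(min(num_noise_spans, num_noise_tokens, num_nonnoise_tokens), 1)
    return num_noise_tokens, num_nonnoise_tokens, num_noise_spans

print(safe_span_counts(length=10, noise_density=0.5, mean_noise_span_length=1))
```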
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22246/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22245/comments
https://api.github.com/repos/huggingface/transformers/issues/22245/events
https://github.com/huggingface/transformers/issues/22245
1,630,353,807
I_kwDOCUB6oc5hLTmP
22,245
ImportError: cannot import name 'AlignModel' from 'transformers'
{ "login": "swjtu-jason", "id": 127167242, "node_id": "U_kgDOB5RrCg", "avatar_url": "https://avatars.githubusercontent.com/u/127167242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/swjtu-jason", "html_url": "https://github.com/swjtu-jason", "followers_url": "https://api.github.com/users/swjtu-jason/followers", "following_url": "https://api.github.com/users/swjtu-jason/following{/other_user}", "gists_url": "https://api.github.com/users/swjtu-jason/gists{/gist_id}", "starred_url": "https://api.github.com/users/swjtu-jason/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/swjtu-jason/subscriptions", "organizations_url": "https://api.github.com/users/swjtu-jason/orgs", "repos_url": "https://api.github.com/users/swjtu-jason/repos", "events_url": "https://api.github.com/users/swjtu-jason/events{/privacy}", "received_events_url": "https://api.github.com/users/swjtu-jason/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "ALIGN was only added in v4.27 of Transformers, so you'll need to do `pip install --upgrade transformers` to upgrade to the latest version.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### System Info ImportError: cannot import name 'AlignModel' from 'transformers' transformers.__version__ = 4.22.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ImportError: cannot import name 'AlignModel' from 'transformers' ### Expected behavior I want to import AlignModel from transformers 4.22.1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22245/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22244/comments
https://api.github.com/repos/huggingface/transformers/issues/22244/events
https://github.com/huggingface/transformers/issues/22244
1,630,295,804
I_kwDOCUB6oc5hLFb8
22,244
input_ids and labels do not match while using FlaxDataCollatorForT5MLM methods
{ "login": "alexcpn", "id": 1157251, "node_id": "MDQ6VXNlcjExNTcyNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1157251?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexcpn", "html_url": "https://github.com/alexcpn", "followers_url": "https://api.github.com/users/alexcpn/followers", "following_url": "https://api.github.com/users/alexcpn/following{/other_user}", "gists_url": "https://api.github.com/users/alexcpn/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexcpn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexcpn/subscriptions", "organizations_url": "https://api.github.com/users/alexcpn/orgs", "repos_url": "https://api.github.com/users/alexcpn/repos", "events_url": "https://api.github.com/users/alexcpn/events{/privacy}", "received_events_url": "https://api.github.com/users/alexcpn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For t5-training `input_ids` and `labels` need not match, unlike in gpt2 \r\nI was thiking that the denoised training will help it to memorise the text and I guess it kind of does\r\n\r\nFrom https://gist.github.com/alexcpn/e33a8b44e9774653d7492fb494fb1009\r\n\r\n```\r\nAfter Training:'The cute dog walks in the'-->'cute dog walks in the cute cute dog'\r\n```\r\nBut the idea in t5 model seems to be to just train it with a specific target (like translation with a prefix) ??" ]
1,679
1,679
1,679
CONTRIBUTOR
null
### System Info - `transformers` version: 4.27.1 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 2.0.0.dev20230202+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help? @patil-suraj @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I am following the documentation at https://huggingface.co/docs/transformers/main/model_doc/t5#training for unsupervised denoising training with my dataset ``` prompt = "The <extra_id_0> walks in <extra_id_1> park" encoded_prompt = tokenizer(prompt, truncation=False, padding=False, return_tensors="pt").input_ids print(f"encoded_prompt ={encoded_prompt}") labels ="<extra_id_0> cute dog <extra_id_1> the <extra_id_2>" encoded_labels = tokenizer(labels, truncation=False, padding=False, return_tensors="pt").input_ids print(f"encoded_labels ={encoded_labels}") print(f"{encoded_prompt.shape} ={encoded_labels.shape}") ``` Output ``` encoded_prompt =tensor([[ 37, 32099, 10681, 16, 32098, 2447, 1]]) encoded_labels =tensor([[32099, 5295, 1782, 32098, 8, 32097, 1]]) torch.Size([1, 7]) =torch.Size([1, 7]) ``` I am following the script to reproduce the above https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_t5_mlm_flax.py#L336-L346 ``` prompt = "The cute dog walks in the green park" encoded = tokenizer(prompt, truncation=False, padding=False, return_tensors="pt").input_ids batch_size =1 input_length = encoded.shape[1] denoiser = FlaxDataCollatorForT5MLM(tokenizer,.35,3) mask_indices = np.asarray([denoiser.random_spans_noise_mask(input_length) for i in range(batch_size)]) labels_mask = ~mask_indices input_ids_sentinel = denoiser.create_sentinel_ids(mask_indices.astype(np.int8)) labels_sentinel = denoiser.create_sentinel_ids(labels_mask.astype(np.int8)) input_ids = denoiser.filter_input_ids(encoded, input_ids_sentinel) labels = denoiser.filter_input_ids(encoded, labels_sentinel) print(f"input_ids decoded = {tokenizer.decode(*input_ids,skip_special_tokens=False)}") print(f"labels decoded = {tokenizer.decode(*labels,skip_special_tokens=False)}") print(f"input_ids.shape {input_ids.shape} should be equal to labels.shape {labels.shape}") ``` This gives the denoised output properly, but the labels size '(1,5)' does not match the input size '(1,8)' ``` input_ids decoded = The cute dog walks in the<extra_id_0></s> labels decoded = <extra_id_0> green park</s></s> input_ids.shape (1, 8) should be equal to labels.shape (1, 5) ``` Should I pad the labels with <extra-ids> to match the size of the input_ids? If not, with what should I pad, as the `t5-base` or the transformer model needs the input_ids to be the same shape as the labels (targets) for training? Code that can be run: https://gist.github.com/alexcpn/b9bb2b0f01833d1bb862502faf99bab8 ### Expected behavior The input_ids and labels should be the same shape
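For what it's worth, a minimal sketch (PyTorch here for brevity) of the point confirmed in the comments: for seq2seq training the encoder inputs and decoder labels are independent sequences, so their lengths need not match:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer(
    "The cute dog walks in the<extra_id_0>", return_tensors="pt"
).input_ids
labels = tokenizer("<extra_id_0> green park", return_tensors="pt").input_ids

# No padding of labels to the input length is needed: the decoder attends to
# the encoder states via cross-attention, and the loss is computed against
# the label sequence alone.
outputs = model(input_ids=input_ids, labels=labels)
print(input_ids.shape, labels.shape, float(outputs.loss))
```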
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22244/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22243/comments
https://api.github.com/repos/huggingface/transformers/issues/22243/events
https://github.com/huggingface/transformers/pull/22243
1,630,199,617
PR_kwDOCUB6oc5MWjZu
22,243
Italian translation perf_infer_cpu
{ "login": "nickprock", "id": 11136646, "node_id": "MDQ6VXNlcjExMTM2NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11136646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickprock", "html_url": "https://github.com/nickprock", "followers_url": "https://api.github.com/users/nickprock/followers", "following_url": "https://api.github.com/users/nickprock/following{/other_user}", "gists_url": "https://api.github.com/users/nickprock/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickprock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickprock/subscriptions", "organizations_url": "https://api.github.com/users/nickprock/orgs", "repos_url": "https://api.github.com/users/nickprock/repos", "events_url": "https://api.github.com/users/nickprock/events{/privacy}", "received_events_url": "https://api.github.com/users/nickprock/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
## What does this PR do? Italian translation of doc related to the preprocessing of :hugs: Transformers. * updated _toctree.yml * added perf_infer_cpu.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [[#17459](https://www.linkedin.com/feed/hashtag/?keywords=%2317459)](https://github.com/huggingface/transformers/issues/17459) @sgugger, @stevhliu, @MKhalusova and @omarespejel
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22243", "html_url": "https://github.com/huggingface/transformers/pull/22243", "diff_url": "https://github.com/huggingface/transformers/pull/22243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22243.patch", "merged_at": 1679318168000 }
https://api.github.com/repos/huggingface/transformers/issues/22242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22242/comments
https://api.github.com/repos/huggingface/transformers/issues/22242/events
https://github.com/huggingface/transformers/pull/22242
1,630,167,401
PR_kwDOCUB6oc5MWcwL
22,242
[deepspeed zero3] need `generate(synced_gpus=True, ...)`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "So a potential issue I see is that deepspeed is potentially not enabled on all modele, & this would enable `synced_gpus` even for models where it's not enabled? depends on how `is_deepspeed_zero3_enabled` works, which you likely know better than me", "- For Accelerate and HF Trainer everything is done automatically for you.\r\n- If you build your own trainer and follow [the instructions](https://huggingface.co/docs/transformers/main/main_classes/deepspeed#nontrainer-deepspeed-integration) it'll work as well.\r\n\r\n", "1. Are you proposing:\r\n```\r\ndef generate(..., synced_gpus=None)\r\n[...]\r\nif synced_gpus == None:\r\n if is_deepspeed_zero3_enabled() and dist.world_size() > 1:\r\n synced_gpus = True\r\n else:\r\n synced_gpus = False\r\n```\r\nwhich would preserve BC wrt current `synced_gpus=False` in the function definition.\r\n\r\nyes?\r\n\r\n2. and no warning needed then? or still keeping it? \r\n\r\n3. now docs will be mismatching so will need to adapt those to say that by default with multi-gpu it'll be set to `True`, but the user can choose to set it to `False` if they want to.", "Yes, your code is exactly what I'm suggesting. I think it would be a better API since the user wouldn't have to look for warnings (no need for a warning indeed in this case) and would preserve backward compatibility as you mention.", "That sounds good. Thank you for proposing it, Sylvain.\r\n\r\nSo no warning needed, right? As this logic is really about dynamic default setting and it'll be documented as such.", "Yup!", "Thank you for suggesting a more elegant solution than my initial one, Sylvain.", "thanks folks, this is great" ]
1,679
1,679
1,679
CONTRIBUTOR
null
As discussed in https://github.com/huggingface/transformers/issues/22231, `generate` under deepspeed zero3 using different input streams on different GPUs may hang. It's documented [here](https://huggingface.co/docs/transformers/main/main_classes/deepspeed#custom-deepspeed-zero-inference) and in the API docs that `synced_gpus=True` is required, but who reads the docs. So this PR will automatically turn this flag on under ZeRO Stage-3, so everything works out of the box, and it'll warn the user once for their awareness. Fixes: https://github.com/huggingface/transformers/issues/22231
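A sketch of the explicit opt-in this PR automates; `is_deepspeed_zero3_enabled` returns `False` outside a ZeRO-3 run, so the snippet degrades gracefully on a single process:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers.deepspeed import is_deepspeed_zero3_enabled

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: Hello there", return_tensors="pt")

# Under ZeRO-3 every rank must keep stepping through generate() even after
# its own sequence has finished, because the weights are sharded across ranks.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    synced_gpus=is_deepspeed_zero3_enabled(),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```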
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22242/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22242/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22242", "html_url": "https://github.com/huggingface/transformers/pull/22242", "diff_url": "https://github.com/huggingface/transformers/pull/22242.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22242.patch", "merged_at": 1679512737000 }
https://api.github.com/repos/huggingface/transformers/issues/22241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22241/comments
https://api.github.com/repos/huggingface/transformers/issues/22241/events
https://github.com/huggingface/transformers/issues/22241
1,630,146,940
I_kwDOCUB6oc5hKhF8
22,241
How to get T5 decoded logits using TFT5ForConditionalGeneration from encoded outputs?
{ "login": "FrozenWolf-Cyber", "id": 57902078, "node_id": "MDQ6VXNlcjU3OTAyMDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57902078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrozenWolf-Cyber", "html_url": "https://github.com/FrozenWolf-Cyber", "followers_url": "https://api.github.com/users/FrozenWolf-Cyber/followers", "following_url": "https://api.github.com/users/FrozenWolf-Cyber/following{/other_user}", "gists_url": "https://api.github.com/users/FrozenWolf-Cyber/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrozenWolf-Cyber/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrozenWolf-Cyber/subscriptions", "organizations_url": "https://api.github.com/users/FrozenWolf-Cyber/orgs", "repos_url": "https://api.github.com/users/FrozenWolf-Cyber/repos", "events_url": "https://api.github.com/users/FrozenWolf-Cyber/events{/privacy}", "received_events_url": "https://api.github.com/users/FrozenWolf-Cyber/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Without using an encoded vector, this gives me the required output:\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import AutoTokenizer, T5Config, TFT5ForConditionalGeneration, set_seed\r\nset_seed(0)\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\", padding='max_length', truncation=True)\r\ntf_model = TFT5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\ninputs = tokenizer(\"i got permission to begin a start up company by my own..</s>\",return_tensors='tf')\r\nattn = inputs['attention_mask']\r\n\r\ndecoder_input = tf.zeros((1,1), dtype=tf.int64)\r\noutput = tf_model(input_ids=inputs['input_ids'], attention_mask = attn, decoder_input_ids=decoder_input).logits\r\n\r\nprint(tokenizer.batch_decode(output.numpy().argmax(-1).tolist()), output.numpy().argmax(-1).tolist())\r\n```\r\nOutput:\r\n\r\n`[''] [[3]]`\r\n\r\nBut I get a different answer when I try to use the encoded vector as below.\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import AutoTokenizer, T5Config, TFT5ForConditionalGeneration, set_seed\r\nset_seed(0)\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\", padding='max_length', truncation=True)\r\ntf_model = TFT5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\ninputs = tokenizer(\"i got permission to begin a start up company by my own..</s>\",return_tensors='tf')\r\nattn = inputs['attention_mask']\r\n\r\nencoder_outputs = tf_model.encoder(inputs['input_ids'], attention_mask = attn, return_dict = True)\r\noutput = tf_model.decoder(decoder_input, encoder_hidden_states=encoder_outputs.last_hidden_state).last_hidden_state\r\n\r\nprint(tokenizer.batch_decode(output.numpy().argmax(-1).tolist()), output.numpy().argmax(-1).tolist())\r\n```\r\nOutput:\r\n\r\n`['une'] [[245]]`", "Hi @FrozenWolf-Cyber, thanks for raising this issue. \r\n\r\nThis difference is arising because the two scripts are not equivalent. In the forward pass of the T5 model, the output of the decoder is passed to the language model head to produce the outputs - see the [relevant lines here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_tf_t5.py#L1429-L1433). ", "@amyeroberts Thanks for replying,\r\n\r\nI tried do:\r\n```python\r\ntf_model.lm_head(output[0])\r\n```\r\n\r\nBut I seem to be getting the following error:\r\n\r\n```\r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-13-8324bea7f5ea>](https://localhost:8080/#) in <module>\r\n----> 1 tf_model.lm_head(output[0])\r\nAttributeError: 'TFT5ForConditionalGeneration' object has no attribute 'lm_head'\r\n```\r\n\r\n", "This is because, for the `\"t5-small\"` checkpoint config, `tie_word_embeddings==True`. In this case, there isn't a `lm_head` layer, and instead the shared weights are used. 
The relevant lines [are here.](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_tf_t5.py#L1430-L1431) ", "```python\r\nimport tensorflow as tf\r\nfrom transformers import AutoTokenizer, T5Config, TFT5ForConditionalGeneration, set_seed\r\nset_seed(0)\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\", padding='max_length', truncation=True)\r\ntf_model = TFT5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\ninputs = tokenizer(\"i got permission to begin a start up company by my own..</s>\",return_tensors='tf')\r\nattn = inputs['attention_mask']\r\n\r\nencoder_outputs = tf_model.encoder(inputs['input_ids'], attention_mask = attn)\r\ndecoder_input = tf.zeros((1,1), dtype=tf.int64)\r\nsequence_output = tf_model.decoder(decoder_input, encoder_hidden_states=encoder_outputs[0])[0]\r\nsequence_output = sequence_output * (tf_model.model_dim**-0.5)\r\nlogits = tf.matmul(sequence_output, tf_model.shared.weights, transpose_b=True)\r\n\r\nprint(tokenizer.batch_decode(logits.numpy().argmax(-1).tolist()))\r\n```\r\n\r\n@amyeroberts Thank you very much this code works now :)" ]
1,679
1,679
1,679
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-6.1.11-76060111-generic-x86_64-with-glibc2.35 - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): 2.10.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @Rocketknight1 @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import numpy as np import tensorflow as tf from transformers import AutoTokenizer, T5Config, TFT5ForConditionalGeneration distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0) tf_model = TFT5ForConditionalGeneration(config=distill_config) tokenizer = AutoTokenizer.from_pretrained("t5-small", padding='max_length', truncation=True) inputs = tokenizer("this is a random input", return_tensors="tf")['input_ids'] encoder_outputs = tf_model.encoder(inputs) decoder_input_ids = tf.convert_to_tensor(np.asarray([[0]]).astype(np.int32)) output = tf_model.decoder(decoder_input_ids = decoder_input_ids, encoder_outputs=encoder_outputs.last_hidden_state) ``` Error: ```python ValueError Traceback (most recent call last) <ipython-input-5-face8f4fd36f> in <module> 10 encoder_outputs = tf_model.encoder(inputs) 11 decoder_input_ids = tf.convert_to_tensor(np.asarray([[0]]).astype(np.int32)) ---> 12 output = tf_model.decoder(decoder_input_ids = decoder_input_ids, encoder_outputs=encoder_outputs.last_hidden_state) 1 frames /usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 68 # To get the full stack trace, call: 69 # `tf.debugging.disable_traceback_filtering()` ---> 70 raise e.with_traceback(filtered_tb) from None 71 finally: 72 del filtered_tb /usr/local/lib/python3.9/dist-packages/keras/utils/layer_utils.py in split_out_first_arg(self, args, kwargs) 807 inputs = kwargs.pop(self._arg_names[0]) 808 else: --> 809 raise ValueError( 810 "The first argument to `Layer.call` must always be passed." 811 ) ValueError: The first argument to `Layer.call` must always be passed. ``` ### Expected behavior I am trying to convert a TFT5ForConditionalGeneration with custom config into a TFLite model, and as far as I see, implementing a greedy approach on my own seems faster, but if you know a more straightforward process, please let me know. I am currently trying to generate the decoder output using the encoder output, which I will generate only the first time when I pass the entire sentence. And then, I tried to reuse this encoded vector for the rest of the greedy search as input for the decoder.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22241/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22240/comments
https://api.github.com/repos/huggingface/transformers/issues/22240/events
https://github.com/huggingface/transformers/issues/22240
1,630,091,925
I_kwDOCUB6oc5hKTqV
22,240
Add InternImage
{ "login": "Weiyun1025", "id": 47669167, "node_id": "MDQ6VXNlcjQ3NjY5MTY3", "avatar_url": "https://avatars.githubusercontent.com/u/47669167?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Weiyun1025", "html_url": "https://github.com/Weiyun1025", "followers_url": "https://api.github.com/users/Weiyun1025/followers", "following_url": "https://api.github.com/users/Weiyun1025/following{/other_user}", "gists_url": "https://api.github.com/users/Weiyun1025/gists{/gist_id}", "starred_url": "https://api.github.com/users/Weiyun1025/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Weiyun1025/subscriptions", "organizations_url": "https://api.github.com/users/Weiyun1025/orgs", "repos_url": "https://api.github.com/users/Weiyun1025/repos", "events_url": "https://api.github.com/users/Weiyun1025/events{/privacy}", "received_events_url": "https://api.github.com/users/Weiyun1025/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Can I take it up?", "> Can I take it up?\r\n\r\nOf course, thank you!", "@souravpy Are you currently working on this? If not, I would love to take a look to see if I could help in adding this model to HF Transformers!", "The [modeling code and weights](https://huggingface.co/OpenGVLab/internimage_s_1k_224/blob/main/intern_image.py) for Intern Image are already on the hub, and so the model can already be used directly with the `AutoModel` API. \r\n\r\ncf. https://github.com/huggingface/transformers/pull/23782#issuecomment-1568459737" ]
1,679
1,685
null
NONE
null
### Model description InternImage is a new large-scale CNN-based foundation model, which can obtain the gain from increasing parameters and training data like ViTs. Different from the recent CNNs that focus on large dense kernels, InternImage takes deformable convolution as the core operator, so that this model not only has the large effective receptive field required for downstream tasks such as detection and segmentation, but also has the adaptive spatial aggregation conditioned by input and task information. InternImage-H achieved a new record 65.4 mAP on COCO test-dev and 62.9 mIoU on ADE20K, outperforming current leading CNNs and ViTs. It is worth noting that InternImage relies on a custom cuda operator, so if this causes problems for model addition, you can replace [the cuda operator](https://github.com/OpenGVLab/InternImage/blob/master/classification/ops_dcnv3/modules/dcnv3.py#L218) with [a pytorch implementation](https://github.com/OpenGVLab/InternImage/blob/master/classification/ops_dcnv3/modules/dcnv3.py#L91). In fact, we have already submitted [a version of the code on transformers](https://huggingface.co/OpenGVLab/internimage_t_1k_224/tree/main), however, due to security reasons, the code we submitted cannot call your web inference api, so we would like you to add InternImage to transformers. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/OpenGVLab/InternImage
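Since the modeling code is already on the Hub (per the checkpoint linked above), a hedged usage sketch; `trust_remote_code=True` is required to execute the repo's custom code, which is exactly the security caveat this request mentions:

```python
from transformers import AutoModel

# Repo name taken from the checkpoint referenced in this thread.
model = AutoModel.from_pretrained(
    "OpenGVLab/internimage_t_1k_224", trust_remote_code=True
)
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
```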
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22240/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22239/comments
https://api.github.com/repos/huggingface/transformers/issues/22239/events
https://github.com/huggingface/transformers/issues/22239
1,630,089,365
I_kwDOCUB6oc5hKTCV
22,239
bos_token and eos_token for Llama tokenizer
{ "login": "yujianll", "id": 46540151, "node_id": "MDQ6VXNlcjQ2NTQwMTUx", "avatar_url": "https://avatars.githubusercontent.com/u/46540151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yujianll", "html_url": "https://github.com/yujianll", "followers_url": "https://api.github.com/users/yujianll/followers", "following_url": "https://api.github.com/users/yujianll/following{/other_user}", "gists_url": "https://api.github.com/users/yujianll/gists{/gist_id}", "starred_url": "https://api.github.com/users/yujianll/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yujianll/subscriptions", "organizations_url": "https://api.github.com/users/yujianll/orgs", "repos_url": "https://api.github.com/users/yujianll/repos", "events_url": "https://api.github.com/users/yujianll/events{/privacy}", "received_events_url": "https://api.github.com/users/yujianll/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! There must be a typo in your `generation_config` as the `convert_llama_weights_to_hf.py` as well as `configuration_llama` both set it to `2`. Are you sure that you are using the latest scripts? \r\nThe fix is just `model.config.eos_token_id = 2` in this case. ", "I see. The [config.json](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/config.json) and [generation_config.json](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/generation_config.json) both set it to 1. So I will change it to 2 for now.", "Thank you \n\n\nSent from Yahoo Mail for iPhone\n\n\nOn Monday, March 20, 2023, 11:16 AM, Yujian Liu ***@***.***> wrote:\n\n\n\n\nI see. The config.json and generation_config.json both set it to 1. So I will change it to 2 for now.\n\n—\nReply to this email directly, view it on GitHub, or unsubscribe.\nYou are receiving this because you are subscribed to this thread.Message ID: ***@***.***>\n\n\n\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @ArthurZucker @zphan ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` model = AutoModelForCausalLM.from_pretrained("./llama-7b-hf") tokenizer = AutoTokenizer.from_pretrained("./llama-7b-hf", use_fast=False) ``` `model.config.eos_token_id` shows 1, but `tokenizer.eos_token_id` shows 2. ### Expected behavior I wonder if they should be the same, or am I missing something?
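The resolution in the comments boils down to overriding the stale config value; a minimal sketch of that workaround (`2` is the value the conversion script and `configuration_llama` use, per the maintainer's reply):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("./llama-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("./llama-7b-hf", use_fast=False)

# Checkpoints converted with older scripts may ship eos_token_id = 1 in
# config.json; align the model config with the tokenizer.
model.config.eos_token_id = 2
assert model.config.eos_token_id == tokenizer.eos_token_id
```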
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22239/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22238/comments
https://api.github.com/repos/huggingface/transformers/issues/22238/events
https://github.com/huggingface/transformers/pull/22238
1,630,016,187
PR_kwDOCUB6oc5MV-5r
22,238
replace_8bit_linear modules_to_not_convert default value fix
{ "login": "BlackSamorez", "id": 16901341, "node_id": "MDQ6VXNlcjE2OTAxMzQx", "avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BlackSamorez", "html_url": "https://github.com/BlackSamorez", "followers_url": "https://api.github.com/users/BlackSamorez/followers", "following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}", "gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}", "starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions", "organizations_url": "https://api.github.com/users/BlackSamorez/orgs", "repos_url": "https://api.github.com/users/BlackSamorez/repos", "events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}", "received_events_url": "https://api.github.com/users/BlackSamorez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes the default value of `modules_to_not_convert` of `utils.bitsandbytes.replace_8bit_linear`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22238/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22238", "html_url": "https://github.com/huggingface/transformers/pull/22238", "diff_url": "https://github.com/huggingface/transformers/pull/22238.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22238.patch", "merged_at": 1679393768000 }
https://api.github.com/repos/huggingface/transformers/issues/22237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22237/comments
https://api.github.com/repos/huggingface/transformers/issues/22237/events
https://github.com/huggingface/transformers/pull/22237
1,629,741,000
PR_kwDOCUB6oc5MVD5l
22,237
Update vision docstring bool masked pos
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? Add the missing `bool_masked_pos` information in the docstring for vision models. Fixes #21484 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22237/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22237", "html_url": "https://github.com/huggingface/transformers/pull/22237", "diff_url": "https://github.com/huggingface/transformers/pull/22237.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22237.patch", "merged_at": 1679342776000 }
https://api.github.com/repos/huggingface/transformers/issues/22236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22236/comments
https://api.github.com/repos/huggingface/transformers/issues/22236/events
https://github.com/huggingface/transformers/pull/22236
1,629,730,916
PR_kwDOCUB6oc5MVBvK
22,236
Rework a bit the LLaMA conversion script
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I don't see how you can reduce the memory requirement since the files provided by Meta each contain a part of all weights, so you need to have them all loaded to reconstruct just one of the weights. That's why I didn't bother implementing sharding on the fly.", "Indeed, just realised you have to `cat` them 😞 my bad!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22236). All of your documentation changes will be reflected on that endpoint." ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? This PR makes sure the LLaMA conversion script stays up to date with `save_pretrained` by loading the checkpoint into an actual model and then saving it via that method. This avoids a lot of hard-coded values in JSON files. It keeps the old logic and merely re-loads the result into a Transformers model (cleaning up intermediate objects to make sure we never go above the model size in CPU RAM). It also changes the API a bit to put everything in the output folder, like we usually have in repos on huggingface. cc @zphang so you are aware of this.
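A minimal sketch of the load-then-`save_pretrained` pattern described above. The directory names and the `low_cpu_mem_usage` flag are illustrative assumptions, not the script's actual code:

```python
import torch
from transformers.models.llama.modeling_llama import LlamaForCausalLM

# Assumed paths: `converted_dir` holds weights written by the old conversion
# logic; `output_dir` is where the Hub-style repo layout should land.
converted_dir = "/tmp/llama-7b-intermediate"
output_dir = "/tmp/llama-7b-hf"

# Re-load the converted checkpoint through the real model class; low_cpu_mem_usage
# keeps peak CPU RAM close to a single copy of the weights.
model = LlamaForCausalLM.from_pretrained(
    converted_dir, torch_dtype=torch.float16, low_cpu_mem_usage=True
)

# Let save_pretrained write config.json and the (sharded) weight files instead
# of maintaining hard-coded JSON metadata inside the script.
model.save_pretrained(output_dir)
```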
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22236/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22236", "html_url": "https://github.com/huggingface/transformers/pull/22236", "diff_url": "https://github.com/huggingface/transformers/pull/22236.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22236.patch", "merged_at": 1679326237000 }
https://api.github.com/repos/huggingface/transformers/issues/22235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22235/comments
https://api.github.com/repos/huggingface/transformers/issues/22235/events
https://github.com/huggingface/transformers/pull/22235
1,629,606,384
PR_kwDOCUB6oc5MUmwf
22,235
Wav2Vec2ProcessorWithLM can return N best hypotheses now
{ "login": "vsokolovskii", "id": 48914918, "node_id": "MDQ6VXNlcjQ4OTE0OTE4", "avatar_url": "https://avatars.githubusercontent.com/u/48914918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vsokolovskii", "html_url": "https://github.com/vsokolovskii", "followers_url": "https://api.github.com/users/vsokolovskii/followers", "following_url": "https://api.github.com/users/vsokolovskii/following{/other_user}", "gists_url": "https://api.github.com/users/vsokolovskii/gists{/gist_id}", "starred_url": "https://api.github.com/users/vsokolovskii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vsokolovskii/subscriptions", "organizations_url": "https://api.github.com/users/vsokolovskii/orgs", "repos_url": "https://api.github.com/users/vsokolovskii/repos", "events_url": "https://api.github.com/users/vsokolovskii/events{/privacy}", "received_events_url": "https://api.github.com/users/vsokolovskii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "We might have more luck with @sanchit-gandhi ;-)", "> Thanks a lot for the PR @vsokolovskii,\r\n> \r\n> Just to better understand what happens now in case we decoder a batch of logits with `n_best > 1` - > will we return a list of a list of text in this case?\r\n> \r\n> Wondering if that's the API that we want - @sanchit-gandhi wdyt?\r\n\r\nTake a look at the [description of the output class arguments](https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L45) that you have, you've already prepared everything for this change and I just added the return statement. There should be the possibility to get more than one hypothesis from the ASR in order to rescore it with a larger model, take a look at the motivation section in the linked issue. 🤗 \r\n\r\n", "@sanchit-gandhi aha... got it. Check out the new changes, please.\r\n\r\n\r\n> Very cool feature @vsokolovskii! Regarding @patrickvonplaten's question about batch decoding, we don't actually have the argument `n_best` for the `batch_decode` method, it's only for the single-item, `decode` method. So currently, we'd never be returning batches of n-best hypothesis.\r\n> \r\n> WDYT about adding `n_best` to the `batch_decode` method as well @vsokolovskii? In this case, I think we should match the output format to generate's beam search method as `[batches * num_sequences, output_sequences]` (see https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation.BeamSearchDecoderOnlyOutput.sequences)\r\n\r\n", "@ArthurZucker @amyeroberts , could you please rerun the tests once the pipeline is fixed, I believe that it's not caused by my changes.", "The code quality check not passing is not due to your PR at first glance, but to make sure, could you rebase on main? It has been fixed on the main branch.", "> The code quality check not passing is not due to your PR at first glance, but to make sure, could you rebase on main? It has been fixed on the main branch.\r\n\r\nthanks! forgot yo update my fork" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Fixes #22150. Now the user can specify the number of hypotheses that will be returned after the decoding stage. If the specified number is higher than the actual number of hypotheses, all hypotheses will be returned. This is useful when the user wants to run rescoring on the n-best hypotheses (check out the motivation in the linked issue). The Wav2Vec2DecoderWithLMOutput class was already prepared for this feature and [this comment in the code](https://github.com/huggingface/transformers/blob/2355e463955a5392c1acf1964d89747e8b146a6f/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L571) said that this feature would eventually be added, so here it is. I tried not to break anything that relies on the current version of the decode function; the docstring is updated with the new parameter. All tests passed. The code was well-formatted. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? **Is this necessary for such a small feature?** @younesbelkada @ArthurZucker @sanchit-gandhi, does this make sense to you, guys? Is there anything else I should add?
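A minimal usage sketch of the `n_best` argument this PR adds to `decode` (the checkpoint name and the random-noise audio stand-in are illustrative assumptions):

```python
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

# A wav2vec2 checkpoint that ships a kenlm decoder, so AutoProcessor
# resolves to Wav2Vec2ProcessorWithLM.
model_id = "patrickvonplaten/wav2vec2-base-100h-with-lm"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

audio = np.random.randn(16_000).astype(np.float32)  # stand-in for 1s of 16kHz speech
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# With n_best > 1, output.text is a list of transcriptions (best first)
# rather than a single string, ready for rescoring with a larger LM.
output = processor.decode(logits[0].numpy(), n_best=3)
print(output.text)
```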
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22235/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22235", "html_url": "https://github.com/huggingface/transformers/pull/22235", "diff_url": "https://github.com/huggingface/transformers/pull/22235.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22235.patch", "merged_at": 1679927867000 }
https://api.github.com/repos/huggingface/transformers/issues/22234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22234/comments
https://api.github.com/repos/huggingface/transformers/issues/22234/events
https://github.com/huggingface/transformers/pull/22234
1,629,581,594
PR_kwDOCUB6oc5MUhUI
22,234
Fix Unnecessary move of tensors from CPU to GPU in LlamaRotaryEmbedding
{ "login": "ma787639046", "id": 63697972, "node_id": "MDQ6VXNlcjYzNjk3OTcy", "avatar_url": "https://avatars.githubusercontent.com/u/63697972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ma787639046", "html_url": "https://github.com/ma787639046", "followers_url": "https://api.github.com/users/ma787639046/followers", "following_url": "https://api.github.com/users/ma787639046/following{/other_user}", "gists_url": "https://api.github.com/users/ma787639046/gists{/gist_id}", "starred_url": "https://api.github.com/users/ma787639046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ma787639046/subscriptions", "organizations_url": "https://api.github.com/users/ma787639046/orgs", "repos_url": "https://api.github.com/users/ma787639046/repos", "events_url": "https://api.github.com/users/ma787639046/events{/privacy}", "received_events_url": "https://api.github.com/users/ma787639046/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Did you accidentally break meta loading?\r\n`with init_empty_weights():`\r\nleaves `cos_cached` and `sin_cached` on meta device and they won't be initialized because they are not persistent. ", "Since `inv_freq` is a persistent buffer, it should be ok to also make harmonics persistent", "Hi @BlackSamorez, would you like to open a PR with these suggested changes including details about the issue they resolve? ", "@BlackSamorez `init_empty_weights` ignores buffers by default, so this should not cause any problem. We have multiple instance of non-persistent buffers in the lib and this is not a problem. I've also run Llama without any issue after it being initialized on the meta device.", "Hi, I test the following two codes on my device. It seems the meta device works correctly in this PR.\r\n\r\n```python\r\nimport pickle\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\r\n\r\nimport torch\r\nfrom transformers.models.llama.configuration_llama import LlamaConfig\r\nfrom transformers.models.llama.modeling_llama import LlamaForCausalLM\r\n\r\nmodel_name_or_path = \"decapoda-research/llama-7b-hf\"\r\n\r\nmodel1 = LlamaForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16)\r\nmodel1 = model1.to(torch.device(\"cuda:0\"))\r\n\r\n# Save a initialized cos_cached tensor to `cos1.pt`, for comparasion with meta device loading\r\ncos1 = model1.model.layers[0].self_attn.rotary_emb.cos_cached.to(torch.device(\"cpu\"))\r\npickle.dump(cos1, open(\"cos1.pt\", 'wb'))\r\n```\r\n\r\n```python\r\nimport pickle\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\r\n\r\nimport torch\r\nfrom transformers.models.llama.configuration_llama import LlamaConfig\r\nfrom transformers.models.llama.modeling_llama import LlamaForCausalLM\r\n\r\nfrom accelerate import init_empty_weights, infer_auto_device_map, load_checkpoint_and_dispatch\r\n\r\nmodel_name_or_path = \"decapoda-research/llama-7b-hf\"\r\n\r\nconfig = LlamaConfig.from_pretrained(model_name_or_path, torch_dtype=torch.float16)\r\nwith init_empty_weights():\r\n model0 = LlamaForCausalLM(config)\r\n\r\nmodel0 = load_checkpoint_and_dispatch(\r\n model0, model_name_or_path, device_map='auto', \r\n)\r\n# Compare the `cos_cached` tensor\r\ncos0 = model0.model.layers[0].self_attn.rotary_emb.cos_cached.to(torch.device(\"cpu\"))\r\ncos1 = pickle.load(open(\"cos1.pt\", 'rb'))\r\nall((cos0==cos1).tolist()) # True\r\n```\r\n\r\n@BlackSamorez Maybe you can check the results on your device.", "Yes, you're right and I was wrong. It works and the problem was in entirely different part of my program.\r\nConsider https://github.com/huggingface/transformers/pull/22234#discussion_r1143320748 and https://github.com/huggingface/transformers/pull/22234#discussion_r1143321732 invalid.\r\nThank you!", "I'm still facing this issue with latest deepspeed (0.9.5+1491e14e) and transformers (4.31.0.dev0). 
I feel this issue is more likely related to the LLaMA implementation here (LlamaRotaryEmbedding).\r\n\r\n```\r\nRuntimeErrorcos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]: \r\nindices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n RuntimeErrorcos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]: \r\nindices should be either on cpu or on the same device as the indexed tensor (cpu) \r\nRuntimeErrorcos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]: \r\nindices should be either on cpu or on the same device as the indexed tensor (cpu)\r\nRuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) \r\ncos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\nRuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\nRuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\nRuntimeError : cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n\r\nRuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n```", "> I'm still facing this issue with latest deepspeed (0.9.5+1491e14e) and transformers (4.31.0.dev0). I feel this issue is more likely related to the LLaMA implementation here (LlamaRotaryEmbedding).\r\n> \r\n> ```\r\n> RuntimeErrorcos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]: \r\n> indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n> RuntimeErrorcos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]: \r\n> indices should be either on cpu or on the same device as the indexed tensor (cpu) \r\n> RuntimeErrorcos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]: \r\n> indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n> RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) \r\n> cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\n> RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n> cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\n> RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n> cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\n> RuntimeError : cos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n> \r\n> RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)\r\n> ```\r\n\r\nI encountered exactly the same issue,training failed when using zero3", "Thanks for the input, will investigate! ", "any updates? I also meet this issues. with ds==0.9.3, transformers==4.32.0dev", "Did not have time to investigate, we are going to need a reproducer if you want some help here. Pinging @pacman100 when we have a reproducer shared! \r\n" ]
1,679
1,692
1,679
CONTRIBUTOR
null
# What does this PR do? The original implementation of LlamaRotaryEmbedding does not register the `cos_cached` & `sin_cached` tensors as PyTorch Parameters or Buffers, so these tensors do not move to the GPU when we use `model.to(gpu_id)` or `model.cuda()`. They stay on the CPU. This PR registers the `cos_cached` & `sin_cached` tensors as buffers with `persistent=False`. This makes these tensors move from CPU to GPU together with the model, while keeping them out of the model's state_dict as before. # Fixes: Fix unnecessary moves of tensors from CPU to GPU in LlamaRotaryEmbedding, saving a large amount of CPU usage, especially during inference. Code for reproducing the issue: ```python import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" # Single card Generation from tqdm import tqdm import torch from transformers.models.llama.modeling_llama import LlamaForCausalLM from transformers.models.llama.tokenization_llama import LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf", torch_dtype=torch.float16) model = model.cuda() model.eval() # Batch generation inputs = [ "LLaMa is a large language model developed by Meta AI, for", ] * 32 batch = tokenizer(inputs, return_tensors="pt", add_special_tokens=False) batch = batch.to(model.device) # Here we do some computationally heavy batched generation for i in tqdm(range(5000)): generated = model.generate(batch["input_ids"], temperature=0.7, top_p=0.9, do_sample=True, num_beams=1, max_new_tokens=600,) ``` Use the `top` command in bash to watch the CPU usage. Here is a comparison before and after applying this PR: Before: | Fix | USER | PR | NI | VIRT | RES | SHR | S | %CPU | %MEM | TIME+ | COMMAND | |--------|------|----|----|--------|------|--------|---|------|------|---------|----------| | Before | root | 20 | 0 | 108.6g | 1.9g | 411620 | R | 6263 | 0.2 | 40:28.1 | python | After: | Fix | USER | PR | NI | VIRT | RES | SHR | S | %CPU | %MEM | TIME+ | COMMAND | |-------|------|----|----|--------|------|--------|---|------|------|---------|----------| | After | root | 20 | 0 | 108.6g | 1.8g | 414360 | R | 98.3 | 0.2 | 03:21.6 | python | Here the CPU usage drops to a normal level because the `cos_cached` & `sin_cached` tensors move to the GPU correctly with the model. This avoids unnecessary moves of tensors from CPU to GPU in LlamaRotaryEmbedding. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada
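A minimal sketch of the non-persistent-buffer pattern this PR describes; the rotary-embedding math follows the usual recipe and should be read as illustrative rather than the exact diff:

```python
import torch
from torch import nn

class RotaryEmbeddingSketch(nn.Module):
    def __init__(self, dim: int, max_position_embeddings: int = 2048, base: int = 10000):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        self.register_buffer("inv_freq", inv_freq)  # persistent, as before

        t = torch.arange(max_position_embeddings, dtype=inv_freq.dtype)
        freqs = torch.einsum("i,j->ij", t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        # persistent=False: the caches follow model.to(...)/model.cuda() like any
        # buffer, but are excluded from state_dict, so checkpoints are unchanged.
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)

rope = RotaryEmbeddingSketch(dim=128)
if torch.cuda.is_available():
    rope = rope.cuda()                  # caches move to the GPU with the module
    assert rope.cos_cached.is_cuda
assert "cos_cached" not in rope.state_dict()  # but stay out of saved checkpoints
```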
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22234/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22234/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22234", "html_url": "https://github.com/huggingface/transformers/pull/22234", "diff_url": "https://github.com/huggingface/transformers/pull/22234.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22234.patch", "merged_at": 1679075792000 }
https://api.github.com/repos/huggingface/transformers/issues/22233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22233/comments
https://api.github.com/repos/huggingface/transformers/issues/22233/events
https://github.com/huggingface/transformers/pull/22233
1,629,538,602
PR_kwDOCUB6oc5MUX-j
22,233
Revert "Use `dash==2.8.1` for now for daily CI"
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22233). All of your documentation changes will be reflected on that endpoint." ]
1,679
1,679
1,679
COLLABORATOR
null
Reverts huggingface/transformers#22227. The new version, [dash 2.9.1](https://github.com/plotly/dash/releases/tag/v2.9.1), works with our CI (tested), so we no longer need the pin from #22227.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22233/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22233", "html_url": "https://github.com/huggingface/transformers/pull/22233", "diff_url": "https://github.com/huggingface/transformers/pull/22233.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22233.patch", "merged_at": 1679068468000 }
https://api.github.com/repos/huggingface/transformers/issues/22232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22232/comments
https://api.github.com/repos/huggingface/transformers/issues/22232/events
https://github.com/huggingface/transformers/pull/22232
1,629,470,598
PR_kwDOCUB6oc5MUJcY
22,232
Fix llama_tokenizer
{ "login": "Splo2t", "id": 31382462, "node_id": "MDQ6VXNlcjMxMzgyNDYy", "avatar_url": "https://avatars.githubusercontent.com/u/31382462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Splo2t", "html_url": "https://github.com/Splo2t", "followers_url": "https://api.github.com/users/Splo2t/followers", "following_url": "https://api.github.com/users/Splo2t/following{/other_user}", "gists_url": "https://api.github.com/users/Splo2t/gists{/gist_id}", "starred_url": "https://api.github.com/users/Splo2t/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Splo2t/subscriptions", "organizations_url": "https://api.github.com/users/Splo2t/orgs", "repos_url": "https://api.github.com/users/Splo2t/repos", "events_url": "https://api.github.com/users/Splo2t/events{/privacy}", "received_events_url": "https://api.github.com/users/Splo2t/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22232). All of your documentation changes will be reflected on that endpoint." ]
1,679
1,679
1,679
NONE
null
Fixes #22222. This PR fixes the `LlamaTokenizer` import by fixing the `__init__.py` file in `src/transformers`.
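A quick way to sanity-check a fix like this, as a sketch (the checkpoint name is borrowed from the reproduction scripts elsewhere in this dump and is an assumption here):

```python
# This top-level import is what reportedly failed before the __init__.py fix.
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
print(tokenizer.tokenize("Hello, LLaMA!"))
```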
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22232/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22232", "html_url": "https://github.com/huggingface/transformers/pull/22232", "diff_url": "https://github.com/huggingface/transformers/pull/22232.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22232.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22231/comments
https://api.github.com/repos/huggingface/transformers/issues/22231/events
https://github.com/huggingface/transformers/issues/22231
1,629,434,199
I_kwDOCUB6oc5hHzFX
22,231
Detect Accelerate's DeepSpeed level 3 Env Vars and warn if synced_gpus is False
{ "login": "JulesGM", "id": 3231217, "node_id": "MDQ6VXNlcjMyMzEyMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JulesGM", "html_url": "https://github.com/JulesGM", "followers_url": "https://api.github.com/users/JulesGM/followers", "following_url": "https://api.github.com/users/JulesGM/following{/other_user}", "gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}", "starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions", "organizations_url": "https://api.github.com/users/JulesGM/orgs", "repos_url": "https://api.github.com/users/JulesGM/repos", "events_url": "https://api.github.com/users/JulesGM/events{/privacy}", "received_events_url": "https://api.github.com/users/JulesGM/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "cc @stas00 and @pacman100 ", "Totally. Thank you for bringing it up, @JulesGM \r\n\r\nThe API for checking this situation is already available and is being used in the HF Trainer:\r\n\r\nhttps://github.com/huggingface/transformers/blob/bec075612a293a66022937f65ba0c0df25224d29/src/transformers/trainer_seq2seq.py#L180-L188\r\n\r\nFor DIY integration we can \r\n1. document it here: https://huggingface.co/docs/transformers/main/main_classes/deepspeed#nontrainer-deepspeed-integration\r\n2. and add an assert inside `generate` if it is called w/o this flag and WORLD_SIZE>1 and zero3. No warnings please - nobody sees those. (need to think how to check world_size inside `generate` but checking for deepspeed first will enable a definite use of `torch.distributed.get_world_size()` so should be easy).\r\n\r\nWould you like to work on that, @JulesGM? I'd be happy to support you or I might find time to do it myself some time later. Totally up to you.", "That's great to hear Stas. \n\nHonestly I'm kind of working night and day for my thesis deadline right now, so if you want to do it, it would be much appreciated.", "Thank you for letting me know your preference, please try this PR and let me know if it solves the problem for you, @JulesGM \r\n\r\nhttps://github.com/huggingface/transformers/pull/22242\r\n\r\nI decided to just set it automatically if it wasn't set.\r\n\r\nThe docs were already correct, so no need to change them." ]
1,679
1,679
1,679
NONE
null
### Feature request If `ACCELERATE_DEEPSPEED_ZERO_STAGE` == 3 and generate is called without `synced_gpus`, it would be reasonable to warn the user that, if they're doing a distributed call to generate with a DeepSpeed model, they need to pass the `synced_gpus` argument to generate. ### Motivation ## Background DeepSpeed stage 3 shards the parameters, so it requires that `model.forward` be called the same number of times on each process, even at inference time, so the weights can be moved around in time. `model.forward` is called once for each generated token at generation time. If a process stops generating before the others, DeepSpeed stage 3 breaks because `model.forward` is no longer called in the processes where generation is over. That's why the `synced_gpus` argument exists in `model.generate`: with it, `model.forward` keeps getting called until all processes are done generating. ## Accelerate Has Env Vars that Indicate Stage 3 When using DeepSpeed, Accelerate sets an env var called `ACCELERATE_DEEPSPEED_ZERO_STAGE` that contains the stage. While `ACCELERATE_DEEPSPEED_ZERO_STAGE` being set to 3 doesn't guarantee that the model being called is distributed, it is a pretty strong indication in practice, and it would be reasonable to emit a warning if `model.generate` (and possibly `model.greedy_search`, etc.) is called without `synced_gpus`, as new users will probably not know about this. If there is a more reliable way for `model.generate` to know whether the model is distributed with DeepSpeed stage 3, that could be used to warn the user as well, of course. ### Your contribution I can do it, but for these nuanced, low-code-quantity things, you folks are probably better placed than me.
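A rough sketch of the check this request asks for; the helper name and its placement inside `generate` are assumptions, while the env var name comes from the issue itself:

```python
import os
import warnings

import torch.distributed as dist

def _warn_if_unsynced_zero3(synced_gpus: bool) -> None:
    """Hypothetical guard: warn when generation starts without synced_gpus
    while Accelerate's env vars point at DeepSpeed ZeRO stage 3."""
    zero3 = os.environ.get("ACCELERATE_DEEPSPEED_ZERO_STAGE") == "3"
    multi_proc = dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1
    if zero3 and multi_proc and not synced_gpus:
        warnings.warn(
            "DeepSpeed ZeRO-3 appears active in a distributed run but "
            "synced_gpus=False; generation can hang if one process finishes "
            "before the others. Call model.generate(..., synced_gpus=True)."
        )
```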
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22231/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22231/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22230/comments
https://api.github.com/repos/huggingface/transformers/issues/22230/events
https://github.com/huggingface/transformers/pull/22230
1,629,388,485
PR_kwDOCUB6oc5MT3_c
22,230
Removed .mdx extension in two links
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,699
1,679
CONTRIBUTOR
null
This PR fixes two links that contained an `.mdx` extension they shouldn't have had.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22230/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22230", "html_url": "https://github.com/huggingface/transformers/pull/22230", "diff_url": "https://github.com/huggingface/transformers/pull/22230.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22230.patch", "merged_at": 1679063233000 }
https://api.github.com/repos/huggingface/transformers/issues/22229
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22229/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22229/comments
https://api.github.com/repos/huggingface/transformers/issues/22229/events
https://github.com/huggingface/transformers/pull/22229
1,629,365,658
PR_kwDOCUB6oc5MTzCC
22,229
Fix natten
{ "login": "alihassanijr", "id": 68103095, "node_id": "MDQ6VXNlcjY4MTAzMDk1", "avatar_url": "https://avatars.githubusercontent.com/u/68103095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alihassanijr", "html_url": "https://github.com/alihassanijr", "followers_url": "https://api.github.com/users/alihassanijr/followers", "following_url": "https://api.github.com/users/alihassanijr/following{/other_user}", "gists_url": "https://api.github.com/users/alihassanijr/gists{/gist_id}", "starred_url": "https://api.github.com/users/alihassanijr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alihassanijr/subscriptions", "organizations_url": "https://api.github.com/users/alihassanijr/orgs", "repos_url": "https://api.github.com/users/alihassanijr/repos", "events_url": "https://api.github.com/users/alihassanijr/events{/privacy}", "received_events_url": "https://api.github.com/users/alihassanijr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@alihassanijr Thanks for such a quick fix! Just double checking - is this version of `natten` compatible with later versions of PyTorch, or just >= 2.x.x?", "My pleasure, sorry I didn't realize this earlier.\r\nYes, `0.14.5` still supports torch >= 1.8 and comes with wheels for those:\r\nhttps://shi-labs.com/natten\r\n\r\nSo the problem was that we had a pull request a couple of months ago that added an additional argument to one of the C functions. We didn't immediately rebuild and push out a new release at the time.\r\nWe did however push out a new build to support PyTorch 2.0, and it included this change, which is why we had to open a PR here as well.\r\n\r\nOn a different note, we only had to change this here in the first place because we explicitly use the C function calls in the models using NA:\r\n\r\nhttps://github.com/alihassanijr/transformers/blob/6125d62e05aba0bd1f6a53bd3bf44b4d86b58f25/src/transformers/models/dinat/modeling_dinat.py#L350\r\n\r\nhttps://github.com/alihassanijr/transformers/blob/6125d62e05aba0bd1f6a53bd3bf44b4d86b58f25/src/transformers/models/nat/modeling_nat.py#L342\r\n\r\nWe could try and figure out how we would directly import the nn.Module we typically encourage everyone to use, that way any changes to the signatures would not affect `transformers`.\r\n\r\nNATTEN can in theory support future PyTorch versions without any change required (unless PyTorch changes anything in their ATEN backend like they did with the dispatchers in 1.13, which would require us to work those changes into our CPP backend as well.)\r\nThe only slight hitch is that if users want to install NATTEN with wheels (and not have to wait for pip to build it locally), we have to build those on our end and upload them.\r\n\r\nI've been wanting to set up CircleCI or Travis so that I wouldn't have to set that up manually every time there's a new PyTorch release, but just haven't found the time to do so yet. But we will to the best of our abilities try to build them upon new PyTorch releases and push them out as soon as possible." ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? The new NATTEN 0.14.5 supports PyTorch 2.0, but it also adds an additional argument to the QK operation to allow optional RPBs, which ended up breaking the NATTEN tests. This commit adds NATTEN back to CircleCI and updates the calls with the new argument to get things working again. Reverts #22218. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger ## Misc Related issues on NATTEN: https://github.com/SHI-Labs/NATTEN/issues/23 https://github.com/SHI-Labs/NATTEN/issues/19
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22229/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22229", "html_url": "https://github.com/huggingface/transformers/pull/22229", "diff_url": "https://github.com/huggingface/transformers/pull/22229.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22229.patch", "merged_at": 1679065675000 }
https://api.github.com/repos/huggingface/transformers/issues/22228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22228/comments
https://api.github.com/repos/huggingface/transformers/issues/22228/events
https://github.com/huggingface/transformers/pull/22228
1,629,317,461
PR_kwDOCUB6oc5MToy1
22,228
Fix state dict loading via symlink on windows
{ "login": "Schmavery", "id": 2154522, "node_id": "MDQ6VXNlcjIxNTQ1MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2154522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Schmavery", "html_url": "https://github.com/Schmavery", "followers_url": "https://api.github.com/users/Schmavery/followers", "following_url": "https://api.github.com/users/Schmavery/following{/other_user}", "gists_url": "https://api.github.com/users/Schmavery/gists{/gist_id}", "starred_url": "https://api.github.com/users/Schmavery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Schmavery/subscriptions", "organizations_url": "https://api.github.com/users/Schmavery/orgs", "repos_url": "https://api.github.com/users/Schmavery/repos", "events_url": "https://api.github.com/users/Schmavery/events{/privacy}", "received_events_url": "https://api.github.com/users/Schmavery/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Wauplin looks like something that should be in huggingface_hub (it it's not already).", "_The documentation is not available anymore as the PR was closed or merged._", "Aie, this is a real problem I think. In `huggingface_hub` we return a path to the snapshots/ folder that is indeed a symlink to a file in the blobs/ folder. In the case of a `hf_hub_download`, I would be fine with doing a `os.path.realpath` before returning the path but that would still be an issue when doing `snapshot_download`.\r\n\r\nThe point of having a `snapshots/` folder as we did is to provide the same file structure as in the repo for third-party libraries. But if Windows has a \"funny way to handle symlinks\" by not following them, I'm afraid `huggingface_hub` can't do anything about it except really changing the cache structure.\r\n\r\nWhat I'm wondering here is why is has not been discovered before. @Schmavery would it be possible that you first ran a script in developer mode/as admin that have cached files using symlinks and you are now re-running the script in \"normal\" mode which result in not being able to follow symlinks? (for the record, we already had some issues with symlinks on windows and [decided to duplicate files](https://github.com/huggingface/huggingface_hub/issues/1062#issuecomment-1256054899) for non-dev non-admin users)", "cc @LysandreJik @julien-c about the cache-system design", "@Wauplin thanks for the quick reply!\r\n\r\nI'm also curious why I'm the first to run into this, though at this point I'm used to things not working in Windows because of all the different ways things can be set up!\r\nI don't think I ran anything as admin. I'm happy to run whatever command you need to get more info about the setup, but from some basic `ls` it looks like the permissions/ownership is as I might have expected.\r\n\r\n```\r\nschmavery ~/git/sd-test $ ls -l ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/\r\ntotal 4\r\ndrwxr-xr-x 1 schmavery 0 Mar 17 08:26 blobs\r\ndrwxr-xr-x 1 schmavery 0 Mar 16 22:33 refs\r\ndrwxr-xr-x 1 schmavery 0 Mar 16 22:33 snapshots\r\n\r\nschmavery ~/git/sd-test $ ls -l ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/snapshots/\r\ntotal 4\r\ndrwxr-xr-x 1 schmavery 0 Mar 16 22:33 1cb61502fc8b634cdb04e7cd69e06051a728bedf\r\n\r\nschmavery ~/git/sd-test $ ls -lh ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs/\r\ntotal 2.5G\r\n-rw-r--r-- 1 schmavery 160M Mar 16 22:33 11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030\r\n-rw-r--r-- 1 schmavery 607 Mar 16 22:33 14bcdff46ade71e94221b696cefbad2382223370\r\n-rw-r--r-- 1 schmavery 1.7G Mar 16 22:35 34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650\r\n-rw-r--r-- 1 schmavery 1.1M Mar 16 22:33 469be27c5c010538f845f518c4f5e8574c78f7c8\r\n-rw-r--r-- 1 schmavery 340 Mar 16 22:33 4a37db2129e08cb00670e652398a8f3960d97d0e\r\n-rw-r--r-- 1 schmavery 513K Mar 16 22:33 76e821f1b6f0a9709293c3b6b51ed90980b3166b\r\n-rw-r--r-- 1 schmavery 905 Mar 16 22:33 9e3e87514708d0a2b44abfa0096ec14802862f5d\r\n-rw-r--r-- 1 schmavery 511 Mar 16 22:33 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d\r\n-rw-r--r-- 1 schmavery 629 Mar 16 22:33 a08e9e082e6ab9044bdd2926092ce2e4f33d2272\r\n-rw-r--r-- 1 schmavery 460 Mar 16 22:33 ae0c5be6f35217e51c4c000fd325d8de0294e99c\r\n-rw-r--r-- 1 schmavery 820 Mar 16 22:33 e966b0b8955e8c66a0717acb2ce5041274d7c60a\r\n-rw-r--r-- 1 schmavery 650M Mar 17 08:26 f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730\r\n```", "Hi 
@Schmavery, thanks for reporting this.\r\n\r\nI sorry that this bug has being introduced recently. It seems that Windows has issues following absolute symlinks in some cases. It has been reported in https://github.com/huggingface/huggingface_hub/issues/1398, https://github.com/huggingface/diffusers/issues/2729 and https://github.com/huggingface/transformers/pull/22228 (and mentioned in https://github.com/huggingface/huggingface_hub/issues/1396). I'll provide a quick ASAP.", "@Schmavery could you please retry using [`huggingface_hub==0.13.3`](https://github.com/huggingface/huggingface_hub/releases/tag/v0.13.3)? It should fix your problem. Before that you need to delete your folder `\"~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/snapshots/\"` to delete the existing (non-working) symlinks.\r\n\r\nIf the issue persists, please let me know.", "@Wauplin I just tried your new version and something still doesn't seem to be working, though it seems like it's something else now.\r\n\r\nThe relative symlink is being created, but the blob that it is supposed to be pointing to is missing from the blobs folder.\r\n\r\nMore specifically, I get this error:\r\n```\r\nOSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--stabilityai--stable-diffusion-2-base\\\\snapshots\\\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\\\text_encoder\\\\pytorch_model.bin'\r\n```\r\n\r\nAnd then looking around on disk I see this:\r\n\r\n```\r\nschmavery ~/git/sd-test $ ls -lh C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--stabilityai--stable-diffusion-2-base\\\\snapshots\\\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\\\text_encoder\\\\pytorch_model.bin\r\nlrwxrwxrwx 1 schmavery 79 Mar 20 10:05 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' -> ../../../blobs/f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730\r\n\r\nschmavery ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs $ ls\r\n11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 469be27c5c010538f845f518c4f5e8574c78f7c8 9e3e87514708d0a2b44abfa0096ec14802862f5d ae0c5be6f35217e51c4c000fd325d8de0294e99c\r\n14bcdff46ade71e94221b696cefbad2382223370 4a37db2129e08cb00670e652398a8f3960d97d0e 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d e966b0b8955e8c66a0717acb2ce5041274d7c60a\r\n34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 76e821f1b6f0a9709293c3b6b51ed90980b3166b a08e9e082e6ab9044bdd2926092ce2e4f33d2272\r\n```\r\n\r\nIt seems the blob starting with `f2a06cf32c` is nowhere to be found. If you think this is an unrelated problem, I'm happy to open another issue (on the huggingface_hub repo, I'd imagine)", "Hi @Schmavery, maybe let's continue here for now. Could you delete entirely the `~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base` folder and try again? 
I tested your script in a colab notebook using the latest version and it worked for me: https://colab.research.google.com/drive/1xYy-3Q5hXptZ4TKef8kP7EeeSYiUISpa?usp=sharing", "@Wauplin with huggingface-hub==0.13.3 installed, I deleted the whole ~/.cache/huggingface folder and ran the script in the initial post and got this as the full output:\r\n\r\n```\r\nschmavery ~/git/sd-test $ python repro.py\r\nA matching Triton is not available, some optimizations will not be enabled.\r\nError caught was: No module named 'triton'\r\nDownloading (…)p16/model_index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 511/511 [00:00<00:00, 170kB/s]\r\nDownloading (…)okenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 820/820 [00:00<00:00, 51.4kB/s] \r\nDownloading (…)cial_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 460/460 [00:00<00:00, 51.1kB/s] \r\nDownloading (…)cheduler_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 340/340 [00:00<00:00, 113kB/s] \r\nDownloading (…)edf/unet/config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 905/905 [00:00<00:00, 82.3kB/s] \r\nDownloading (…)_encoder/config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 629/629 [00:00<00:00, 57.2kB/s] \r\nDownloading (…)tokenizer/merges.txt: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 5.04MB/s] \r\nDownloading (…)tokenizer/vocab.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 6.42MB/s] \r\nDownloading (…)bedf/vae/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 607/607 [00:00<00:00, 202kB/s] \r\nDownloading (…)on_pytorch_model.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 167M/167M [00:05<00:00, 32.0MB/s] \r\nDownloading pytorch_model.bin: 
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 681M/681M [00:17<00:00, 39.8MB/s] \r\nDownloading (…)on_pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.73G/1.73G [00:43<00:00, 39.8MB/s]\r\nFetching 12 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:44<00:00, 3.73s/it] \r\nTraceback (most recent call last):n: 40%|██████████████████████████████████████████████████████████████████████████████████▎ | 692M/1.73G [00:16<00:27, 37.2MB/s]\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\transformers\\modeling_utils.py\", line 415, in load_state_dict███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊| 1.73G/1.73G [00:43<00:00, 59.0MB/s]\r\n return torch.load(checkpoint_file, map_location=\"cpu\")\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 771, in load\r\n with _open_file_like(f, 'rb') as opened_file:\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 270, in _open_file_like\r\n return _open_file(name_or_buffer, mode)\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 251, in __init__\r\n super(_open_file, self).__init__(open(name, mode))\r\nOSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--stabilityai--stable-diffusion-2-base\\\\snapshots\\\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\\\text_encoder\\\\pytorch_model.bin'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\repro.py\", line 5, in <module>\r\n pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision=\"fp16\")\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\diffusers\\pipelines\\pipeline_utils.py\", line 944, in from_pretrained\r\n loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2429, in from_pretrained\r\n state_dict = load_state_dict(resolved_archive_file)\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\transformers\\modeling_utils.py\", line 418, in load_state_dict\r\n with open(checkpoint_file) as f:\r\nOSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--stabilityai--stable-diffusion-2-base\\\\snapshots\\\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\\\text_encoder\\\\pytorch_model.bin'\r\n```\r\n\r\nThe snapshot file still points to a blob starting with `f2a06cf32cf585d03b` which doesn't exist in the blobs folder.", "@Schmavery sorry that you are experiencing this. I'm making more tests on Windows on my side. Could you tell if you enabled developer mode on your laptop? 
And can you run `huggingface-cli env` and copy-paste this output here please? Just in case it gives me some hint on what is happening.", "@Wauplin No problem, thanks for the help! The crazy thing is that this seemed to all be working last week (when using my realpath patch), but when I ran it this morning after the weekend, I had this issue, even after a clean reinstall of all the packages. I thought maybe there could have been some problematic update to the model itself but if it's running fine for you then I guess that's not it.\r\n\r\n Looks like developer mode is turned on\r\n![image](https://user-images.githubusercontent.com/2154522/226381945-0a9a0e55-4143-4b30-932b-ec92607c3fb7.png)\r\n\r\nHere's the output:\r\n```\r\nschmavery ~/git/sd-test $ huggingface-cli env\r\n\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- huggingface_hub version: 0.13.3\r\n- Platform: Windows-10-10.0.19044-SP0\r\n- Python version: 3.9.12\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: C:\\Users\\schmavery\\.cache\\huggingface\\token\r\n- Has saved token ?: False\r\n- Configured git credential helpers: manager-core\r\n- FastAI: N/A\r\n- Tensorflow: N/A\r\n- Torch: 1.13.1+cu117\r\n- Jinja2: N/A\r\n- Graphviz: N/A\r\n- Pydot: N/A\r\n- Pillow: 9.4.0\r\n- hf_transfer: N/A\r\n- ENDPOINT: https://huggingface.co\r\n- HUGGINGFACE_HUB_CACHE: C:\\Users\\schmavery\\.cache\\huggingface\\hub\r\n- HUGGINGFACE_ASSETS_CACHE: C:\\Users\\schmavery\\.cache\\huggingface\\assets\r\n- HF_TOKEN_PATH: C:\\Users\\schmavery\\.cache\\huggingface\\token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\r\n```", "Thanks for the information @Schmavery . Unfortunately I'm still not able to reproduce your issue. It's good that you have developer mode activated btw (otherwise you wouldn't have symlinks at all and files would be duplicated in the cache).\r\n\r\nCan we try something else?:\r\n1. Delete the `'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\'` folder (or `'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--stabilityai--stable-diffusion-2-base'` if you want to keep other ones)\r\n2. Install `huggingface_hub==0.12.1`. We had some issues with the 0.13 release and I'd like to be sure if the bug you are facing existed before or not.\r\n3. 
Rerun the script with debug logging enabled i.e.\r\n\r\n```py\r\n# Add those 2 lines at the beginning of your script:\r\nfrom huggingface_hub.utils.logging import set_verbosity_debug\r\nset_verbosity_debug()\r\n\r\n# Same script as before\r\nfrom diffusers import DiffusionPipeline, DPMSolverMultistepScheduler\r\nimport torch\r\nrepo_id = \"stabilityai/stable-diffusion-2-base\"\r\npipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision=\"fp16\")\r\npipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)\r\npipe = pipe.to(\"cuda\")\r\nprompt = \"High quality photo of an astronaut riding a horse in space\"\r\nimage = pipe(prompt, num_inference_steps=25).images[0]\r\nimage.save(\"astronaut.png\")\r\n```", "@Wauplin FWIW I just tried it with `runwayml/stable-diffusion-v1-5` to see if a different model might work, but got a very similar problem:\r\n\r\n```\r\nschmavery ~/git/sd-test $ python repro.py \r\nA matching Triton is not available, some optimizations will not be enabled.\r\nError caught was: No module named 'triton'\r\nDownloading (…)ain/model_index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 543/543 [00:00<00:00, 136kB/s]\r\nDownloading (…)rocessor_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 342/342 [00:00<00:00, 85.5kB/s]\r\nDownloading (…)cheduler_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 308/308 [00:00<00:00, 68.8kB/s]\r\nDownloading (…)_checker/config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.72k/4.72k [00:00<00:00, 1.33MB/s]\r\nDownloading (…)cial_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 472/472 [00:00<00:00, 157kB/s]\r\nDownloading (…)_encoder/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 617/617 [00:00<00:00, 154kB/s]\r\nDownloading (…)tokenizer/merges.txt: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 4.14MB/s]\r\nDownloading (…)okenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 806/806 [00:00<00:00, 403kB/s]\r\nDownloading (…)819/unet/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 743/743 [00:00<00:00, 248kB/s]\r\nDownloading (…)d819/vae/config.json: 
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 547/547 [00:00<00:00, 182kB/s]\r\nDownloading (…)tokenizer/vocab.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 3.91MB/s]\r\nDownloading (…)on_pytorch_model.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 335M/335M [00:23<00:00, 14.4MB/s]\r\nDownloading pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 492M/492M [00:32<00:00, 15.2MB/s]\r\nDownloading pytorch_model.bin: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.22G/1.22G [00:58<00:00, 20.6MB/s]\r\nDownloading (…)on_pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.44G/3.44G [01:36<00:00, 35.7MB/s]\r\nFetching 15 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [01:36<00:00, 6.46s/it]\r\nTraceback (most recent call last):%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.22G/1.22G [00:58<00:00, 19.2MB/s]\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\diffusers\\models\\modeling_utils.py\", line 101, in load_state_dict\r\n return torch.load(checkpoint_file, map_location=\"cpu\")\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 771, in load\r\n with _open_file_like(f, 'rb') as opened_file:\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 270, in _open_file_like\r\n return _open_file(name_or_buffer, mode)\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 251, in __init__\r\n super(_open_file, self).__init__(open(name, mode))\r\nOSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--runwayml--stable-diffusion-v1-5\\\\snapshots\\\\39593d5650112b4cc580433f6b0435385882d819\\\\vae\\\\diffusion_pytorch_model.bin'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\repro.py\", line 4, in <module>\r\n pipe = StableDiffusionPipeline.from_pretrained(\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\diffusers\\pipelines\\pipeline_utils.py\", line 944, in from_pretrained\r\n loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)\r\n File 
\"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\diffusers\\models\\modeling_utils.py\", line 563, in from_pretrained\r\n state_dict = load_state_dict(model_file, variant=variant)\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\diffusers\\models\\modeling_utils.py\", line 106, in load_state_dict\r\n with open(checkpoint_file) as f:\r\nOSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--runwayml--stable-diffusion-v1-5\\\\snapshots\\\\39593d5650112b4cc580433f6b0435385882d819\\\\vae\\\\diffusion_pytorch_model.bin'\r\n(venv) (base) 11:54:56 schmavery@DESKTOP-ML11APV:~/git/sd-test $ ls -lh C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--runwayml--stable-diffusion-v1-5\\\\snapshots\\\\39593d5650112b4cc580433f6b0435385882d819\\\\vae\\\\diffusion_pytorch_model.bin\r\nlrwxrwxrwx 1 schmavery 79 Mar 20 11:53 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--runwayml--stable-diffusion-v1-5\\snapshots\\39593d5650112b4cc580433f6b0435385882d819\\vae\\diffusion_pytorch_model.bin' -> ../../../blobs/1b134cded8eb78b184aefb8805b6b572f36fa77b255c483665dda931fa0130c5\r\n\r\nschmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/\r\nmodels--runwayml--stable-diffusion-v1-5/ models--stabilityai--stable-diffusion-2-base/ version.txt version_diffusers_cache.txt\r\n\r\nschmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/blobs/\r\n193490b58ef62739077262e833bf091c66c29488058681ac25cf7df3d8190974 4d3e873ab5086ad989f407abd50fdce66db8d657 5dbd88952e7e521aa665e5052e6db7def3641d03 82d05b0e688d7ea94675678646c427907419346e\r\n1a02ee8abc93e840ffbcb2d68b66ccbcb74b3ab3 5294955ff7801083f720b34b55d0f1f51313c5c5 6866dceb3a870b077eb970ecf702ce4e1a83b934 c7da0e21ba7ea50637bee26e81c220844defdf01aafca02b2c42ecdadb813de4\r\n2c2130b544c0c5a72d5d00da071ba130a9800fb2 55d78924fee13e4220f24320127c5f16284e13b9 76e821f1b6f0a9709293c3b6b51ed90980b3166b\r\n469be27c5c010538f845f518c4f5e8574c78f7c8 5ba7bf706515bc60487ad0e1816b4929b82542d6 770a47a9ffdcfda0b05506a7888ed714d06131d60267e6cf52765d61cf59fd67\r\n```\r\n\r\nI wonder if it's possible that the hash used in the symlink could be wrong under some circumstances.", "@Schmavery not sure you saw it but could you try my suggestion from https://github.com/huggingface/transformers/pull/22228#issuecomment-1476499630? Thanks in advance\r\n", "Oops, missed your message, running that now.\r\nI assume you meant `huggingface_hub==0.12.1` rather than `huggingface==0.12.1` but lmk if that's wrong (the latter gave me an error when trying to pip install)", "Ah, yes of course. 
`huggingface_hub==0.12.1` is the one I meant", "@Wauplin \r\n```\r\nschmavery ~/git/sd-test $ rm -rf ~/.cache/huggingface/hub/\r\n\r\nschmavery ~/git/sd-test $ huggingface-cli env\r\n\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- huggingface_hub version: 0.12.1\r\n- Platform: Windows-10-10.0.19044-SP0\r\n- Python version: 3.9.12\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: C:\\Users\\schmavery\\.cache\\huggingface\\token\r\n- Has saved token ?: False\r\n- Configured git credential helpers: manager-core\r\n- FastAI: N/A\r\n- Tensorflow: N/A\r\n- Torch: 1.13.1+cu117\r\n- Jinja2: N/A\r\n- Graphviz: N/A\r\n- Pydot: N/A\r\n- Pillow: 9.4.0\r\n- hf_transfer: N/A\r\n- ENDPOINT: https://huggingface.co\r\n- HUGGINGFACE_HUB_CACHE: C:\\Users\\schmavery\\.cache\\huggingface\\hub\r\n- HUGGINGFACE_ASSETS_CACHE: C:\\Users\\schmavery\\.cache\\huggingface\\assets\r\n- HF_HUB_OFFLINE: False\r\n- HF_TOKEN_PATH: C:\\Users\\schmavery\\.cache\\huggingface\\token\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\r\n\r\nschmavery ~/git/sd-test $ python repro.py\r\nA matching Triton is not available, some optimizations will not be enabled.\r\nError caught was: No module named 'triton'\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/model_index.json to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmppiuhc6qi\r\nDownloading (…)p16/model_index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 511/511 [00:00<00:00, 170kB/s]\r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/model_index.json in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d\r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\model_index.json\r\nFetching 12 files: 0%| | 0/12 [00:00<?, ?it/s]downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/scheduler/scheduler_config.json to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmp376tmhv9\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/special_tokens_map.json to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmp9onvvyfj\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/config.json to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmp88p6fmgk\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmpbiq7cjj2\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/vocab.json to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmp7skqyuqq\r\ndownloading 
https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/config.json to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmpvrthjk23\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/merges.txt to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmpyi6kdwbo\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/tokenizer_config.json to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmpy9q44g25\r\nDownloading (…)cial_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 460/460 [00:00<00:00, 115kB/s] \r\nDownloading (…)\"pytorch_model.bin\";: 0%| | 0.00/681M [00:00<?, ?B/s] \r\nDownloading (…)cial_tokens_map.json: 0%| | 0.00/460 [00:00<?, ?B/s] \r\nDownloading (…)cheduler_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 340/340 [00:00<00:00, 68.0kB/s]bDownloading (…)_encoder/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 629/629 [00:00<00:00, 210kB/s] \r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/scheduler/scheduler_config.json in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\4a37db2129e08cb00670e652398a8f3960d97d0eson: 0%| | 0.00/629 [00:00<?, ?B/s] \r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/config.json in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\a08e9e082e6ab9044bdd2926092ce2e4f33d2272\r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\ae0c5be6f35217e51c4c000fd325d8de0294e99c from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\tokenizer\\special_tokens_map.json\r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\4a37db2129e08cb00670e652398a8f3960d97d0e from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\scheduler\\scheduler_config.json\r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\a08e9e082e6ab9044bdd2926092ce2e4f33d2272 from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\config.json\r\nDownloading (…)edf/unet/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 905/905 [00:00<00:00, 226kB/s] \r\n\r\nDownloading (…)edf/unet/config.json: 0%| | 
0.00/905 [00:00<?, ?B/s] \r\nDownloading (…)okenizer_config.json: 0%| | 0.00/820 [00:00<?, ?B/s]sDownloading (…)okenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 820/820 [00:00<00:00, 164kB/s]0storing https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/tokenizer_config.json in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\e966b0b8955e8c66a0717acb2ce5041274d7c60a\r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\9e3e87514708d0a2b44abfa0096ec14802862f5d from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\unet\\config.json\r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\e966b0b8955e8c66a0717acb2ce5041274d7c60a from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\tokenizer\\tokenizer_config.json\r\n downloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/diffusion_pytorch_model.bin to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmpzk2qle5p\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/diffusion_pytorch_model.bin to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmpj5ly573o | 0.00/525k [00:00<?, ?B/s] \r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/config.json to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmp43prkgdv | 0.00/1.06M [00:00<?, ?B/s] \r\nDownloading (…)tokenizer/merges.txt: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 4.64MB/s] \r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/merges.txt in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\76e821f1b6f0a9709293c3b6b51ed90980b3166bzer/merges.txt: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 4.69MB/s] \r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\76e821f1b6f0a9709293c3b6b51ed90980b3166b from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\tokenizer\\merges.txt\r\nDownloading (…)tokenizer/vocab.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 6.97MB/s] \r\nstoring 
https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/tokenizer/vocab.json in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\469be27c5c010538f845f518c4f5e8574c78f7c8\r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\469be27c5c010538f845f518c4f5e8574c78f7c8 from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\tokenizer\\vocab.json\r\nDownloading (…)bedf/vae/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 607/607 [00:00<00:00, 152kB/s] \r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/config.json in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\14bcdff46ade71e94221b696cefbad2382223370edf/vae/config.json: 0%| | 0.00/607 [00:00<?, ?B/s] \r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\14bcdff46ade71e94221b696cefbad2382223370 from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\vae\\config.json\r\nDownloading (…)\"pytorch_model.bin\";: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 681M/681M [00:07<00:00, 92.9MB/s] \r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 | 73.4M/1.73G [00:06<02:07, 13.0MB/s] \r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin\r\nDownloading (…)_pytorch_model.bin\";: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 167M/167M [00:10<00:00, 15.8MB/s] \r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/vae/diffusion_pytorch_model.bin in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030█████▊ | 189M/1.73G [00:10<00:44, 34.3MB/s] \r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 from 
C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\vae\\diffusion_pytorch_model.bin\r\nDownloading (…)_pytorch_model.bin\";: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.73G/1.73G [00:51<00:00, 33.4MB/s]\r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/unet/diffusion_pytorch_model.bin in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊| 1.73G/1.73G [00:51<00:00, 52.2MB/s] \r\ncreating pointer to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\unet\\diffusion_pytorch_model.bin\r\nFetching 12 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:52<00:00, 4.38s/it] \r\nTraceback (most recent call last):\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\transformers\\modeling_utils.py\", line 415, in load_state_dict\r\n return torch.load(checkpoint_file, map_location=\"cpu\")\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 771, in load\r\n with _open_file_like(f, 'rb') as opened_file:\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 270, in _open_file_like\r\n return _open_file(name_or_buffer, mode)\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\torch\\serialization.py\", line 251, in __init__\r\n super(_open_file, self).__init__(open(name, mode))\r\nOSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--stabilityai--stable-diffusion-2-base\\\\snapshots\\\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\\\text_encoder\\\\pytorch_model.bin'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\repro.py\", line 24, in <module>\r\n pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision=\"fp16\")\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\diffusers\\pipelines\\pipeline_utils.py\", line 944, in from_pretrained\r\n loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2429, in from_pretrained\r\n state_dict = load_state_dict(resolved_archive_file)\r\n File \"C:\\Users\\schmavery\\git\\sd-test\\venv\\lib\\site-packages\\transformers\\modeling_utils.py\", line 418, in load_state_dict\r\n with open(checkpoint_file) as f:\r\nOSError: [Errno 22] Invalid argument: 
'C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--stabilityai--stable-diffusion-2-base\\\\snapshots\\\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\\\text_encoder\\\\pytorch_model.bin'\r\n\r\nschmavery ~/git/sd-test $ ls -lh C:\\\\Users\\\\schmavery\\\\.cache\\\\huggingface\\\\hub\\\\models--stabilityai--stable-diffusion-2-base\\\\snapshots\\\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\\\text_encoder\\\\pytorch_model.bin\r\nlrwxrwxrwx 1 schmavery 79 Mar 20 12:14 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' -> ../../../blobs/f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730\r\n\r\nschmavery ~/git/sd-test $ ls ~/.cache/huggingface/hub/models--stabilityai--stable-diffusion-2-base/blobs/\r\n11bc15ceb385823b4adb68bd5bdd7568d0c706c3de5ea9ebcb0b807092fc9030 469be27c5c010538f845f518c4f5e8574c78f7c8 9e3e87514708d0a2b44abfa0096ec14802862f5d ae0c5be6f35217e51c4c000fd325d8de0294e99c\r\n14bcdff46ade71e94221b696cefbad2382223370 4a37db2129e08cb00670e652398a8f3960d97d0e 9ef36adb76dff35bf9dc2fc690ce4ae3bb72360d e966b0b8955e8c66a0717acb2ce5041274d7c60a\r\n34009b21392113e829e498653f739f1ec81244b4a2eaf56f111b0805c9617650 76e821f1b6f0a9709293c3b6b51ed90980b3166b a08e9e082e6ab9044bdd2926092ce2e4f33d2272\r\n```", "@Wauplin Ok, doing some more investigation. When watching my filesystem during the install, I see the offending f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 in the blobs folder after it gets to the \r\n\r\n```\r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/1cb61502fc8b634cdb04e7cd69e06051a728bedf/text_encoder/pytorch_model.bin in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 \r\n```\r\n\r\nBut then at some point it gets deleted/disappears. Any idea what might be triggering that?\r\n\r\nAt this point I'm just running \r\n```python\r\nfrom huggingface_hub.utils.logging import set_verbosity_debug\r\nset_verbosity_debug()\r\n\r\nfrom diffusers import DiffusionPipeline, DPMSolverMultistepScheduler\r\nimport torch\r\nrepo_id = \"stabilityai/stable-diffusion-2-base\"\r\npipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision=\"fp16\")\r\n```", "@Schmavery thanks for trying the commands. The fact that it doesn't work on huggingface v0.12.1 makes me think that it's an issue specific to your setup, not something that was introduced recently. 
It's doesn't mean we should not find the root cause.\r\n\r\nMaybe let's try to keep the test as minimal as possible:\r\n\r\n```py\r\n# tested with huggingface_hub==0.12.1\r\nfrom huggingface_hub.utils.logging import set_verbosity_debug\r\nfrom huggingface_hub import hf_hub_download\r\nfrom huggingface_hub.constants import HUGGINGFACE_HUB_CACHE\r\nfrom pathlib import Path\r\nimport shutil\r\n\r\nprint(\"Deleting\", HUGGINGFACE_HUB_CACHE)\r\nshutil.rmtree(HUGGINGFACE_HUB_CACHE)\r\n\r\nset_verbosity_debug()\r\n\r\npath = Path(hf_hub_download(repo_id=\"stabilityai/stable-diffusion-2-base\", filename=\"text_encoder/pytorch_model.bin\", revision=\"fp16\"))\r\n\r\nprint(\"hf_hub_download\", path)\r\nprint(\"is_file\", path.is_file())\r\nprint(\"is_symlink\", path.is_symlink())\r\nprint(\"resolved\", path.resolve())\r\nprint(\"resolved size\", path.resolve().stat().st_size)\r\n\r\n```\r\n\r\nshould output\r\n```\r\nDeleting C:\\Users\\Administrator\\.cache\\huggingface\\hub\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin to C:\\Users\\Administrator\\.cache\\huggingface\\hub\\tmp9rxs8yls\r\nDownloading (…)\"pytorch_model.bin\";: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 681M/681M [00:05<00:00, 115MB/s]\r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin in cache at C:\\Users\\Administrator\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730\r\ncreating pointer to C:\\Users\\Administrator\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\\Users\\Administrator\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin \r\nhf_hub_download C:\\Users\\Administrator\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin\r\nis_file True\r\nis_symlink True\r\nresolved C:\\Users\\Administrator\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730\r\nresolved size 680904225\r\n```\r\n\r\nCan you confirm that or is the blob file missing already ?\r\n", "@Wauplin ok I can confirm that much works!\r\n```\r\nschmavery ~/git/sd-test $ python repro2.py\r\nDeleting C:\\Users\\schmavery\\.cache\\huggingface\\hub\r\ndownloading https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin to C:\\Users\\schmavery\\.cache\\huggingface\\hub\\tmp8kmmcj6w\r\nDownloading (…)\"pytorch_model.bin\";: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 681M/681M [00:05<00:00, 114MB/s]\r\nstoring https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/fp16/text_encoder/pytorch_model.bin in cache at C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730\r\ncreating pointer to 
C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730 from C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin\r\nhf_hub_download C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin\r\nis_file True\r\nis_symlink True\r\nresolved C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\blobs\\f2a06cf32cf585d03b55fef302142a5321b761ec440113925f64f4ceaffc7730\r\nresolved size 680904225\r\n```", "Ok, that's already a good news. Could you try to load this path from pytorch? (adding the following line to [the previous script](https://github.com/huggingface/transformers/pull/22228#issuecomment-1476649854)).\r\n\r\n```py\r\nimport torch\r\n\r\n# try to load from symlink directly\r\nstate_dict = torch.load(path)\r\n\r\n# or try to load from resolved symlink\r\nstate_dict = torch.load(path.resolve())\r\n```\r\n\r\nand if that doesn't work, at least try to read the binary file:\r\n\r\n```py\r\nwith open(path, \"rb\") as f:\r\n print(\"content length\", len(f.read()), \"(read from file)\")\r\n\r\n# or \r\nwith open(path.resolve(), \"rb\") as f:\r\n print(\"content length\", len(f.read()), \"(read from file)\")\r\n```", "Ok. I think I've finally figured out what's going on.\r\n@Wauplin Thank you so much for your help in debugging\r\n\r\n![image](https://user-images.githubusercontent.com/2154522/226428085-e0203aef-b868-4009-8deb-dc83ed1c1677.png)\r\n\r\nIt looks like somehow the model is triggering some trojan detector in Windows Defender. Looks like a couple other people have run into the issue too:\r\nhttps://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/8584\r\n\r\nProbably just a false positive but I might try and figure out how to use the safetensor version of the `stabilityai/stable-diffusion-2-base` just in case.\r\n", "Thanks again for all the help -- seems like this PR is probably not needed now that huggingface_hub is using relative symlinks.", "@Schmavery Very glad that you finally figured out what's going on! Hope this will help other users switching to safetensors as well :+1: :) " ]
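The thread above resolves as a Windows Defender false positive quarantining the downloaded `.bin` blob, with safetensors weights mentioned as the way out. A minimal sketch of that workaround, assuming the repo also publishes a `text_encoder/model.safetensors` file (the filename is an assumption, not confirmed in the thread):

```python
# Sketch: fetch the safetensors variant of the text encoder and load it without
# torch.load/pickle, the format that tripped the antivirus heuristic here.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-2-base",
    filename="text_encoder/model.safetensors",  # assumed filename; check the repo
)
state_dict = load_file(path)  # dict of tensors, loaded on CPU by default
print(f"loaded {len(state_dict)} tensors")
```

If safetensors weights exist in the repo, recent diffusers releases with `safetensors` installed will generally prefer them in `from_pretrained` as well, sidestepping the quarantined `.bin` entirely.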
1,679
1,679
1,679
NONE
null
# What does this PR do? I ran into an issue trying to run this on Windows 10 (via Git Bash, in a python 3.9.12 Conda environment, deps installed via pip). My requirements.txt included below for completeness. I tried running an example of SD 2 from the docs ``` from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler import torch repo_id = "stabilityai/stable-diffusion-2-base" pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "High quality photo of an astronaut riding a horse in space" image = pipe(prompt, num_inference_steps=25).images[0] image.save("astronaut.png") ``` And kept getting output like this: ``` schmavery ~/git/sd-test $ python test.py A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton' Downloading pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 681M/681M [00:16<00:00, 41.8MB/s] Fetching 12 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:16<00:00, 1.38s/it] Traceback (most recent call last): File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 417, in load_state_dict return torch.load(checkpoint_file, map_location="cpu") File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 771, in load with _open_file_like(f, 'rb') as opened_file: File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 270, in _open_file_like return _open_file(name_or_buffer, mode) File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\torch\serialization.py", line 251, in __init__ super(_open_file, self).__init__(open(name, mode)) FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\schmavery\git\sd-test\test.py", line 5, in <module> pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 944, in from_pretrained loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs) File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 2431, in from_pretrained state_dict = load_state_dict(resolved_archive_file) File "C:\Users\schmavery\scoop\apps\miniconda3\current\lib\site-packages\transformers\modeling_utils.py", line 420, in load_state_dict with open(checkpoint_file) as f: FileNotFoundError: [Errno 2] No such file or directory: 
'C:\\Users\\schmavery\\.cache\\huggingface\\hub\\models--stabilityai--stable-diffusion-2-base\\snapshots\\1cb61502fc8b634cdb04e7cd69e06051a728bedf\\text_encoder\\pytorch_model.bin' ``` I did some poking around and realized that `C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\snapshots\1cb61502fc8b634cdb04e7cd69e06051a728bedf\text_encoder\pytorch_model.bin` is a symlink to another file in `C:\Users\schmavery\.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-base\blobs`. Some searching online revealed some issues with python loading files via symlink in Windows, mostly due to Window's funny handling of symlinks. I tried adding a call to `os.path.realpath` to resolve the path before opening the file, and that solved the problem! I thought I'd post this here in case it helps anyone. requirements.txt: ``` accelerate==0.17.1 brotlipy==0.7.0 certifi @ file:///C:/b/abs_85o_6fm0se/croot/certifi_1671487778835/work/certifi cffi @ file:///C:/b/abs_49n3v2hyhr/croot/cffi_1670423218144/work charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work colorama @ file:///C:/b/abs_a9ozq0l032/croot/colorama_1672387194846/work conda==23.1.0 conda-package-handling @ file:///C:/b/abs_fcga8w0uem/croot/conda-package-handling_1672865024290/work conda_package_streaming @ file:///C:/b/abs_0e5n5hdal3/croot/conda-package-streaming_1670508162902/work cryptography @ file:///C:/b/abs_8ecplyc3n2/croot/cryptography_1677533105000/work diffusers==0.14.0 filelock==3.10.0 huggingface-hub==0.13.2 idna @ file:///C:/b/abs_bdhbebrioa/croot/idna_1666125572046/work importlib-metadata==6.0.0 Jinja2==3.1.2 MarkupSafe==2.1.2 menuinst @ file:///C:/ci/menuinst_1631733438520/work mpmath==1.3.0 mypy-extensions==1.0.0 networkx==3.0 numpy==1.24.2 packaging==23.0 Pillow==9.4.0 pluggy @ file:///C:/ci/pluggy_1648024580010/work psutil==5.9.4 pycosat @ file:///C:/b/abs_4b1rrw8pn9/croot/pycosat_1666807711599/work pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work pyOpenSSL @ file:///C:/b/abs_552w85x1jz/croot/pyopenssl_1677607703691/work pyre-extensions==0.0.23 PySocks @ file:///C:/ci/pysocks_1605307512533/work pywin32==305.1 PyYAML==6.0 regex==2022.10.31 requests @ file:///C:/ci/requests_1657735342357/work ruamel.yaml @ file:///C:/b/abs_30ee5qbthd/croot/ruamel.yaml_1666304562000/work ruamel.yaml.clib @ file:///C:/b/abs_aarblxbilo/croot/ruamel.yaml.clib_1666302270884/work sympy==1.11.1 tokenizers==0.13.2 toolz @ file:///C:/b/abs_cfvk6rc40d/croot/toolz_1667464080130/work torch==1.13.1+cu117 torchaudio==0.13.1+cu117 torchvision==0.14.1+cu117 tqdm @ file:///C:/b/abs_0axbz66qik/croots/recipe/tqdm_1664392691071/work transformers==4.27.1 typing-inspect==0.8.0 typing_extensions==4.5.0 urllib3 @ file:///C:/b/abs_9bcwxczrvm/croot/urllib3_1673575521331/work win-inet-pton @ file:///C:/ci/win_inet_pton_1605306162074/work wincertstore==0.2 xformers==0.0.16 zipp==3.15.0 zstandard==0.19.0 ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ^^ This is such a small change that it shouldn't affect any docs/tests I think ## Who can review? Looks like @sgugger and @stas00 were the last to touch this area in the file, though it wasn't particularly recently. I wonder if some change was made in how the models are cached that could have caused this. 🤷 My original local fix just changed the torch load to `torch.load(os.path.realpath(checkpoint_file), map_location="cpu")`, but this seems like it might catch a couple more cases. I considered just overriding the `checkpoint_file` variable to point to the realpath but I thought that might have made the error messages less clear.
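Since this PR was closed unmerged, the snippet below is only a sketch of the change described above — resolving the cache symlink before handing the path to `torch.load` — not the actual `transformers` implementation; the helper name is invented for illustration:

```python
import os

import torch


def load_checkpoint_resolving_symlinks(checkpoint_file):
    # Hypothetical helper mirroring the proposed fix: os.path.realpath follows
    # the symlink in the Hugging Face cache to the underlying blob, so setups
    # that mishandle symlinked paths open the real file instead.
    return torch.load(os.path.realpath(checkpoint_file), map_location="cpu")
```

As the closing comment notes, the change became unnecessary once `huggingface_hub` switched to relative symlinks (and the actual failure in this thread turned out to be an antivirus quarantine).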
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22228/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22228", "html_url": "https://github.com/huggingface/transformers/pull/22228", "diff_url": "https://github.com/huggingface/transformers/pull/22228.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22228.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22227/comments
https://api.github.com/repos/huggingface/transformers/issues/22227/events
https://github.com/huggingface/transformers/pull/22227
1,629,049,850
PR_kwDOCUB6oc5MSu9D
22,227
Use `dash==2.8.1` for now for daily CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? Use `dash==2.8.1` for now for daily CI. Currently all daily CI jobs fail; see, for example, [this job run](https://github.com/huggingface/transformers/actions/runs/4443525103/jobs/7800913606). Issue reported [here](https://github.com/plotly/dash/issues/2460).
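The PR itself only pins the version; as a minimal sketch, a CI step could additionally guard against the pin drifting (this check is an assumption on my part, not part of the change):

```python
# Hypothetical guard for the daily CI image: fail fast if the temporary
# dash pin has drifted before the upstream regression is fixed.
from importlib.metadata import version

installed = version("dash")
assert installed == "2.8.1", f"expected dash==2.8.1, got {installed}"
```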
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22227/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22227", "html_url": "https://github.com/huggingface/transformers/pull/22227", "diff_url": "https://github.com/huggingface/transformers/pull/22227.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22227.patch", "merged_at": 1679056035000 }
https://api.github.com/repos/huggingface/transformers/issues/22226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22226/comments
https://api.github.com/repos/huggingface/transformers/issues/22226/events
https://github.com/huggingface/transformers/pull/22226
1,629,030,772
PR_kwDOCUB6oc5MSq1c
22,226
fix(docs): fix task guide links in model docs
{ "login": "Seb0", "id": 790702, "node_id": "MDQ6VXNlcjc5MDcwMg==", "avatar_url": "https://avatars.githubusercontent.com/u/790702?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Seb0", "html_url": "https://github.com/Seb0", "followers_url": "https://api.github.com/users/Seb0/followers", "following_url": "https://api.github.com/users/Seb0/following{/other_user}", "gists_url": "https://api.github.com/users/Seb0/gists{/gist_id}", "starred_url": "https://api.github.com/users/Seb0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Seb0/subscriptions", "organizations_url": "https://api.github.com/users/Seb0/orgs", "repos_url": "https://api.github.com/users/Seb0/repos", "events_url": "https://api.github.com/users/Seb0/events{/privacy}", "received_events_url": "https://api.github.com/users/Seb0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Fixes broken links for task guides in model docs Fixes # (issue) see above ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22226/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22226", "html_url": "https://github.com/huggingface/transformers/pull/22226", "diff_url": "https://github.com/huggingface/transformers/pull/22226.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22226.patch", "merged_at": 1679063418000 }
https://api.github.com/repos/huggingface/transformers/issues/22225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22225/comments
https://api.github.com/repos/huggingface/transformers/issues/22225/events
https://github.com/huggingface/transformers/issues/22225
1,628,991,798
I_kwDOCUB6oc5hGHE2
22,225
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 8192, 1]], which is output 0 of AsStridedBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later.
{ "login": "Tanya-11", "id": 90728105, "node_id": "MDQ6VXNlcjkwNzI4MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/90728105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tanya-11", "html_url": "https://github.com/Tanya-11", "followers_url": "https://api.github.com/users/Tanya-11/followers", "following_url": "https://api.github.com/users/Tanya-11/following{/other_user}", "gists_url": "https://api.github.com/users/Tanya-11/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tanya-11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tanya-11/subscriptions", "organizations_url": "https://api.github.com/users/Tanya-11/orgs", "repos_url": "https://api.github.com/users/Tanya-11/repos", "events_url": "https://api.github.com/users/Tanya-11/events{/privacy}", "received_events_url": "https://api.github.com/users/Tanya-11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Tanya-11, thanks for raising this issue! \r\n\r\nIt's not possible to access the code in the link shared as it is private. Could you share a minimal code snippet to reproduce the error? ", "> Hi @Tanya-11, thanks for raising this issue!\r\n> \r\n> It's not possible to access the code in the link shared as it is private. Could you share a minimal code snippet to reproduce the error?\r\n\r\nHi @amyeroberts \r\nPls find the link to my public[ github repo](https://github.com/Tanya-11/experiment/blob/main/lonformer_experiment.ipynb).\r\nThanks! ", "Hi @Tanya-11, thanks for sharing the link. \r\n\r\nI am able to run the example code if I set `gradient_checkpointing=False`. There have been recent updates to the LED model, including this one which resolves an [issue with gradient checkpointing](https://github.com/huggingface/transformers/pull/21840). Can you retry with the most recent release of transformers? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### System Info transformers version: 4.2.0 Huggingface hub: 0.13.2 Python: Python 3.9.16 ![image](https://user-images.githubusercontent.com/90728105/225866607-921911db-b0b2-4a4e-8278-6a8326c36241.png) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I do trainer.train() to fine-tune a pretrained Longformer (LED) for text summarization, I get the following error: > /usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py:197: UserWarning: Error detected in BmmBackward0. Traceback of forward call that caused the error: > File "/usr/local/lib/python3.9/dist-packages/torch/autograd/function.py", line 267, in apply > return user_fn(self, *args) > File "/usr/local/lib/python3.9/dist-packages/torch/utils/checkpoint.py", line 141, in backward > outputs = ctx.run_function(*detached_inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 1701, in custom_forward > return module(*inputs, is_global_attn, output_attentions) > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 873, in forward > attn_outputs = self.self_attn( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 695, in forward > self_outputs = self.longformer_self_attn( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 268, in forward > attn_output = self._compute_attn_output_with_global_indices( > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 578, in _compute_attn_output_with_global_indices > attn_output_only_global = torch.matmul( > File "/usr/local/lib/python3.9/dist-packages/torch/fx/traceback.py", line 57, in format_stack > return traceback.format_stack() > (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:114.) > Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > /usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py:197: UserWarning: > > Previous calculation was induced by CheckpointFunctionBackward. 
Traceback of forward call that induced the previous calculation: > File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main > return _run_code(code, main_globals, None, > File "/usr/lib/python3.9/runpy.py", line 87, in _run_code > exec(code, run_globals) > File "/usr/local/lib/python3.9/dist-packages/ipykernel_launcher.py", line 16, in <module> > app.launch_new_instance() > File "/usr/local/lib/python3.9/dist-packages/traitlets/config/application.py", line 992, in launch_instance > app.start() > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelapp.py", line 612, in start > self.io_loop.start() > File "/usr/local/lib/python3.9/dist-packages/tornado/platform/asyncio.py", line 215, in start > self.asyncio_loop.run_forever() > File "/usr/lib/python3.9/asyncio/base_events.py", line 601, in run_forever > self._run_once() > File "/usr/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once > handle._run() > File "/usr/lib/python3.9/asyncio/events.py", line 80, in _run > self._context.run(self._callback, *self._args) > File "/usr/local/lib/python3.9/dist-packages/tornado/ioloop.py", line 687, in <lambda> > lambda f: self._run_callback(functools.partial(callback, future)) > File "/usr/local/lib/python3.9/dist-packages/tornado/ioloop.py", line 740, in _run_callback > ret = callback() > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 821, in inner > self.ctx_run(self.run) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 782, in run > yielded = self.gen.send(value) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 365, in process_one > yield gen.maybe_future(dispatch(*args)) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell > yield gen.maybe_future(handler(stream, idents, msg)) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 543, in execute_request > self.do_execute( > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/ipkernel.py", line 306, in do_execute > res = shell.run_cell(code, store_history=store_history, silent=silent) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell > return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2854, in run_cell > result = self._run_cell( > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell > return runner(coro) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner > coro.send(None) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3057, in run_cell_async > has_raised = await self.run_ast_nodes(code_ast.body, cell_name, > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes > if (await self.run_code(code, result, async_=asy)): > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code > exec(code_obj, self.user_global_ns, self.user_ns) > File 
"<ipython-input-120-3435b262f1ae>", line 1, in <module> > trainer.train() > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 888, in train > tr_loss += self.training_step(model, inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1248, in training_step > loss = self.compute_loss(model, inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1277, in compute_loss > outputs = model(**inputs) > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 2190, in forward > outputs = self.led( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 2044, in forward > encoder_outputs = self.encoder( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 1705, in forward > layer_outputs = torch.utils.checkpoint.checkpoint( > File "/usr/local/lib/python3.9/dist-packages/torch/utils/checkpoint.py", line 249, in checkpoint > return CheckpointFunction.apply(function, preserve, *args) > File "/usr/local/lib/python3.9/dist-packages/torch/fx/traceback.py", line 57, in format_stack > return traceback.format_stack() > (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:121.) > Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > /usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py:197: UserWarning: Error detected in CheckpointFunctionBackward. 
Traceback of forward call that caused the error: > File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main > return _run_code(code, main_globals, None, > File "/usr/lib/python3.9/runpy.py", line 87, in _run_code > exec(code, run_globals) > File "/usr/local/lib/python3.9/dist-packages/ipykernel_launcher.py", line 16, in <module> > app.launch_new_instance() > File "/usr/local/lib/python3.9/dist-packages/traitlets/config/application.py", line 992, in launch_instance > app.start() > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelapp.py", line 612, in start > self.io_loop.start() > File "/usr/local/lib/python3.9/dist-packages/tornado/platform/asyncio.py", line 215, in start > self.asyncio_loop.run_forever() > File "/usr/lib/python3.9/asyncio/base_events.py", line 601, in run_forever > self._run_once() > File "/usr/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once > handle._run() > File "/usr/lib/python3.9/asyncio/events.py", line 80, in _run > self._context.run(self._callback, *self._args) > File "/usr/local/lib/python3.9/dist-packages/tornado/ioloop.py", line 687, in <lambda> > lambda f: self._run_callback(functools.partial(callback, future)) > File "/usr/local/lib/python3.9/dist-packages/tornado/ioloop.py", line 740, in _run_callback > ret = callback() > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 821, in inner > self.ctx_run(self.run) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 782, in run > yielded = self.gen.send(value) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 365, in process_one > yield gen.maybe_future(dispatch(*args)) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell > yield gen.maybe_future(handler(stream, idents, msg)) > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 543, in execute_request > self.do_execute( > File "/usr/local/lib/python3.9/dist-packages/tornado/gen.py", line 234, in wrapper > yielded = ctx_run(next, result) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/ipkernel.py", line 306, in do_execute > res = shell.run_cell(code, store_history=store_history, silent=silent) > File "/usr/local/lib/python3.9/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell > return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2854, in run_cell > result = self._run_cell( > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell > return runner(coro) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner > coro.send(None) > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3057, in run_cell_async > has_raised = await self.run_ast_nodes(code_ast.body, cell_name, > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes > if (await self.run_code(code, result, async_=asy)): > File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code > exec(code_obj, self.user_global_ns, self.user_ns) > File 
"<ipython-input-120-3435b262f1ae>", line 1, in <module> > trainer.train() > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 888, in train > tr_loss += self.training_step(model, inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1248, in training_step > loss = self.compute_loss(model, inputs) > File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1277, in compute_loss > outputs = model(**inputs) > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 2190, in forward > outputs = self.led( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 2044, in forward > encoder_outputs = self.encoder( > File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl > return forward_call(*input, **kwargs) > File "/usr/local/lib/python3.9/dist-packages/transformers/models/led/modeling_led.py", line 1705, in forward > layer_outputs = torch.utils.checkpoint.checkpoint( > File "/usr/local/lib/python3.9/dist-packages/torch/utils/checkpoint.py", line 249, in checkpoint > return CheckpointFunction.apply(function, preserve, *args) > File "/usr/local/lib/python3.9/dist-packages/torch/fx/traceback.py", line 57, in format_stack > return traceback.format_stack() > (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:114.) > Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > --------------------------------------------------------------------------- > RuntimeError Traceback (most recent call last) > <ipython-input-120-3435b262f1ae> in <module> > ----> 1 trainer.train() > > 6 frames > /usr/local/lib/python3.9/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) > 195 # some Python versions print out the first line of a multi-line function > 196 # calls in the traceback and some print out the last line > --> 197 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass > 198 tensors, grad_tensors_, retain_graph, create_graph, inputs, > 199 allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass > > RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 8192, 1]], which is output 0 of AsStridedBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck! code : https://github.com/Tanya-11/experiment/blob/main/lonformer_experiment.ipynb ### Expected behavior trainer.train() should run
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22225/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22224/comments
https://api.github.com/repos/huggingface/transformers/issues/22224/events
https://github.com/huggingface/transformers/issues/22224
1,628,984,849
I_kwDOCUB6oc5hGFYR
22,224
Flax Whisper uses a lot of GPU memory
{ "login": "hannan72", "id": 8229163, "node_id": "MDQ6VXNlcjgyMjkxNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hannan72", "html_url": "https://github.com/hannan72", "followers_url": "https://api.github.com/users/hannan72/followers", "following_url": "https://api.github.com/users/hannan72/following{/other_user}", "gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}", "starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hannan72/subscriptions", "organizations_url": "https://api.github.com/users/hannan72/orgs", "repos_url": "https://api.github.com/users/hannan72/repos", "events_url": "https://api.github.com/users/hannan72/events{/privacy}", "received_events_url": "https://api.github.com/users/hannan72/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @hannan72! Could you try disabling [`_do_init`](https://github.com/huggingface/transformers/pulls?q=is%3Apr+_do_init+is%3Aclosed)? This way we won't initialise a random version of the parameters. Note that this isn't compatible with `from_pt=True`, so you'll have to load a checkpoint where the Flax weights have already been saved:\r\n```python\r\nmodel, params = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, _do_init=False)\r\n\r\njit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"language\"])\r\n\r\ninput_features = jnp.array(input_features, dtype=jnp.float16)\r\npred_ids = jit_generate(input_features, params=params, max_length=128, language='<|en|>') # we need to explicitly pass the params now since we're in Flax's stateless design\r\n```\r\n\r\nIf you need to load a model where you only have PyTorch weights, you can first convert them to Flax on CPU:\r\n```python\r\nimport jax\r\n\r\n# Global flag to set a specific platform, must be used at startup. ONLY DO THIS FOR SAVING WEIGHTS ON CPU!\r\njax.config.update('jax_platform_name', 'cpu')\r\n\r\nmodel = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, from_pt=True)\r\nmodel.save_pretrained(\"save/path/to/ckpt/here\")\r\n```\r\n\r\nKill this window, and then open up a new one and load:\r\n```python\r\nmodel, params = FlaxWhisperForConditionalGeneration.from_pretrained(\"save/path/to/ckpt/here\", dtype=jnp.float16, _do_init=False)\r\n```", "Thanks for your response @sanchit-gandhi \r\nI've tested your proposed approach, save flax model by converting to cpu and then restart kernel and load `FlaxWhisperForConditionalGeneration` by try disabling `_do_init`.\r\nBut Inference time increased a lot while GPU memory utilization didn't decreased significantly.\r\n\r\nresults when use `from_pt=True` for whisper-medium on a 10 second audio on A100-40GB GPU:\r\n- GPU memory usage: ~33.1GB\r\n- Inference time: ~0.22 seconds\r\n\r\nresults when use `_do_init=False` for flax saved whisper-medium on a 10 second audio on A100-40GB GPU:\r\n- GPU memory usage: ~31.1GB\r\n- Inference time: ~16.5 seconds\r\n\r\nNow Inference time is 80x larger!\r\n", "Some of the extra GPU memory can probably be attributed to how the flax generation implements the kv cache. 
Check what happens when you set max new tokens to be smaller.", "Also, it doesn't make sense to run the flax stuff within a `torch.no_grad()` context.", "I also found that whisper_small checkpoint is also taking ~33GB of GPU RAM!", "> \r\nFor my fine-tuned whisper-medium, if I don't run inside the `torch.no_grad()`, I get an error and it is just fixed by adding `torch.no_grad()`:\r\n\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n/s2t-test/client_notebook/Untitled1.ipynb Cell 25 in <cell line: 3>()\r\n 1 jax.config.update('jax_platform_name', 'cpu')\r\n----> 2 model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id , dtype=jnp.float16, from_pt=True)\r\n 3 model.save_pretrained(model_id+ \"/flax/\")\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/modeling_flax_utils.py:810, in FlaxPreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, dtype, *model_args, **kwargs)\r\n 807 model = cls(config, *model_args, _do_init=_do_init, **model_kwargs)\r\n 809 if from_pt:\r\n--> 810 state = load_pytorch_checkpoint_in_flax_state_dict(model, resolved_archive_file, is_sharded)\r\n 811 else:\r\n 812 if is_sharded:\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/modeling_flax_pytorch_utils.py:62, in load_pytorch_checkpoint_in_flax_state_dict(flax_model, pytorch_checkpoint_path, is_sharded, allow_missing_keys)\r\n 59 pt_state_dict = torch.load(pt_path, map_location=\"cpu\")\r\n 60 logger.info(f\"PyTorch checkpoint contains {sum(t.numel() for t in pt_state_dict.values()):,} parameters.\")\r\n---> 62 flax_state_dict = convert_pytorch_state_dict_to_flax(pt_state_dict, flax_model)\r\n 63 else:\r\n 64 # model is sharded and pytorch_checkpoint_path already contains the list of .pt shard files\r\n 65 flax_state_dict = convert_pytorch_sharded_state_dict_to_flax(pytorch_checkpoint_path, flax_model)\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/modeling_flax_pytorch_utils.py:128, in convert_pytorch_state_dict_to_flax(pt_state_dict, flax_model)\r\n 126 def convert_pytorch_state_dict_to_flax(pt_state_dict, flax_model):\r\n...\r\n--> 128 pt_state_dict = {k: v.numpy() for k, v in pt_state_dict.items()}\r\n 130 model_prefix = flax_model.base_model_prefix\r\n 132 # use params dict if the model contains batch norm layers\r\n\r\nRuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.\r\n```\r\n\r\n(However pretrained models does not need to be loaded inside `torch.no_grad()` )\r\n\r\nAlbeit the results I mentioned after @sanchit-gandhi 's answer, was test with and without `torch.no_grad()` and it didn't make any change.", "> Now Inference time is 80x larger!\r\n\r\nThere shouldn't be any difference to inference time - are you certain you're running on GPU here? Make sure you **have not** set:\r\n```\r\njax.config.update('jax_platform_name', 'cpu')\r\n```", "> > Now Inference time is 80x larger!\r\n> \r\n> There shouldn't be any difference to inference time - are you certain you're running on GPU here? Make sure you **have not** set:\r\n> \r\n> ```\r\n> jax.config.update('jax_platform_name', 'cpu')\r\n> ```\r\n\r\nYes, I kill the window after saving the flax model and afterwards I don't move weights to CPU anymore.\r\nBut it is so slow.\r\nHave you tested this approach @sanchit-gandhi ?\r\n", "> Have you tested this approach @sanchit-gandhi ?\r\n\r\nExtensively! 
See my results for A100 (PyTorch) vs pmap (TPU v4-8 + JAX):\r\n\r\n\r\n![Screenshot 2023-04-03 at 11 54 17](https://user-images.githubusercontent.com/93869735/229489827-56b52e7c-fab3-4c37-a02e-c5c34132ace6.png)\r\n\r\nCould you perhaps share your code @hannan72? There shouldn't be any performance difference between using / not using `_do_init`.", "It could also be that we're recompiling each time - would be great to see your code here @hannan72 to verify!", "> It could also be that we're recompiling each time - would be great to see your code here @hannan72 to verify!\r\n\r\nThis is my full code:\r\n\r\nFirstly, PyTorch model is loaded and converted to Flax an then saved:\r\n```\r\nimport jax\r\nimport jax.numpy as jnp\r\nimport torch\r\nfrom transformers import FlaxWhisperForConditionalGeneration, WhisperForConditionalGeneration, WhisperProcessor\r\n\r\npt_model_path = \"/client_notebook/whisper_model_chkp\"\r\nmodel_id = \"/client_notebook/flax_whisper_model\"\r\n\r\njax.config.update('jax_platform_name', 'cpu')\r\nwith torch.no_grad():\r\n model = FlaxWhisperForConditionalGeneration.from_pretrained(pt_model_path, dtype=jnp.float16, from_pt=True)\r\n model.save_pretrained(model_id)\r\n```\r\n\r\nFor deploying the Flax model, following code is used:\r\n```\r\nimport jax\r\nimport jax.numpy as jnp\r\nimport torch\r\nimport flax\r\nfrom scipy.io import wavfile\r\nimport time\r\nfrom transformers import FlaxWhisperForConditionalGeneration, WhisperForConditionalGeneration, WhisperProcessor\r\n\r\nmodel_id = \"/client_notebook/flax_whisper_model\"\r\nprocessor = WhisperProcessor.from_pretrained(model_id)\r\n\r\nwith torch.no_grad():\r\n model, params = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, _do_init=False)\r\n jit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"language\", \"task\"])\r\n\r\naudio_file_path = \"sample_audio_5s.wav\"\r\nsamplerate, data_waveform = wavfile.read(audio_file_path)\r\nata_waveform = (data_waveform)/32768.0\r\ninput_features = processor(data_waveform, padding=\"max_length\", sampling_rate=16000, return_tensors=\"pt\").input_features\r\n\r\nruntime=[]\r\nfor i in range(5):\r\n start_time = time.time()\r\n input_features = jnp.array(input_features, dtype=jnp.float16)\r\n pred_ids = jit_generate(input_features, params=params, max_length=128, language='<|de|>', task =\"transcribe\")\r\n runtime.append(time.time() - start_time)\r\nprint(\"Inference time:\\n\", runtime)\r\n```\r\nAnd the output is as follows:\r\n```\r\nInference time: \r\n[70.23309993743896, 14.300963640213013, 12.430477142333984, 13.643242120742798, 12.125237703323364]\r\n```\r\n\r\nGPU memory utilization: 31,127 MB\r\nGPU Type: 1x A100-40GB\r\nmodel checkpoint: whisper_medium\r\n\r\n* Note: GPU memory utilization when the model is directly imported from pt model (By passing `from_pt=True`) is 31,587MB.\r\nIt is just 460MB larger. But this value (460MB) is exactly the same GPU memory utilization when I put the model to cpu by running `jax.config.update('jax_platform_name', 'cpu')` during the saving of Flax model.\r\n\r\n@sanchit-gandhi ", "Hey @hannan72 - thanks for the super detailed report and attaching your code. This is indeed a very strange phenomenon that we're seeing with such high memory utilisation for the Flax model. 
Based on what you've said, I think all of this is coming from when we load the model, rather than from when we do the forward pass.\r\n\r\nI also ran a few tests on an A100, where I was comfortably able to fit a batch size of 16 on a 40GB device. If we're getting 31GB memory in loading, there's no way that's persistent for then the forward pass, otherwise a batch size of 16 wouldn't be possible.\r\n\r\nI wonder whether we can trick JAX into using the CPU for the heavy weight loading, and then move the weights onto the GPU for the forward pass? Something along the lines of:\r\n\r\n```python\r\nimport jax\r\nimport jax.numpy as jnp\r\n\r\nfrom transformers import FlaxWhisperForConditionalGeneration, WhisperForConditionalGeneration, WhisperProcessor\r\n\r\nmodel_id = \"/client_notebook/flax_whisper_model\"\r\nprocessor = WhisperProcessor.from_pretrained(model_id)\r\n\r\n# load weights on CPU\r\njax.config.update('jax_platform_name', 'cpu')\r\nmodel, params = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, _do_init=False)\r\n\r\n# now move weights to GPU\r\njax.config.update('jax_platform_name', 'gpu')\r\nparams = jax.device_put(params, 'gpu')\r\n\r\njit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"language\", \"task\"])\r\n...\r\n```\r\n\r\nThis could be a workaround, but not a fix to the high memory usage we're seeing during initialisation", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,684
1,684
NONE
null
I'm using Flax whisper-medium and it is now ~3x faster than the PyTorch deployment, but it is allocating ~10x more GPU memory: loading the PyTorch model takes ~3GB, while loading Flax whisper-medium takes >30GB of VRAM. Is this huge memory allocation normal? And is there any built-in way to cut it down? @andyehrenberg @ArthurZucker @sanchit-gandhi The code for loading the Flax model: ``` with torch.no_grad(): model = FlaxWhisperForConditionalGeneration.from_pretrained(model_id, dtype=jnp.float16, from_pt=True) jit_generate = jax.jit(model.generate, static_argnames=["max_length", "language"]) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22224/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22223/comments
https://api.github.com/repos/huggingface/transformers/issues/22223/events
https://github.com/huggingface/transformers/pull/22223
1,628,833,718
PR_kwDOCUB6oc5MSBUx
22,223
fix typos in llama.mdx
{ "login": "keturn", "id": 83819, "node_id": "MDQ6VXNlcjgzODE5", "avatar_url": "https://avatars.githubusercontent.com/u/83819?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keturn", "html_url": "https://github.com/keturn", "followers_url": "https://api.github.com/users/keturn/followers", "following_url": "https://api.github.com/users/keturn/following{/other_user}", "gists_url": "https://api.github.com/users/keturn/gists{/gist_id}", "starred_url": "https://api.github.com/users/keturn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keturn/subscriptions", "organizations_url": "https://api.github.com/users/keturn/orgs", "repos_url": "https://api.github.com/users/keturn/repos", "events_url": "https://api.github.com/users/keturn/events{/privacy}", "received_events_url": "https://api.github.com/users/keturn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Documentation: @sgugger, @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22223/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22223", "html_url": "https://github.com/huggingface/transformers/pull/22223", "diff_url": "https://github.com/huggingface/transformers/pull/22223.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22223.patch", "merged_at": 1679042599000 }
https://api.github.com/repos/huggingface/transformers/issues/22222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22222/comments
https://api.github.com/repos/huggingface/transformers/issues/22222/events
https://github.com/huggingface/transformers/issues/22222
1,628,829,592
I_kwDOCUB6oc5hFfeY
22,222
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.
{ "login": "candowu", "id": 4629043, "node_id": "MDQ6VXNlcjQ2MjkwNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/4629043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/candowu", "html_url": "https://github.com/candowu", "followers_url": "https://api.github.com/users/candowu/followers", "following_url": "https://api.github.com/users/candowu/following{/other_user}", "gists_url": "https://api.github.com/users/candowu/gists{/gist_id}", "starred_url": "https://api.github.com/users/candowu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/candowu/subscriptions", "organizations_url": "https://api.github.com/users/candowu/orgs", "repos_url": "https://api.github.com/users/candowu/repos", "events_url": "https://api.github.com/users/candowu/events{/privacy}", "received_events_url": "https://api.github.com/users/candowu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I face the same issue", "Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`. \r\n\r\nThis is likely due to the configuration files being created before the final PR was merged in. ", "I cloned the repo and changed the tokenizer in the config file to LlamaTokenizer \r\nbut I got\r\nValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.\r\n", "For anybody interested I was able to load an earlier saved model with the same issue using my [fork](https://github.com/mbehm/transformers) with the capitalization restored. That being said for future it's probably better to try find or save a new model with the new naming.", "@yhifny Are you able to import the tokenizer directly using `from transformers import LlamaTokenizer `? \r\n\r\nIf not, can you make sure that you are working from the development branch in your environment using:\r\n`pip install git+https://github.com/huggingface/transformers`\r\n\r\nmore details [here](https://huggingface.co/docs/transformers/installation#install-from-source).", "I can import the `LlamaTokenizer` class, but getting error that `from_pretrained` method is None. Anyone else having this issue?", "As the error message probably mentions, you need to install sentencepiece: `pip install sentencepiece`.", "Working now. I swear I had sentencepiece, but probably forgot to reset the runtime 🤦 My bad!", "> For anybody interested I was able to load an earlier saved model with the same issue using my [fork](https://github.com/mbehm/transformers) with the capitalization restored. That being said for future it's probably better to try find or save a new model with the new naming.\r\n\r\nThanks, man, your link solved all the problem", "> Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`.\r\n> \r\n> This is likely due to the configuration files being created before the final PR was merged in.\r\n\r\nChange the **LLaMATokenizer** in tokenizer_config.json into lowercase **LlamaTokenizer** and it works like a charm.", "> For anybody interested I was able to load an earlier saved model with the same issue using my [fork](https://github.com/mbehm/transformers) with the capitalization restored. That being said for future it's probably better to try find or save a new model with the new naming.\r\n\r\nThank you so much for this! Works!", "> > Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`.\r\n> > This is likely due to the configuration files being created before the final PR was merged in.\r\n> \r\n> Change the **LLaMATokenizer** in tokenizer_config.json into lowercase **LlamaTokenizer** and it works like a charm.\r\n\r\nI assume this is applied to the llama-7b cloned repo from HuggingFace right? How can I instantiate the model and the tokenizer after doing that please?", "> \r\n\r\nyou are a life saver. 
The docs on the site should be updated for this reference. ", "Thank you so much for this! Works! That's amazing!", "You can try this for a rather crazy way to find out what is the right casing for the module:\r\n\r\n```python\r\nimport transformers\r\n\r\nfrom itertools import product\r\nimport importlib\r\n\r\ndef find_variable_case(s, max_tries=1000):\r\n var_permutations = list(map(\"\".join, product(*zip(s.upper(), s.lower()))))\r\n # Intuitively, any camel casing should minimize the no. of upper chars.\r\n # From https://stackoverflow.com/a/58789587/610569\r\n var_permutations.sort(key=lambda ss: (sum(map(str.isupper, ss)), len(ss)))\r\n for i, v in enumerate(var_permutations):\r\n if i > max_tries:\r\n return\r\n try:\r\n dir(transformers).index(v)\r\n return v\r\n except:\r\n continue\r\n\r\n\r\nv = find_variable_case('LLaMatokenizer')\r\nexec(f\"from transformers import {v}\")\r\nvars()[v]\r\n```\r\n\r\n[out]:\r\n\r\n\r\n```\r\ntransformers.utils.dummy_sentencepiece_objects.LlamaTokenizer\r\n```", "I encountered the same issue identified in this thread today, 4/2/2023. The post https://github.com/huggingface/transformers/issues/22222#issuecomment-1477171703 fixed the problem for me. \r\n\r\nThank you.", "Hi! I am facing the same problem. I try to import LlamaTokenizer,\r\nBut:---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\nCell In[27], line 1\r\n----> 1 from transformers import LlamaTokenizer \r\n\r\nImportError: cannot import name 'LlamaTokenizer' from 'transformers' (/usr/local/anaconda3/envs/abc/lib/python3.10/site-packages/transformers/__init__.py)\r\n\r\nand the version of transformers is \"transformers 4.28.0.dev0 pypi_0 pypi\"\r\n\r\nPlease tell me how to fix it.", "You need to install the library from source to be able to use the LLaMA model.", "> You need to install the library from source to be able to use the LLaMA model.\r\n\r\nThanks! Where can I get it? And how do I install it? \r\nActually I have already installed transformers 4.28.0.dev0, so I'm not sure what you mean.", "You can open the documentation at the [install page](https://huggingface.co/docs/transformers/installation#install-from-source).", "Great! I restarted my server and it works! Thank you!!!", "Hi\r\n\r\nI installed from source \r\n\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers\r\npip install -e .\r\n\r\n\r\n pip list shows:\r\n\r\ntransformers 4.29.0.dev0 D:\\myfolder\\transformers\r\n\r\nbut I still have\r\n\r\nValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.\r\n\r\n", "+1 on @thibaudart's comment, I have the same issue.", "> Hi\r\n> \r\n> I installed from source\r\n> \r\n> git clone https://github.com/huggingface/transformers.git cd transformers pip install -e .\r\n> \r\n> pip list shows:\r\n> \r\n> transformers 4.29.0.dev0 D:\\myfolder\\transformers\r\n> \r\n> but I still have\r\n> \r\n> ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.\r\n\r\nHey, try this repo: `pip install git+https://github.com/mbehm/transformers`, maybe it can work.", "Will this problem be fixed by updating to the newest version of transformers, or must we modify the config file manually each time?", "You should just stop using that checkpoint. The maintainers of that repo have made it clear that they are not interested in being compatible with Transformers by ignoring the 62 PRs trying to fix their checkpoints. 
The huggyllama checkpoints are confirmed to work if you are looking for an alternative (but you should still request the weights from Meta following their official form).\r\n\r\nThere are now 903 checkpoints for llama on the Hub and only the 4 from decapoda-research do not work, since they were created before the PR for Llama was merged into Transformers. We won't break the code for the other 899 checkpoints.", " if( \"LLaMATokenizer\" == tokenizer_class_candidate ): ## add these 2 lines to solve it.\r\n tokenizer_class_candidate = 'LlamaTokenizer' \r\n tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate)", "@MasterLivens hi, I am currently using Colab; which file should I add this code to? ", "@zhiyixu The code being referred to should go into .../site-packages/transformers/models/auto/tokenization_auto.py\r\n\r\nHowever, what worked for me was updating my transformers and tokenizers packages. \r\ntokenization_auto.py has a mapping of tokenizers at the beginning and I realized that llama wasn't included in the version I had.", "> > Hi @candowu, thanks for raising this issue. This is arising, because the `tokenizer` in the [config on the hub](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/tokenizer_config.json) points to `LLaMATokenizer`. However, the tokenizer in the library is `LlamaTokenizer`.\r\n> > This is likely due to the configuration files being created before the final PR was merged in.\r\n> \r\n> Change the **LLaMATokenizer** in tokenizer_config.json into lowercase **LlamaTokenizer** and it works like a charm.\r\n\r\nCan you please enlighten me on how this could be achieved? I'm new to this " ]
1,679
1,695
1,679
NONE
null
### System Info 4.27.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I tested llama in Colab; here is my code and output: !pip install git+https://github.com/huggingface/transformers !pip install sentencepiece import torch from transformers import pipeline,LlamaTokenizer,LlamaForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" print(device) # tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") # model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf") generator = pipeline(model="decapoda-research/llama-7b-hf", device=device) generator("I can't believe you did such a ") ValueError Traceback (most recent call last) [<ipython-input-3-c1d71e177e5a>](https://localhost:8080/#) in <module> 7 # tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") 8 # model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf") ----> 9 generator = pipeline(model="decapoda-research/llama-7b-hf", device=device) 10 generator("I can't believe you did such a ") 1 frames [/usr/local/lib/python3.9/dist-packages/transformers/models/auto/tokenization_auto.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 675 676 if tokenizer_class is None: --> 677 raise ValueError( 678 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported." 679 ) ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported. ### Expected behavior Expected the pipeline to generate output
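A minimal sketch of the workaround confirmed in the comments above: renaming the tokenizer class in a locally cloned checkpoint's tokenizer_config.json so it matches the class name the library exports. The local path here is a placeholder, not a value from the original report.

```python
import json
from pathlib import Path

# Placeholder path to a local clone of the decapoda-research/llama-7b-hf checkpoint.
config_path = Path("llama-7b-hf") / "tokenizer_config.json"

config = json.loads(config_path.read_text())
# The checkpoint predates the final LLaMA PR, so it still says "LLaMATokenizer".
if config.get("tokenizer_class") == "LLaMATokenizer":
    config["tokenizer_class"] = "LlamaTokenizer"  # the class name transformers actually exports
    config_path.write_text(json.dumps(config, indent=2))
```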
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22222/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22222/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22221
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22221/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22221/comments
https://api.github.com/repos/huggingface/transformers/issues/22221/events
https://github.com/huggingface/transformers/issues/22221
1,628,813,628
I_kwDOCUB6oc5hFbk8
22,221
export clip to text encoder and image encoder two onnxs
{ "login": "susht3", "id": 12723964, "node_id": "MDQ6VXNlcjEyNzIzOTY0", "avatar_url": "https://avatars.githubusercontent.com/u/12723964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susht3", "html_url": "https://github.com/susht3", "followers_url": "https://api.github.com/users/susht3/followers", "following_url": "https://api.github.com/users/susht3/following{/other_user}", "gists_url": "https://api.github.com/users/susht3/gists{/gist_id}", "starred_url": "https://api.github.com/users/susht3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susht3/subscriptions", "organizations_url": "https://api.github.com/users/susht3/orgs", "repos_url": "https://api.github.com/users/susht3/repos", "events_url": "https://api.github.com/users/susht3/events{/privacy}", "received_events_url": "https://api.github.com/users/susht3/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @michaelbenayoun ", "Hi @susht3 ,\r\nYou mean that you want to export a `CLIPTextModel` and `CLIPVisionModel`?\r\n\r\nWe support the CLIP export in `optimum`:\r\n\r\n```bash\r\noptimum-cli export onnx -m openai/clip-vit-base-patch32 --task default clip\r\n```\r\n\r\nBut as I understand here, you want to export two models?", "> Hi @susht3 , You mean that you want to export a `CLIPTextModel` and `CLIPVisionModel`?\r\n> \r\n> We support the CLIP export in `optimum`:\r\n>\r\n> ```shell\r\n> optimum-cli export onnx -m openai/clip-vit-base-patch32 --task default clip\r\n> ```\r\n> \r\n> But as I understand here, you want to export two models?\r\n\r\nyes,i try to convert by transformer.onnx but failed, my code like this:\r\n\r\nmodel = CLIPModel.from_pretrained(model_path)\r\n processor = CLIPProcessor.from_pretrained(model_path)\r\n text = processor.tokenizer(\"[UNK]”, return_tensors=\"np\")\r\n image = processor.feature_extractor(Image.open(\"CLIP.png\"))\r\n text_model = model.text_model\r\n image_model = model.vision_model\r\n onnx_inputs, onnx_outputs = export(\r\n preprocessor=tokenizer, model=text_model, config=onnx_config, opset=10, output=onnx_model_path\r\n )\r\n", "You want what kind of inputs?\r\n\r\nAnyways, you should use `optimum.exporters.onnx` for this.\r\nYou should be able to export the text model easily because we have a [`CLIPTextOnnxConfig`](https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/model_configs.py#LL620C49-L620C49).\r\n\r\nFor the rest we have `CLIPOnnxConfig` as well.", "> [`CLIPTextOnnxConfig`](https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/model_configs.py#LL620C49-L620C49).\r\n\r\nthanks,and which is clip visual onxx config? i can't find it", "I think we do not have it, but you can make a PR and add it if you are interested!", "with torch.no_grad():\r\n image_features = model.encode_image(image)\r\n\r\n torch.onnx.export(model.visual,\r\n image,\r\n \"image_encoder.onnx\",\r\n input_names=(\"images\", ),\r\n output_names=(\"image_features\", ),\r\n dynamic_axes={\"images\": {\r\n 0: \"num_image\"\r\n }})\r\n # text_features = model.encode_text(text)\r\n\r\n text_features = model(text)\r\n\r\n torch.onnx.export(model, (text, ),\r\n \"text_encoder.onnx\",\r\n input_names=(\"texts\", ),\r\n output_names=(\"text_features\", ),\r\n dynamic_axes={\"texts\": {\r\n 0: \"num_text\"\r\n }})\r\n \r\n \r\n Coding like this, then you can get the image encoder and text encoder onnx model respectively" ]
1,679
1,705
null
NONE
null
### Model description I want to export CLIP to two ONNX models, a text encoder and an image encoder, but it seems it can only convert the whole model. How can I separate CLIP into two ONNX models? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
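A hedged sketch of one way to split the export, building on the `model.text_model` / `model.vision_model` idea from the comments above; the opset, output file names, and dummy input shape are arbitrary choices, not values from the thread:

```python
import torch
from transformers import CLIPProcessor, CLIPTextModel, CLIPVisionModel

model_id = "openai/clip-vit-base-patch32"
processor = CLIPProcessor.from_pretrained(model_id)
# return_dict=False makes the models return plain tuples, which trace cleanly.
text_model = CLIPTextModel.from_pretrained(model_id, return_dict=False).eval()
vision_model = CLIPVisionModel.from_pretrained(model_id, return_dict=False).eval()

text_inputs = processor.tokenizer("a photo of a cat", return_tensors="pt")
pixel_values = torch.randn(1, 3, 224, 224)  # dummy image batch for tracing

# Export the text encoder on its own.
torch.onnx.export(
    text_model,
    (text_inputs["input_ids"], text_inputs["attention_mask"]),
    "clip_text_encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    dynamic_axes={"input_ids": {0: "batch", 1: "sequence"}, "attention_mask": {0: "batch", 1: "sequence"}},
    opset_version=14,
)

# Export the image encoder on its own.
torch.onnx.export(
    vision_model,
    (pixel_values,),
    "clip_image_encoder.onnx",
    input_names=["pixel_values"],
    dynamic_axes={"pixel_values": {0: "batch"}},
    opset_version=14,
)
```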
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22221/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22220
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22220/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22220/comments
https://api.github.com/repos/huggingface/transformers/issues/22220/events
https://github.com/huggingface/transformers/issues/22220
1,628,795,580
I_kwDOCUB6oc5hFXK8
22,220
Positional Encoding for T5 family of models
{ "login": "SreehariSankar", "id": 54915320, "node_id": "MDQ6VXNlcjU0OTE1MzIw", "avatar_url": "https://avatars.githubusercontent.com/u/54915320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SreehariSankar", "html_url": "https://github.com/SreehariSankar", "followers_url": "https://api.github.com/users/SreehariSankar/followers", "following_url": "https://api.github.com/users/SreehariSankar/following{/other_user}", "gists_url": "https://api.github.com/users/SreehariSankar/gists{/gist_id}", "starred_url": "https://api.github.com/users/SreehariSankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SreehariSankar/subscriptions", "organizations_url": "https://api.github.com/users/SreehariSankar/orgs", "repos_url": "https://api.github.com/users/SreehariSankar/repos", "events_url": "https://api.github.com/users/SreehariSankar/events{/privacy}", "received_events_url": "https://api.github.com/users/SreehariSankar/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi @SreehariSankar, thanks for raising this issue. \r\n\r\nQuestions on designing custom hook for modifying the models are better placed in the [forum](https://discuss.huggingface.co/). \r\n\r\nAll of the code for the model, including producing the embeddings are in the modeling files e.g. [this one for T5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py). Note: the T5 family of models do not use the same positional embedding logic as in the traditional transformer i.e. there isn't a fixed embedding for each position, but instead a relative position embedding. " ]
1,679
1,679
null
NONE
null
### Feature request Please create a hook to allow users to modify the T5 family of models and change the default positional embeddings to custom positional embeddings. ### Motivation I am trying to build a T5 version that takes non-text input, and the traditional positional encodings are getting in the way: there is no way to switch them off, to make them learnable parameters, etc. BART gives limited access to positional encodings, but the T5 family gives nearly zero access. ### Your contribution If I knew where the positional encodings were calculated and added to the inputs, I could create this hook myself
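As the maintainer comment above notes, T5 carries position information as a learned relative-attention bias inside the first self-attention layer of each stack, not as embeddings added to the inputs. A rough, unofficial sketch of where that bias lives and one blunt way to switch it off; this is an assumption-laden workaround, not a supported hook:

```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Only the first block of each stack holds the relative position bias table.
for stack in (model.encoder, model.decoder):
    attn = stack.block[0].layer[0].SelfAttention
    assert attn.has_relative_attention_bias  # True for the first block only
    # Zero the learned bias table and freeze it, effectively removing
    # position information from self-attention.
    with torch.no_grad():
        attn.relative_attention_bias.weight.zero_()
    attn.relative_attention_bias.weight.requires_grad = False
```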
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22220/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22219
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22219/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22219/comments
https://api.github.com/repos/huggingface/transformers/issues/22219/events
https://github.com/huggingface/transformers/pull/22219
1,628,767,994
PR_kwDOCUB6oc5MRzZx
22,219
fix code example in mgp-str doc
{ "login": "wdp-007", "id": 4025053, "node_id": "MDQ6VXNlcjQwMjUwNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4025053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wdp-007", "html_url": "https://github.com/wdp-007", "followers_url": "https://api.github.com/users/wdp-007/followers", "following_url": "https://api.github.com/users/wdp-007/following{/other_user}", "gists_url": "https://api.github.com/users/wdp-007/gists{/gist_id}", "starred_url": "https://api.github.com/users/wdp-007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wdp-007/subscriptions", "organizations_url": "https://api.github.com/users/wdp-007/orgs", "repos_url": "https://api.github.com/users/wdp-007/repos", "events_url": "https://api.github.com/users/wdp-007/events{/privacy}", "received_events_url": "https://api.github.com/users/wdp-007/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Fix code example in mgp-str doc. ## Before submitting - [√] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [√] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [√] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [√] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [√] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22219/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22219", "html_url": "https://github.com/huggingface/transformers/pull/22219", "diff_url": "https://github.com/huggingface/transformers/pull/22219.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22219.patch", "merged_at": 1679046006000 }
https://api.github.com/repos/huggingface/transformers/issues/22218
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22218/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22218/comments
https://api.github.com/repos/huggingface/transformers/issues/22218/events
https://github.com/huggingface/transformers/pull/22218
1,628,386,395
PR_kwDOCUB6oc5MQiWP
22,218
Hotfix for natten on CircleCI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Here is the update regarding this issue\r\n\r\nhttps://github.com/SHI-Labs/NATTEN/issues/23#issuecomment-1473865224" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? Hotfix for natten on CircleCI. The PR CI in #22204 ran with `natten` version `0.14.4`, which worked successfully. However, when I merged that PR into `main`, natten 0.14.5 had been released and caused some issues.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22218/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22218", "html_url": "https://github.com/huggingface/transformers/pull/22218", "diff_url": "https://github.com/huggingface/transformers/pull/22218.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22218.patch", "merged_at": 1679007446000 }
https://api.github.com/repos/huggingface/transformers/issues/22217
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22217/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22217/comments
https://api.github.com/repos/huggingface/transformers/issues/22217/events
https://github.com/huggingface/transformers/pull/22217
1,628,310,708
PR_kwDOCUB6oc5MQRsX
22,217
Fix LLaMATokenizer naming
{ "login": "mbehm", "id": 699007, "node_id": "MDQ6VXNlcjY5OTAwNw==", "avatar_url": "https://avatars.githubusercontent.com/u/699007?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mbehm", "html_url": "https://github.com/mbehm", "followers_url": "https://api.github.com/users/mbehm/followers", "following_url": "https://api.github.com/users/mbehm/following{/other_user}", "gists_url": "https://api.github.com/users/mbehm/gists{/gist_id}", "starred_url": "https://api.github.com/users/mbehm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbehm/subscriptions", "organizations_url": "https://api.github.com/users/mbehm/orgs", "repos_url": "https://api.github.com/users/mbehm/repos", "events_url": "https://api.github.com/users/mbehm/events{/privacy}", "received_events_url": "https://api.github.com/users/mbehm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22217). All of your documentation changes will be reflected on that endpoint.", "Ah ok, it was preventing me from loading a saved model because of the capitalization change so thought it was a mistake. In that case I'll be closing this, for anyone coming across the same issue (\"Tokenizer class LLaMATokenizer does not exist or is not currently imported.\") they can use my fork to load them for now." ]
1,679
1,679
1,679
NONE
null
# What does this PR do? Simple fix for the naming of the LLaMATokenizer class
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22217/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22217/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22217", "html_url": "https://github.com/huggingface/transformers/pull/22217", "diff_url": "https://github.com/huggingface/transformers/pull/22217.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22217.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22216
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22216/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22216/comments
https://api.github.com/repos/huggingface/transformers/issues/22216/events
https://github.com/huggingface/transformers/pull/22216
1,628,259,418
PR_kwDOCUB6oc5MQGQ4
22,216
LLaMA house-keeping
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,679
1,679
COLLABORATOR
null
# What does this PR do? This PR just groups a couple of nits I had on the LLaMA model PR, but didn't want to add there to merge the PR quickly. I have tested the conversion scripts on all four models and they work fine. cc @zphang for information.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22216/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22216", "html_url": "https://github.com/huggingface/transformers/pull/22216", "diff_url": "https://github.com/huggingface/transformers/pull/22216.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22216.patch", "merged_at": 1679057716000 }
https://api.github.com/repos/huggingface/transformers/issues/22215
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22215/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22215/comments
https://api.github.com/repos/huggingface/transformers/issues/22215/events
https://github.com/huggingface/transformers/issues/22215
1,628,218,757
I_kwDOCUB6oc5hDKWF
22,215
torch.compile() and FSDP/DDP wrappers are called in the wrong order.
{ "login": "ani300", "id": 919977, "node_id": "MDQ6VXNlcjkxOTk3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/919977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ani300", "html_url": "https://github.com/ani300", "followers_url": "https://api.github.com/users/ani300/followers", "following_url": "https://api.github.com/users/ani300/following{/other_user}", "gists_url": "https://api.github.com/users/ani300/gists{/gist_id}", "starred_url": "https://api.github.com/users/ani300/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ani300/subscriptions", "organizations_url": "https://api.github.com/users/ani300/orgs", "repos_url": "https://api.github.com/users/ani300/repos", "events_url": "https://api.github.com/users/ani300/events{/privacy}", "received_events_url": "https://api.github.com/users/ani300/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note that we haven't tested `torch.compile` with any kind of distributed training yet, so it's normal if there are issues. If you have the fix, we'd be happy to look at a PR!", "Ok! I'll make the PR then, just figured I'd ask before." ]
1,678
1,679
1,679
CONTRIBUTOR
null
### System Info transformers main branch ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When training/fine-tuning a model, activate torch.compile() and FSDP with `torch_compile=True` and `fsdp="full_shard auto_wrap"` as training arguments. The model is compiled before the FSDP wrapping, preventing optimizations on the backwards passes. According to the PyTorch docs, both DDP and FSDP wrappers have special optimizations that run with torch.compile() to ensure model training doesn't end up slower instead of faster (see [here](https://dev-discuss.pytorch.org/t/torchdynamo-update-11-making-fsdp-and-dynamo-work-together/1037)). ### Expected behavior Therefore, the model would need to be torch.compile()'d after being wrapped in either FSDP or DDP. Right now, in `src/transformers/trainer.py` that is not the case, with compile() being the first call in `_wrap_model()`. Before making a PR with the change, I figured I'd make this bug report to ensure nothing prevents that change from happening.
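A minimal sketch of the ordering this issue proposes — wrap the model in DDP (or FSDP) first, then call `torch.compile` on the wrapper — assuming a script launched with `torchrun` on CUDA GPUs; the `Linear` layer is just a stand-in for the real model:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Launch with e.g. `torchrun --nproc_per_node=2 this_script.py`.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(16, 16).to(local_rank)  # stand-in for the real model

# Wrap first, then compile, so the compiler sees the distributed wrapper
# and can apply its DDP/FSDP-aware optimizations to the backward pass.
ddp_model = DDP(model, device_ids=[local_rank])
compiled_model = torch.compile(ddp_model)

out = compiled_model(torch.randn(8, 16, device=local_rank))
dist.destroy_process_group()
```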
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22215/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22214
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22214/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22214/comments
https://api.github.com/repos/huggingface/transformers/issues/22214/events
https://github.com/huggingface/transformers/issues/22214
1,628,183,661
I_kwDOCUB6oc5hDBxt
22,214
whisper return_timestamp error
{ "login": "pearl-yu", "id": 65966653, "node_id": "MDQ6VXNlcjY1OTY2NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/65966653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pearl-yu", "html_url": "https://github.com/pearl-yu", "followers_url": "https://api.github.com/users/pearl-yu/followers", "following_url": "https://api.github.com/users/pearl-yu/following{/other_user}", "gists_url": "https://api.github.com/users/pearl-yu/gists{/gist_id}", "starred_url": "https://api.github.com/users/pearl-yu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pearl-yu/subscriptions", "organizations_url": "https://api.github.com/users/pearl-yu/orgs", "repos_url": "https://api.github.com/users/pearl-yu/repos", "events_url": "https://api.github.com/users/pearl-yu/events{/privacy}", "received_events_url": "https://api.github.com/users/pearl-yu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting. Could you share the audio that you are using? We never stumbled upon something like this 😅 \r\nProblem seems to come from `sequence[1:relevant_timestamp]` ,but the traceback is a bit messed up\r\n\r\ncc @Narsil ", "`prediction = pipe(np.array(audio), return_timestamps=True, stride_length_s=(4, 2))['chunks']`\r\n\r\nThe \"stride_length_s\" parameter determines the length of the audio chunks to be processed at each time, as well as the length of the gaps between them. This parameter is different from the \"chunk_length_s\" parameter and is set by default to half of the \"chunk_length_s\" parameter.", "The recommended parameters are:\r\n* `chunk_length_s=30.0`\r\n* `stride_length_s=(6, 0)` (or `stride_length_s=None`, and the pipeline will set this to `(chunk_length_s / 5, 0)` for you)\r\n\r\nSee the following Colab for details: https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor?usp=sharing#scrollTo=Mh_e6rV62QUM", "Any luck here @ataturkiyebmka changing the hyper parameters?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.27.1 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <yes> - Using distributed or parallel set-up in script?: <no> ### Who can help? @ArthurZucker @younesbelkada @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` device = "cuda:0" if torch.cuda.is_available() else "cpu" pipe = pipeline( "automatic-speech-recognition", model="openai/whisper-tiny", chunk_length_s=30, device=device, ) audio, _ = librosa.load(mypath+ filename, sr = 16000) prediction = pipe(np.array(audio),return_timestamps=True)['chunks'] ``` Below is the full error message. ``` IndexError Traceback (most recent call last) [<ipython-input-20-17a93ca487ee>](https://localhost:8080/#) in <module> ----> 1 prediction = pipe(np.array(audio),return_timestamps=True,stride_length_s=(4, 2))['chunks'] 4 frames [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in __call__(self, inputs, **kwargs) 376 logger.warning( 377 "Using `chunk_length_s` is very experimental with seq2seq models. The results will not necessarily" --> 378 " be entirely accurate and will have caveats. More information:" 379 " https://github.com/huggingface/transformers/pull/20104. Ignore this warning with pipeline(...," 380 " ignore_warning=True)" [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __call__(self, inputs, num_workers, batch_size, *args, **kwargs) 1074 ) 1075 -> 1076 is_dataset = Dataset is not None and isinstance(inputs, Dataset) 1077 is_generator = isinstance(inputs, types.GeneratorType) 1078 is_list = isinstance(inputs, list) [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/pt_utils.py](https://localhost:8080/#) in __next__(self) 123 # We're out of items within a batch 124 item = next(self.iterator) --> 125 processed = self.infer(item, **self.params) 126 # We now have a batch of "inferred things". 127 if self.loader_batch_size is not None: [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in postprocess(self, model_outputs, decoder_kwargs, return_timestamps) 625 if previous_sequence[0] < (timestamp_begin + offset - overlap_time) and idx != 0: 626 break # the previous sequence is too far in the past --> 627 if len(previous_tokens) > 0: 628 # find the longest common sequence between the overlapping parts 629 index_left, index_right, match_length = _fast_find_longest_common_sequence( [/usr/local/lib/python3.9/dist-packages/transformers/pipelines/automatic_speech_recognition.py](https://localhost:8080/#) in _find_timestamp_sequence(sequences, tokenizer, feature_extractor, max_source_positions) 174 <Tip> 175 --> 176 For more information on how to effectively use `stride_length_s`, please have a look at the [ASR chunking 177 blog post](https://huggingface.co/blog/asr-chunking). 178 IndexError: list index out of range ``` ### Expected behavior The tiny.en model returns a 'list out of index' error for some files. It works for all files if not adding the return_timestamps = True argument. The tiny model also returns the same error for some different audio files when return_timestamps = True. The base.en and base models also return the same error for some (but different) files.
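A minimal sketch using the stride settings recommended later in this thread (`chunk_length_s=30.0` with `stride_length_s=(6, 0)`, or `stride_length_s=None` to let the pipeline derive `chunk_length_s / 5`); the audio path is illustrative:

```python
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",
    chunk_length_s=30.0,
    device=device,
)

# Recommended stride from the comments above: (6, 0) rather than (4, 2).
chunks = pipe("audio.wav", return_timestamps=True, stride_length_s=(6, 0))["chunks"]
```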
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22214/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22213
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22213/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22213/comments
https://api.github.com/repos/huggingface/transformers/issues/22213/events
https://github.com/huggingface/transformers/issues/22213
1,628,164,546
I_kwDOCUB6oc5hC9HC
22,213
LLAMA model won't release VRAM when deleted
{ "login": "devilismyfriend", "id": 87043616, "node_id": "MDQ6VXNlcjg3MDQzNjE2", "avatar_url": "https://avatars.githubusercontent.com/u/87043616?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devilismyfriend", "html_url": "https://github.com/devilismyfriend", "followers_url": "https://api.github.com/users/devilismyfriend/followers", "following_url": "https://api.github.com/users/devilismyfriend/following{/other_user}", "gists_url": "https://api.github.com/users/devilismyfriend/gists{/gist_id}", "starred_url": "https://api.github.com/users/devilismyfriend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devilismyfriend/subscriptions", "organizations_url": "https://api.github.com/users/devilismyfriend/orgs", "repos_url": "https://api.github.com/users/devilismyfriend/repos", "events_url": "https://api.github.com/users/devilismyfriend/events{/privacy}", "received_events_url": "https://api.github.com/users/devilismyfriend/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You also need to call the garbage collector:\r\n```python\r\nimport gc\r\n\r\ngc.collect()\r\n```", "It worked thanks!, I did try it before and it didn't but checked it again now and it did lol" ]
1,678
1,678
1,678
NONE
null
### System Info latest git, windows (tested on WSL as well), pytorch 1.13 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run this code tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") model = LlamaForCausalLM.from_pretrained( "decapoda-research/llama-7b-hf", load_in_8bit=True, torch_dtype=torch.float16, device_map={'':0}, ) import time #try to unload the model from GPU memory del model torch.cuda.empty_cache() time.sleep(5) ### Expected behavior After del the memory should be freed from VRAM.
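A minimal sketch of the freeing sequence that resolved this thread, combining the `del` from the report with the `gc.collect()` suggested in the comments (8-bit loading is omitted here to keep the sketch free of the bitsandbytes dependency):

```python
import gc
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map={"": 0},
)

# Drop the last Python reference, force a garbage-collection pass, then ask
# the CUDA caching allocator to release the now-unreferenced blocks.
del model
gc.collect()
torch.cuda.empty_cache()
```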
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22213/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22212
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22212/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22212/comments
https://api.github.com/repos/huggingface/transformers/issues/22212/events
https://github.com/huggingface/transformers/pull/22212
1,628,041,347
PR_kwDOCUB6oc5MPWuh
22,212
Add MaskedImageModelingOutput
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Pinging @NielsRogge for the final approval", "@NielsRogge could you take another look? I think all comments are addressed" ]
1,678
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? - Adds `MaskedImageModelingOutput` and `TFMaskedImageModelingOutput` classes for masked image modeling / completion / in-painting models. - Replaces the inaccurate MaskedLMOutput used for ViT and DeiT MIM heads with the new output class - Ensures backward compatibility by adding `logits` as a property to the new output class ## Before submitting - [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
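To make the backward-compatibility point concrete, here is a hedged usage sketch; it assumes the new output's primary field is named `reconstruction`, with the `logits` property mentioned in the PR description keeping old code working:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ViTForMaskedImageModeling

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.randint(0, 2, (1, num_patches)).bool()  # random patch mask

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
reconstruction = outputs.reconstruction  # new primary field (assumed name)
legacy = outputs.logits                  # still works via the compat property
```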
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22212/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22212", "html_url": "https://github.com/huggingface/transformers/pull/22212", "diff_url": "https://github.com/huggingface/transformers/pull/22212.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22212.patch", "merged_at": 1679459747000 }
https://api.github.com/repos/huggingface/transformers/issues/22211
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22211/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22211/comments
https://api.github.com/repos/huggingface/transformers/issues/22211/events
https://github.com/huggingface/transformers/pull/22211
1,628,033,318
PR_kwDOCUB6oc5MPU-X
22,211
Generate: Add assisted generation
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts @sgugger -- since this PR is a bit more complex than most, I've decided to request a review from you two 🤗 ", "@amyeroberts regarding splitting up, I totally agree! And not only on this method but on most parts of `GenerationMixin`. Not only are the functions long, but they reuse a significant part of the logic. I want to address that in the near future, by designing a `.generate()` that can be somehow composed of a sequence of smaller functional blocks. I haven't figured out the deets, but I'd expect that a good implementation would get us better readability, less code duplication, and higher flexibility for HW/model/decoding-specific implementations! 💅 \r\n\r\nBefore merging, I'm going to double-check that the current code keeps the performance numbers I got a few weeks ago. If everything goes well, it will be merged today 🙏 ", "@gante Excellent work! I just dive into the code these days and found that the impl only support batchsize 1. Speculative Decoding have no relative to batchsize. I guess supporting bs >1 will be more hard to impl so you just support bs=1 firstly? \r\n\r\nAnother question is about the decision of whether the candidate tokens generated by draft model be accepted or not. The [process](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L4529C17-L4529C17) `n_matches` is not the same as Google or DeepMind's paper. I found an [impl](https://github.com/feifeibear/LLMSpeculativeSampling/blob/main/sampling/speculative_sampling.py#L167C20-L167C61) of DeepMind's algrothm. Could you please explain it with more detail? I have the Thanks in advance. ", "@zhaoyang-star thank you for the kind words :)\r\n\r\nRe batch size 1: it was a mix of implementation simplicity and diminishing returns. Since `transformers` works with batched inputs with fixed length, efficiently applying assisted generation/speculative decoding would necessarily mean applying extra logic to realign the tensors (e.g. row 1 might get 5 speculated tokens, but row 2 only gets 2 -- row 2 would need to be left-padded to continue). Moving to [nested tensors](https://pytorch.org/docs/stable/nested.html) will get us rid of this limitation :)\r\n\r\nRe implementation differences: the two techniques were developed independently, despite relying on the same principle (saving GPU memory bandwidth with the aid of a smaller model). To put it plainly:\r\n1. Speculative Decoding is better when sampling is active with temperatures above 0.3-0.4 -- it employs a clever mathematical trick to handle decoding mismatches. However, you must define how many tokens you want to fetch from the smaller model.\r\n2. Assisted Generation (our implementation) is better in the other scenarios because it has a dynamic heuristic to decide how many tokens to fetch from the assistant model, based on the assistant hit ratio. This means it can adapt according to the difficulty of the prompt, with additional no user input.\r\n\r\nFor the record, we will be adding the sampling trick to our implementation soon, so it will be the best of both worlds :)", "@gante Thanks for your reply. \r\n\r\n> Speculative Decoding is better when sampling is active with temperatures above 0.3-0.4 -- it employs a clever mathematical trick to handle decoding mismatches. 
However, you must define how many tokens you want to fetch from the smaller model.\r\n\r\nHow to get the conclusion that Speculative Decoding is better when sampling is active with temperatures above 0.3-0.4, and Assisted Generation is better in other scenarios? If the conclusion is right, is it better that we implement both the two methods and decide to execute it according to the vaule of temperature?\r\n\r\nBTW, Assisted Generation is much easier to understand than Speculative Decoding. So I perfer to use Assisted Generation. ", "@zhaoyang-star The conclusion is empirical, with the `0.3-0.4` being a personal rule of thumb based on my assisted generation tests and the values reported in the speculative decoding paper 🤗 It certainly depends on the model and on the task itself.\r\n\r\nAfter we merge the mathematical trick from speculative decoding, calling `assisted_generation` will actually be the best of both worlds -- it will use the mathematical trick from speculative decoding AND apply the heuristic to determine the number of candidate tokens from assisted generation, all without additional parameterization!", "@gante Thanks a lot. Can't waiting to try the merged version. I saw https://github.com/huggingface/transformers/pull/27270/ is relative to speculative decoding.", "@gante Have you thought of any solution and approach to implement assisted generation on transformer-nueronx?" ]
1,678
1,700
1,681
MEMBER
null
# What does this PR do? Here it is, the PR for assisted generation 🙌 In a nutshell, it uses an assistant model (which should be a smaller model with the same tokenizer) to speed up generation, taking advantage of the reduced need for memory transfers in the main model forward pass. It leverages the same property that makes batched inference faster per token. Since it is meant to be a reference implementation, the code is meant to be clear and well-commented. If you come across any non-obvious steps, let me know so I can clarify them! Follow-up steps after this PR: 1. Add support for a `sample` version of assisted generation (many cool apps rely on sampling, including chatbots/assistants) 2. Write a blog post a prepare strong communications about the feature _________________________________________________________________ To process the potential speedup visually, consider the following script and the two videos. They correspond to greedy search using a 6.9B GPTNeoX model on an nvidia 3090 🚀 <details> <summary>Script</summary> ```py from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer import torch import time model_id = "EleutherAI/pythia-6.9b-deduped" assistant_id = "EleutherAI/pythia-160m-deduped" tokenizer = AutoTokenizer.from_pretrained(model_id) assistant_model = AutoModelForCausalLM.from_pretrained(assistant_id) assistant_model = assistant_model.to("cuda") model_kwargs = { "pretrained_model_name_or_path": model_id, "device_map": "auto", "max_memory": {0: "20GiB", "cpu": "50GiB"}, "torch_dtype": torch.float16, } model = AutoModelForCausalLM.from_pretrained(**model_kwargs) inputs = tokenizer("Here's how to cook a good ramen:", return_tensors="pt").to("cuda") streamer = TextStreamer(tokenizer=tokenizer) print("Without assistance:") start = time.time() model.generate(**inputs, streamer=streamer, max_new_tokens=128) print(f"Elapsed time: {time.time() - start:.2f} seconds") print("With assistance:") start = time.time() model.generate(**inputs, assistant_model=assistant_model, streamer=streamer, max_new_tokens=128) print(f"Elapsed time: {time.time() - start:.2f} seconds") ``` </details> Without assistant | With assistant :-------------------------:|:-------------------------: <img src="https://user-images.githubusercontent.com/12240844/232580502-19965b8d-0f9e-45d8-b57b-86fad2d4681b.gif"/> | <img src="https://user-images.githubusercontent.com/12240844/232580535-30a27fd2-1338-4c71-a0ba-68055a825605.gif"/> (focus on the speed and the fact that the output is the same, not on the output itself)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22211/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22211", "html_url": "https://github.com/huggingface/transformers/pull/22211", "diff_url": "https://github.com/huggingface/transformers/pull/22211.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22211.patch", "merged_at": 1681835817000 }
https://api.github.com/repos/huggingface/transformers/issues/22210
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22210/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22210/comments
https://api.github.com/repos/huggingface/transformers/issues/22210/events
https://github.com/huggingface/transformers/issues/22210
1,627,826,533
I_kwDOCUB6oc5hBqll
22,210
Rag-end2end
{ "login": "Rajdoshi99", "id": 44093439, "node_id": "MDQ6VXNlcjQ0MDkzNDM5", "avatar_url": "https://avatars.githubusercontent.com/u/44093439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rajdoshi99", "html_url": "https://github.com/Rajdoshi99", "followers_url": "https://api.github.com/users/Rajdoshi99/followers", "following_url": "https://api.github.com/users/Rajdoshi99/following{/other_user}", "gists_url": "https://api.github.com/users/Rajdoshi99/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rajdoshi99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rajdoshi99/subscriptions", "organizations_url": "https://api.github.com/users/Rajdoshi99/orgs", "repos_url": "https://api.github.com/users/Rajdoshi99/repos", "events_url": "https://api.github.com/users/Rajdoshi99/events{/privacy}", "received_events_url": "https://api.github.com/users/Rajdoshi99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Rajdoshi99, thanks for raising this issue!\r\n\r\nIt seems the issue is coming from pytorch lighnting. So that we can best help, could you give us more information about the error and how to reproduce. Specifically: \r\n* Your environment. Run `transformers-cli env` in the terminal to get the necessary info to share\r\n* A snippet of code that we can run to try and reproduce the error\r\n* A full trackback of the error that occurred ", "Following from my comment above ^ - this is likely an issue with the pytorch lightning version and its compatibility with the example. \r\n\r\nPytorch Lighting 1.6.4 was released last June, whereas this example is three years old. We don't actively maintain the examples in the library. I would recommend downgrading the pytorch lighting version in your environment if you wish to run it. ", "HI @amyeroberts \r\nTraceback (most recent call last):\r\n File \"/home/ec2-user/SageMaker/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 815, in <module>\r\n main(args)\r\n File \"/home/ec2-user/SageMaker/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py\", line 780, in main\r\n trainer: pl.Trainer = generic_train(\r\n File \"/home/ec2-user/SageMaker/transformers/examples/research_projects/rag-end2end-retriever/lightning_base.py\", line 410, in generic_train\r\n trainer.fit(model)\r\n File \"/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 770, in fit\r\n self._call_and_handle_interrupt(\r\n File \"/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 723, in _call_and_handle_interrupt\r\n return trainer_fn(*args, **kwargs)\r\n File \"/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 811, in _fit_impl\r\n results = self._run(model, ckpt_path=self.ckpt_path)\r\n File \"/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 1217, in _run\r\n self.strategy.setup(self)\r\n File \"/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py\", line 179, in setup\r\n self.setup_optimizers(trainer)\r\n File \"/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py\", line 128, in setup_optimizers\r\n self.optimizers, self.lr_scheduler_configs, self.optimizer_frequencies = _init_optimizers_and_lr_schedulers(\r\n File \"/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py\", line 195, in _init_optimizers_and_lr_schedulers\r\n _validate_scheduler_api(lr_scheduler_configs, model)\r\n File \"/home/ec2-user/SageMaker/.cs/conda/envs/codeserver_py39/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py\", line 350, in _validate_scheduler_api\r\n raise MisconfigurationException(\r\npytorch_lightning.utilities.exceptions.MisconfigurationException: The provided lr scheduler `LambdaLR` doesn't follow PyTorch's LRScheduler API. 
You should override the `LightningModule.lr_scheduler_step` hook with your own logic if you are using a custom LR scheduler.\r\n\r\nRag-End2End Retriever\r\n\r\n\r\n", "Transformer CLI ENV\r\n\r\n- `transformers` version: 4.27.1\r\n- Platform: Linux-5.10.157-139.675.amzn2.x86_64-x86_64-with-glibc2.26\r\n- Python version: 3.9.16\r\n- Huggingface_hub version: 0.13.2\r\n- PyTorch version (GPU?): 2.0.0+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <yes>\r\n- Using distributed or parallel set-up in script?: <ray>", "Hi @Rajdoshi99, thanks for providing this information! \r\n\r\nLooking at the traceback, the issue is indeed arising from pytorch lightning itself and its compatibility with the script. We don't actively maintain the research examples. If you wish to run the script I would suggest downgrading the pytorch lighting version in your environment. As the script is old, I unfortunately can't guarantee that will be enough to make it work. ", "@Rajdoshi99 Hey Raj, I'm getting the same error. I tried different versions of PyTorch Lightning with no success. Were you able to fix this bug?" ]
1,678
1,692
1,679
NONE
null
### System Info ```shell raise misconfigurationexception( pytorch_lightning.utilities.exceptions.misconfigurationexception: the provided lr scheduler `lambdalr` doesn't follow pytorch's lrscheduler api. you should override the `lightningmodule.lr_scheduler_step` hook with your own logic if you are using a custom lr scheduler. stopped all 7 ray processes. pytorch_lightning=1.6.4 ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction raise misconfigurationexception( pytorch_lightning.utilities.exceptions.misconfigurationexception: the provided lr scheduler `lambdalr` doesn't follow pytorch's lrscheduler api. you should override the `lightningmodule.lr_scheduler_step` hook with your own logic if you are using a custom lr scheduler. stopped all 7 ray processes. ### Expected behavior ```shell It's not working; it was working previously, but now there is a misconfiguration error. ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
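A minimal sketch of the override the error message asks for, using the `pytorch_lightning` 1.6 hook signature; the module name is hypothetical and stands in for the example's `LightningModule` subclass:

```python
import pytorch_lightning as pl

class RagFinetuneModule(pl.LightningModule):  # hypothetical name
    # pytorch_lightning 1.6 calls this hook to step schedulers it does not
    # recognise, such as torch.optim.lr_scheduler.LambdaLR here.
    def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
        if metric is None:
            scheduler.step()
        else:
            scheduler.step(metric)
```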
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22210/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22209
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22209/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22209/comments
https://api.github.com/repos/huggingface/transformers/issues/22209/events
https://github.com/huggingface/transformers/pull/22209
1,627,818,540
PR_kwDOCUB6oc5MOmDf
22,209
Add LlamaForSequenceClassification
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> I would also add potentially a test with single_label_classification to make sure everything works!\r\n\r\nDone in https://github.com/huggingface/transformers/pull/22209/commits/6737e380fc6a4cb73150da4fa821dd463f9a7204 :)\r\n", "Awesome thanks a lot @lewtun ! 🚀 " ]
1,678
1,679
1,679
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds the `LlamaForSequenceClassification` class, which among standard applications can be used for reward modelling in the RLHF pipeline :) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> cc @ArthurZucker and @younesbelkada
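As a quick illustration of the reward-modelling use case mentioned above, a minimal sketch; the checkpoint path is a placeholder, and `num_labels=1` gives the single scalar head typically used for rewards:

```python
import torch
from transformers import LlamaForSequenceClassification, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-checkpoint")  # placeholder
model = LlamaForSequenceClassification.from_pretrained(
    "path/to/llama-checkpoint", num_labels=1
)

inputs = tokenizer("The assistant's answer was helpful.", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits  # shape (batch_size, num_labels)
```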
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22209/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22209/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22209", "html_url": "https://github.com/huggingface/transformers/pull/22209", "diff_url": "https://github.com/huggingface/transformers/pull/22209.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22209.patch", "merged_at": 1679060367000 }
https://api.github.com/repos/huggingface/transformers/issues/22208
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22208/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22208/comments
https://api.github.com/repos/huggingface/transformers/issues/22208/events
https://github.com/huggingface/transformers/pull/22208
1,627,741,397
PR_kwDOCUB6oc5MOVlc
22,208
fixes a typo in WhisperFeatureExtractor docs.
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes a typo. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22208/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22208", "html_url": "https://github.com/huggingface/transformers/pull/22208", "diff_url": "https://github.com/huggingface/transformers/pull/22208.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22208.patch", "merged_at": 1678982886000 }
https://api.github.com/repos/huggingface/transformers/issues/22207
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22207/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22207/comments
https://api.github.com/repos/huggingface/transformers/issues/22207/events
https://github.com/huggingface/transformers/pull/22207
1,627,677,709
PR_kwDOCUB6oc5MOH2b
22,207
[`XGLM`] Add `accelerate` support for XGLM
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
# What does this PR do? Fixes: https://github.com/huggingface/transformers/issues/22188 With this PR, users will be able to load XGLM models in 8-bit. cc @amyeroberts
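A minimal sketch of what this enables, assuming `accelerate` and `bitsandbytes` are installed and a CUDA GPU is available; the 564M checkpoint is simply the smallest public XGLM model:

```python
from transformers import AutoTokenizer, XGLMForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
# device_map="auto" dispatches the weights via accelerate; load_in_8bit=True
# quantizes the linear layers with bitsandbytes.
model = XGLMForCausalLM.from_pretrained(
    "facebook/xglm-564M", device_map="auto", load_in_8bit=True
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(0)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0]))
```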
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22207/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22207", "html_url": "https://github.com/huggingface/transformers/pull/22207", "diff_url": "https://github.com/huggingface/transformers/pull/22207.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22207.patch", "merged_at": 1678979886000 }
https://api.github.com/repos/huggingface/transformers/issues/22206
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22206/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22206/comments
https://api.github.com/repos/huggingface/transformers/issues/22206/events
https://github.com/huggingface/transformers/issues/22206
1,627,562,811
I_kwDOCUB6oc5hAqM7
22,206
Error with load_tf_weights_in_albert when transforming tf checkpoint to pytorch model
{ "login": "Ala-Na", "id": 67599180, "node_id": "MDQ6VXNlcjY3NTk5MTgw", "avatar_url": "https://avatars.githubusercontent.com/u/67599180?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ala-Na", "html_url": "https://github.com/Ala-Na", "followers_url": "https://api.github.com/users/Ala-Na/followers", "following_url": "https://api.github.com/users/Ala-Na/following{/other_user}", "gists_url": "https://api.github.com/users/Ala-Na/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ala-Na/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ala-Na/subscriptions", "organizations_url": "https://api.github.com/users/Ala-Na/orgs", "repos_url": "https://api.github.com/users/Ala-Na/repos", "events_url": "https://api.github.com/users/Ala-Na/events{/privacy}", "received_events_url": "https://api.github.com/users/Ala-Na/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note that this command only works for the original TensorFlow checkpoints of Albert and we do not maintain it as those checkpoints have long been converted and are available on the Hub. To convert Hugging Face models from TensorFlow to PyTorch or vice versa, use [this guide](https://huggingface.co/docs/transformers/model_sharing#convert-a-model-for-all-frameworks).", "Thanks @sgugger for your quick answer ! I tried the guide with :\r\n```\r\npytorch_model = AlbertForPreTraining.from_pretrained(\"tf_checkpoint_folder\", from_tf=True)\r\npt_model.save_pretrained(\"generated_pytorch_model\")\r\n```\r\nBut the same error is occurring.\r\n\r\nWhat do you mean by \"original Tensorflow checkpoints\" ? ", "Where does your TensorFlow checkpoint come from?", "From a pretraining with google-resarch official github code", "We do not maintain a generic conversion command that works with all repos outside of Hugging Face. The command you are using is the one we used to convert the original ALBERT checkpoints three years ago, but we don't guarantee it will work with more recent ones.\r\n\r\nYou will need to adapt the code a bit yourself to solve this error, I'm afraid.", "I'm answering after trying to adapt a little bit the script. I managed to copy tensorflow variables to seemingly corresponding pytorch tensors. Only optimizers and the layer norm of AlbertAttention module keeps their originally instantiated values (as there is no need to copy optimizers variables and there is not equivalent for the layer norm of attention module). But for some reason, the model I obtained doesn't seems \"right\" as it doesn't perform learning when fine-tuned on a simple task.\r\nMaybe some others manipulations are needed, such as the transpose operation performed in line 181 of modeling_albert.py ?\r\n\r\nSorry to bother you again with that, but is there someone which could have a slight idea of what could be done and give me some tips ?\r\n\r\nThanks again for your attention\r\n", "Hi @Ala-Na, thanks for raising an issue! \r\n\r\nFor custom situations like this, the question is best placed in the [forums](https://discuss.huggingface.co/). We try to reserve issues for feature requests and bug reports specific to the transformers library. ", "Thank you @amyeroberts for the suggestion.\r\n\r\nI just created a post about it on the forum : https://discuss.huggingface.co/t/help-appreciated-modifying-load-tf-weights-in-albert-for-transforming-albert-tensorflow-checkpoint-to-pytorch-model/34415\r\n\r\nFor anyone who may have an idea of what need to be done, please don't hesitate to respond there.\r\nThanks !", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): 2.10.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction #### The error happens when : - Using the transformers-cli convert script for the ALBERT model ``` transformers-cli convert --model_type albert \ --tf_checkpoint $ALBERT_BASE_DIR/model.ckpt-best \ --config $ALBERT_BASE_DIR/albert_config.json \ --pytorch_dump_output $ALBERT_BASE_DIR/pytorch_model.bin ``` - Or directly using the convert_albert_original_tf_checkpoint_to_pytorch.py script ``` python3 compatibility.py --tf_checkpoint_path $ALBERT_BASE_DIR/model.ckpt-best --pytorch_dump_path $ALBERT_BASE_DIR/pytorch_model.bin --albert_config_file $ALBERT_BASE_DIR/albert_config.json ``` #### The error message : ``` Traceback (most recent call last): File "/path/transformers-cli", line 11, in <module> sys.exit(main()) File "/path/transformers/commands/transformers_cli.py", line 55, in main service.run() File "/path/transformers/commands/convert.py", line 94, in run convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output) File "/path/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_albert(model, config, tf_checkpoint_path) File "/path/transformers/models/albert/modeling_albert.py", line 164, in load_tf_weights_in_albert pointer = getattr(pointer, "bias") File "/path/torch/nn/modules/module.py", line 1207, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'AlbertEmbeddings' object has no attribute 'bias' ``` #### Code causing this error Inside the load_tf_weights_in_albert function from the modeling_albert.py file. More precisely, it's the part with ```scope_names[0] == "gamma"``` and ```scope_names[0] == "beta"``` : ``` pointer = model for m_name in name: if re.fullmatch(r"[A-Za-z]+_\d+", m_name): scope_names = re.split(r"_(\d+)", m_name) else: scope_names = [m_name] if scope_names[0] == "kernel" or scope_names[0] == "gamma": pointer = getattr(pointer, "weight") elif scope_names[0] == "output_bias" or scope_names[0] == "beta": pointer = getattr(pointer, "bias") elif scope_names[0] == "output_weights": pointer = getattr(pointer, "weight") elif scope_names[0] == "squad": pointer = getattr(pointer, "classifier") else: try: pointer = getattr(pointer, scope_names[0]) except AttributeError: logger.info(f"Skipping {'/'.join(name)}") continue if len(scope_names) >= 2: num = int(scope_names[1]) pointer = pointer[num] ``` #### What I suspect is happening : In this part of the code, a newly instantiated pytorch ```AlbertForPreTraining``` model (instantiated inside convert_tf_checkpoint_to_pytorch.py) is being filled with tensorflow variables' arrays. In order to achieve this, tensorflow variables are read, their names modified and corresponding arrays are copied to similar pytorch variables. In order to fill the correct pytorch variable/attribute, a pointer is moved to the corresponding element according to the variable name. This error occurs when a tensorflow variable contains either ```beta``` or ```gamma``` in its name (example of variable name: albert/embeddings/layer_normalization/beta). Because, in those cases, the class/object that the pointer is representing doesn't contain any ```bias``` or ```weight``` attribute, an error results when the ```getattr``` function tries to retrieve them. This seems to happen with every variable name corresponding to a normalization layer. #### Example of my reasoning : The current variable name is ```albert/embeddings/layer_normalization/beta```. It was split on ```/``` and we're now on the ```beta``` substring. ```pointer``` is currently pointing to the ```AlbertEmbeddings``` object. We reach the condition ```if scope_names[0] == "output_bias" or scope_names[0] == "beta"```. The ```getattr``` function is trying to retrieve ```bias``` from ```AlbertEmbeddings``` but there is no corresponding attribute, resulting in the displayed error. #### What should ```pointer``` retrieve inside the ```AlbertForPreTraining``` architecture when meeting a ```gamma``` or ```beta``` ? The normalization layer's weight and bias ? ### Expected behavior Obtaining a pytorch bin file from a tensorflow checkpoint, without errors occurring in the process. #### Note : I couldn't find any recent or opened issues on this subject, but similar closed ones are #2006 and #3779
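A toy sketch of one possible adaptation hinted at by the question above — redirecting the pointer onto the module's `LayerNorm` submodule before mapping `gamma`/`beta` to `weight`/`bias`. The class below is a stand-in for `AlbertEmbeddings`, not the real conversion code, and the name remap is an assumption about this particular checkpoint's variable naming:

```python
import torch.nn as nn

# Toy stand-in: in the PyTorch model the norm lives in a `LayerNorm`
# submodule, so a TF name like `albert/embeddings/layer_normalization/beta`
# has to be redirected there before `beta` can be mapped to `bias`.
class Embeddings(nn.Module):
    def __init__(self):
        super().__init__()
        self.LayerNorm = nn.LayerNorm(8)

pointer = Embeddings()
for m_name in ["layer_normalization", "beta"]:
    if m_name == "layer_normalization":
        pointer = getattr(pointer, "LayerNorm")  # remap the TF name (assumed)
    elif m_name == "beta":
        pointer = getattr(pointer, "bias")
print(pointer.shape)  # torch.Size([8])
```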
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22206/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22205
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22205/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22205/comments
https://api.github.com/repos/huggingface/transformers/issues/22205/events
https://github.com/huggingface/transformers/pull/22205
1,627,465,305
PR_kwDOCUB6oc5MNYsI
22,205
Depth estimation task guide
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "PR with images: https://huggingface.co/datasets/huggingface/documentation-images/discussions/64", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,699
1,679
CONTRIBUTOR
null
This PR adds a zero-shot depth estimation task guide that covers inference both with a pipeline and manually, for monocular depth estimation as supported by DPT and GLPN.
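For reference, a minimal sketch of the pipeline route the guide covers; the checkpoint name and the image URL are illustrative assumptions, not necessarily the ones used in the guide.

```python
from transformers import pipeline

# Monocular depth estimation via the pipeline API; checkpoint and image URL
# are assumptions for illustration.
depth_estimator = pipeline(task="depth-estimation", model="Intel/dpt-large")
outputs = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")
outputs["depth"].save("depth.png")  # PIL image with the predicted depth map
```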
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22205/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22205", "html_url": "https://github.com/huggingface/transformers/pull/22205", "diff_url": "https://github.com/huggingface/transformers/pull/22205.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22205.patch", "merged_at": 1679056583000 }
https://api.github.com/repos/huggingface/transformers/issues/22204
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22204/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22204/comments
https://api.github.com/repos/huggingface/transformers/issues/22204/events
https://github.com/huggingface/transformers/pull/22204
1,627,368,212
PR_kwDOCUB6oc5MNDtL
22,204
🔥py38 + torch 2 🔥🔥🔥🚀
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The whole suite of tests is run and passed in this [run](https://app.circleci.com/pipelines/github/huggingface/transformers/60009/workflows/73d754f8-017d-458f-8dc2-c6166d30e1de)" ]
1,678
1,679
1,679
COLLABORATOR
null
# What does this PR do? Title is all we need. There is one line in `setup.py` that I don't know whether I need to change, and if so, how: ```python3 deps["importlib_metadata"] + ";python_version<'3.8'", # importlib_metadata for Python versions that don't have it ```
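For context, a sketch of why that line likely becomes unnecessary once Python 3.8 is the floor; this is an assumption about the intended change, not the merged diff.

```python
# With Python >= 3.8 as the minimum, importlib.metadata is in the standard
# library, so the backport pinned with ";python_version<'3.8'" can likely be
# dropped entirely.
import importlib.metadata as importlib_metadata

print(importlib_metadata.version("pip"))  # works on any Python >= 3.8
```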
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22204/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22204", "html_url": "https://github.com/huggingface/transformers/pull/22204", "diff_url": "https://github.com/huggingface/transformers/pull/22204.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22204.patch", "merged_at": 1679003963000 }
https://api.github.com/repos/huggingface/transformers/issues/22203
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22203/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22203/comments
https://api.github.com/repos/huggingface/transformers/issues/22203/events
https://github.com/huggingface/transformers/issues/22203
1,627,328,527
I_kwDOCUB6oc5g_xAP
22,203
GenerationConfig argument for Seq2SeqTrainer / Seq2SeqTrainingArgument
{ "login": "Natooz", "id": 56734983, "node_id": "MDQ6VXNlcjU2NzM0OTgz", "avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Natooz", "html_url": "https://github.com/Natooz", "followers_url": "https://api.github.com/users/Natooz/followers", "following_url": "https://api.github.com/users/Natooz/following{/other_user}", "gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Natooz/subscriptions", "organizations_url": "https://api.github.com/users/Natooz/orgs", "repos_url": "https://api.github.com/users/Natooz/repos", "events_url": "https://api.github.com/users/Natooz/events{/privacy}", "received_events_url": "https://api.github.com/users/Natooz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm not sure we can have a `generation_config` directly in the `Seq2SeqTrainingArguments` as it wouldn't work with the CLI. But maybe we can have a `generation_config_file` argument instead? Also yes to the `model.generation_config` way being better documented!", "Good point (CLI)!\r\nIn that can a json file could work, and alternatively the argument could maybe accept both paths to this file and a `GenerationConfig` object ?", "Yes, that works for me!", "@Natooz that sounds great! Would you like to have a go at it?", "Hey @gante, yep, just clearing my backlog, it should be done by the week-end", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "(completed)", "Hello! It's a great idea to be able to pass GenerationConfig to the Seq2SeqTrainer. However, it would be great to have a matching `GenerationArguments` class that allows parsing.\r\n\r\nRight now I see that the documentation of `GenerationConfig` describes the types of the attributes in words but these are not defined in the code. Am I missing where these are defined?", "Hey @artidoro 👋 \r\n\r\nI'm not sure if I got your question right -- were you asking for support to pass generation arguments directly to the trainer (e.g. `--top-k 50`), as opposed to the solution added as a result of this issue (passing a whole generation config file)?", "Hi, I agree with @artidoro and would also love a `GenerationArguments` class that can be passed along with `Seq2SeqTrainingArgument` to `HfArgumentParser`. @gante that is also how I interpret this request.", "I agree that it may make things easier for the users :) However, due to our limited bandwidth, an important question must be asked: is there any form of parameterization that you can't do through the `generation_config` argument?", "Actually I ended up having issues with `GenerationConfig`* so I just pass the arguments directly to `generate(**config)`. Only reason it would be nice to pass generation parameters directly to the command line is if I want to sweep easily over parameters like beam size/temperature. But no, there is no functionality loss that I can see.\r\n\r\n*Issues with default arguments vs set arguments cancelling each other out" ]
1,678
1,697
1,681
CONTRIBUTOR
null
### Feature request 👋 The request is for a way to pass a `GenerationConfig` to a `Seq2SeqTrainer` (through `Seq2SeqTrainingArguments`). ### Motivation At the time of writing, `Seq2SeqTrainer` only supports a few arguments for generation: `max_length` / `max_new_tokens` and `num_beams`. Being able to pass a `GenerationConfig` to generate would give users much more control over the prediction step. I noticed that this is already possible, as in `generate`, if no `GenerationConfig` arg is given, it is [retrieved from `self.generation_config`](https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1195), [itself deduced from `model.config`](https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/modeling_utils.py#L1036) at model init (but generation args in `PretrainedConfig` are legacy / will be removed, right?). Currently, overriding the `model.generation_config` attribute leads to the desired result; however, this does not seem to be documented. ### Your contribution I don't know if this has been discussed. Do you think this should be added? If not, maybe edit the documentation to clarify the `model.generation_config` way? I can help in both cases. cc @sgugger @gante
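A minimal sketch of the undocumented workaround the request refers to, overriding `model.generation_config` before handing the model to `Seq2SeqTrainer`; the checkpoint and the chosen generation parameters are arbitrary illustrations.

```python
from transformers import AutoModelForSeq2SeqLM, GenerationConfig

# Sketch of the model.generation_config workaround; checkpoint and values are
# illustrative only.
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.generation_config = GenerationConfig(max_new_tokens=64, num_beams=4)
# Seq2SeqTrainer(model=model, ...) will then pick these defaults up via
# model.generate() when predict_with_generate=True.
```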
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22203/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22202
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22202/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22202/comments
https://api.github.com/repos/huggingface/transformers/issues/22202/events
https://github.com/huggingface/transformers/pull/22202
1,627,254,490
PR_kwDOCUB6oc5MMquK
22,202
Update tiny model creation script
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22202). All of your documentation changes will be reflected on that endpoint." ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? - Update `UNCONVERTIBLE_MODEL_ARCHITECTURES` with a few recent models: they don't have a processor class (or it is not included in the `XXX_MAPPING_NAMES`). - Make the detection of the model test class more robust.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22202/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22202", "html_url": "https://github.com/huggingface/transformers/pull/22202", "diff_url": "https://github.com/huggingface/transformers/pull/22202.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22202.patch", "merged_at": 1678972919000 }
https://api.github.com/repos/huggingface/transformers/issues/22201
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22201/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22201/comments
https://api.github.com/repos/huggingface/transformers/issues/22201/events
https://github.com/huggingface/transformers/issues/22201
1,627,149,681
I_kwDOCUB6oc5g_FVx
22,201
not related summary
{ "login": "aylix", "id": 54117566, "node_id": "MDQ6VXNlcjU0MTE3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/54117566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aylix", "html_url": "https://github.com/aylix", "followers_url": "https://api.github.com/users/aylix/followers", "following_url": "https://api.github.com/users/aylix/following{/other_user}", "gists_url": "https://api.github.com/users/aylix/gists{/gist_id}", "starred_url": "https://api.github.com/users/aylix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aylix/subscriptions", "organizations_url": "https://api.github.com/users/aylix/orgs", "repos_url": "https://api.github.com/users/aylix/repos", "events_url": "https://api.github.com/users/aylix/events{/privacy}", "received_events_url": "https://api.github.com/users/aylix/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @aylix, could you please follow the issue template, giving details about the model, your environment, a reproducible snippet, and the expected behaviour? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
I passed a text about the food industry to the model, and the summary produced was about atoms and totally unrelated to the input.
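Since the report does not name the model, here is a hedged sketch of what the minimal reproduction requested in the comments could look like; the default summarization checkpoint is an assumption.

```python
from transformers import pipeline

# Hypothetical minimal reproduction; the checkpoint is the pipeline default,
# as the original report does not specify which model was used.
summarizer = pipeline("summarization")
text = "A long passage about the food industry goes here ..."
print(summarizer(text, max_length=60, min_length=10))
```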
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22201/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22200
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22200/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22200/comments
https://api.github.com/repos/huggingface/transformers/issues/22200/events
https://github.com/huggingface/transformers/issues/22200
1,627,099,559
I_kwDOCUB6oc5g-5Gn
22,200
trainer.push_to_hub(**kwargs) requires "git pull" first
{ "login": "Yiiii19", "id": 28276176, "node_id": "MDQ6VXNlcjI4Mjc2MTc2", "avatar_url": "https://avatars.githubusercontent.com/u/28276176?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yiiii19", "html_url": "https://github.com/Yiiii19", "followers_url": "https://api.github.com/users/Yiiii19/followers", "following_url": "https://api.github.com/users/Yiiii19/following{/other_user}", "gists_url": "https://api.github.com/users/Yiiii19/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yiiii19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yiiii19/subscriptions", "organizations_url": "https://api.github.com/users/Yiiii19/orgs", "repos_url": "https://api.github.com/users/Yiiii19/repos", "events_url": "https://api.github.com/users/Yiiii19/events{/privacy}", "received_events_url": "https://api.github.com/users/Yiiii19/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You either need to `--overwrite_output_dir` or make sure the `output_dir` you are using is a local clone of your repo that is up to date, yes.", "Thanks to your fast reply. How could I change the code? Cause I donot want to pull and push in the terminal manually, want to \r\n do \"pull\" firstly then \"push\" in the code example provided by the link.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info Hi, I am using Segformer, following the tutorial https://huggingface.co/blog/fine-tune-segformer. Every time after training, a conflict error comes up after executing the code "trainer.push_to_hub(**kwargs)". Error messages: ! [rejected] main -> main (fetch first) error: failed to push some refs to 'https://huggingface.co/yiming19/segformer-b0-finetuned-segments-construction-1' hint: Updates were rejected because the remote contains work that you do hint: not have locally. This is usually caused by another repository pushing hint: to the same ref. You may want to first integrate the remote changes hint: (e.g., 'git pull ...') before pushing again. hint: See the 'Note about fast-forwards' in 'git push --help' for details. I can use git pull and git push manually to push the model, but why does this error come up, and can git pull be executed in the code before pushing the model? @sgugger Thanks. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction kwargs = { "tags": ["vision", "image-segmentation"], "finetuned_from": pretrained_model_name, "dataset": hf_dataset_identifier, } feature_extractor.push_to_hub(hub_model_id) trainer.push_to_hub(**kwargs) ### Expected behavior I just follow the tutorial https://huggingface.co/blog/fine-tune-segformer. No error should come up.
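One way to avoid the rejected push, sketched with the `huggingface_hub.Repository` helper: pull first so the local clone in the output directory is up to date. The variable names (`training_args`, `trainer`, `kwargs`) are assumptions based on the tutorial's setup.

```python
from huggingface_hub import Repository

# Hedged sketch: sync the local clone before pushing; assumes
# training_args.output_dir is a clone of the Hub repo, as in the tutorial.
repo = Repository(local_dir=training_args.output_dir)
repo.git_pull()                # integrate remote changes first
trainer.push_to_hub(**kwargs)  # now the push should fast-forward
```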
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22200/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22199
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22199/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22199/comments
https://api.github.com/repos/huggingface/transformers/issues/22199/events
https://github.com/huggingface/transformers/pull/22199
1,627,037,163
PR_kwDOCUB6oc5ML6kq
22,199
Fix typo in Align docs
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
# What does this PR do? Fixes a broken link to the blog post in the ALIGN docs. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22199/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22199", "html_url": "https://github.com/huggingface/transformers/pull/22199", "diff_url": "https://github.com/huggingface/transformers/pull/22199.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22199.patch", "merged_at": 1678963308000 }
https://api.github.com/repos/huggingface/transformers/issues/22198
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22198/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22198/comments
https://api.github.com/repos/huggingface/transformers/issues/22198/events
https://github.com/huggingface/transformers/issues/22198
1,627,036,958
I_kwDOCUB6oc5g-p0e
22,198
Import "transformers" could not be resolved
{ "login": "givik", "id": 2458760, "node_id": "MDQ6VXNlcjI0NTg3NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2458760?v=4", "gravatar_id": "", "url": "https://api.github.com/users/givik", "html_url": "https://github.com/givik", "followers_url": "https://api.github.com/users/givik/followers", "following_url": "https://api.github.com/users/givik/following{/other_user}", "gists_url": "https://api.github.com/users/givik/gists{/gist_id}", "starred_url": "https://api.github.com/users/givik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/givik/subscriptions", "organizations_url": "https://api.github.com/users/givik/orgs", "repos_url": "https://api.github.com/users/givik/repos", "events_url": "https://api.github.com/users/givik/events{/privacy}", "received_events_url": "https://api.github.com/users/givik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @givik, thanks for raising this issue. \r\n\r\nThis isn't related to transformers - it's to do with vscode and the environment. \r\n\r\nThe error shown is coming from PyLance, and is indicating that the environment it's looking in doesn't have `transformers` installed. Please make sure transformers [is installed](https://huggingface.co/docs/transformers/installation) and PyLance is looking in the [correct place](https://code.visualstudio.com/docs/python/environments).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "i get the same issue", "tyu", "cringe" ]
1,678
1,698
1,682
NONE
null
### System Info **I have tried different Python versions, 3.7 and 3.11** ![Screenshot 2023-03-16 at 13 18 32](https://user-images.githubusercontent.com/2458760/225571262-7deb343d-9d76-4d14-8a24-1c75da0276fa.png) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import pipeline classifier = pipeline("sentiment-analysis") res = classifier("I've been waiting for a Hugging Face course my whole life.") print(res) ### Expected behavior The import should resolve and the script should work.
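A quick way to confirm that Pylance and the script share the same interpreter, in line with the environment check suggested in the comments; this is a generic diagnostic, not code from the report.

```python
import sys

# Print the interpreter path the script runs under; transformers must be
# installed into this same environment for Pylance to resolve the import.
print(sys.executable)

import transformers
print(transformers.__version__)
```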
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22198/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22197
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22197/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22197/comments
https://api.github.com/repos/huggingface/transformers/issues/22197/events
https://github.com/huggingface/transformers/issues/22197
1,626,889,902
I_kwDOCUB6oc5g-F6u
22,197
[Pytorch 2.0] Cannot load `BERT` model `No module named 'torch._six'`
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Error was thrown since i had `deepspeed` installed which uses/used `torch._six` see: https://github.com/microsoft/DeepSpeed/pull/2863", "Is DeepSpeed part of transformers 4.27.1? I get the same error message when using torch>=2.0 and transformers, but I am not installing deepspeed myself, at least not knowingly. How did you uninstall it or turn it off?\r\n\r\nI am not using the Trainer or Accelerate directly. Instead I use:\r\n```\r\n... = AutoTokenizer.from_pretrained(tokenizer_variant,do_lower_case=True)\r\n...\r\n... = AutoModelForSequenceClassification.from_pretrained(variant, **config_params)\r\n\r\n```\r\n\r\n\r\n```\r\nSuccessfully installed absl-py-1.4.0 altair-4.2.2 cachetools-5.3.0 entrypoints-0.4 filelock-3.10.0 google-auth-2.16.2 google-auth-oauthlib-0.4.6 grpcio-1.51.3 huggingface-hub-0.13.2 jsonlines-3.1.0 jsonschema-4.17.3 lit-15.0.7 markdown-3.4.1 mpmath-1.3.0 nltk-3.8.1 nvidia-cublas-cu11-11.10.3.66 nvidia-cuda-cupti-cu11-11.7.101 nvidia-cuda-nvrtc-cu11-11.7.99 nvidia-cuda-runtime-cu11-11.7.99 nvidia-cudnn-cu11-8.5.0.96 nvidia-cufft-cu11-10.9.0.58 nvidia-curand-cu11-10.2.10.91 nvidia-cusolver-cu11-11.4.0.1 nvidia-cusparse-cu11-11.7.4.91 nvidia-nccl-cu11-2.14.3 nvidia-nvtx-cu11-11.7.91 oauthlib-3.2.2 pyasn1-modules-0.2.8 pyrsistent-0.19.3 regex-2022.10.31 requests-oauthlib-1.3.1 sympy-1.11.1 tensorboard-2.12.0 tensorboard-data-server-0.7.0 tensorboard-plugin-wit-1.8.1 tokenizers-0.13.2 torch-2.0.0 transformers-4.27.1 triton-2.0.0\r\n```\r\n\r\n```\r\n2023-03-17 18:37:15,836 - models.transformers - INFO - Loader AutoModel from pre-trained.\r\n--\r\n2023-03-17 18:37:16,156 - models.transformers - ERROR - Giving up on: Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback):\r\nNo module named 'torch._six'\r\n2023-03-17 18:37:16,156 - models.transformers - ERROR - Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback):\r\nNo module named 'torch._six'\r\nTraceback (most recent call last): File \"/opt/conda/lib/python3.9/site-packages/transformers/utils/import_utils.py\", line 1126, in _get_module return importlib.import_module(\".\" + module_name, self.__name__) File \"/opt/conda/lib/python3.9/importlib/__init__.py\", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed File \"/opt/conda/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py\", line 42, in <module> from ...modeling_utils import PreTrainedModel File \"/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py\", line 83, in <module> from accelerate import __version__ as accelerate_version File \"/opt/conda/lib/python3.9/site-packages/accelerate/__init__.py\", line 7, in <module> from .accelerator import Accelerator File \"/opt/conda/lib/python3.9/site-packages/accelerate/accelerator.py\", line 29, in <module> from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state File \"/opt/conda/lib/python3.9/site-packages/accelerate/checkpointing.py\", line 24, 
in <module> from .utils import ( File \"/opt/conda/lib/python3.9/site-packages/accelerate/utils/__init__.py\", line 124, in <module> from .other import ( File \"/opt/conda/lib/python3.9/site-packages/accelerate/utils/other.py\", line 27, in <module> from deepspeed import DeepSpeedEngine File \"/opt/conda/lib/python3.9/site-packages/deepspeed/__init__.py\", line 16, in <module> from .runtime.engine import DeepSpeedEngine, DeepSpeedOptimizerCallable, DeepSpeedSchedulerCallable File \"/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/engine.py\", line 25, in <module> from deepspeed.runtime.utils import see_memory_usage, get_ma_status, DummyOptim File \"/opt/conda/lib/python3.9/site-packages/deepspeed/runtime/utils.py\", line 18, in <module> from torch._six import inf\r\nModuleNotFoundError: No module named 'torch._six'\r\n\r\n```", "I just upgraded deepspeed to the 0.8.2 and it worked\r\n\r\n```\r\npip install deepspeed --upgrade\r\n```", "I don't think I need deepspeed for what I wrote above, but adding deepspeed>=0.8.2 to requirements.txt works for me as well. Thanks @maloyan!", "hey!\r\nI have same error but I don't need to use deepspeed, any idea how to solve it?\r\nthx!", "for me this was because of apex, had to do `pip uninstall -y apex` a couple of times since I wasn't using it anyway", "It worked for me.\r\n\r\n> for me this was because of apex, had to do `pip uninstall -y apex` a couple of times since I wasn't using it anyway\r\n\r\n", "> for me this was because of apex, had to do `pip uninstall -y apex` a couple of times since I wasn't using it anyway\r\n\r\nThank you so much, it worked for me", "> for me this was because of apex, had to do `pip uninstall -y apex` a couple of times since I wasn't using it anyway\r\n\r\nThanks, but I thought we need apex for mix precision training?" ]
1,678
1,694
1,678
MEMBER
null
### System Info - `transformers` version: 4.27.1 - Platform: Linux-5.15.0-1031-aws-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduction 1. `!pip install "torch>=2.0" --extra-index-url https://download.pytorch.org/whl/cu117 --upgrade --quiet` 2. `!pip install "transformers==4.27.1" "datasets==2.9.0" "accelerate==0.17.1" "evaluate==0.4.0" tensorboard scikit-learn --upgrade --quiet` 3. load model ```python from transformers import AutoModelForSequenceClassification # Model id to load the tokenizer model_id = "bert-base-uncased" model = AutoModelForSequenceClassification.from_pretrained( model_id, num_labels=2 ) ``` ### Expected behavior Can load model, below is the error ```bash Traceback (most recent call last): File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1126, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/opt/conda/envs/pytorch/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 42, in <module> from ...modeling_utils import PreTrainedModel File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/modeling_utils.py", line 37, in <module> from .deepspeed import deepspeed_config, is_deepspeed_zero3_enabled File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/deepspeed.py", line 38, in <module> from accelerate.utils.deepspeed import HfDeepSpeedConfig as DeepSpeedConfig File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/__init__.py", line 3, in <module> from .accelerator import Accelerator File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/accelerator.py", line 30, in <module> from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/checkpointing.py", line 24, in <module> from .utils import ( File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/utils/__init__.py", line 105, in <module> from .launch import ( File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/utils/launch.py", line 28, in <module> from ..utils.other import merge_dicts File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/accelerate/utils/other.py", line 28, in <module> from deepspeed import DeepSpeedEngine File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/__init__.py", line 15, in <module> from . 
import module_inject File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/module_inject/__init__.py", line 1, in <module> from .replace_module import replace_transformer_layer, revert_transformer_layer, ReplaceWithTensorSlicing, GroupQuantizer, generic_injection File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/module_inject/replace_module.py", line 801, in <module> from ..pipe import PipelineModule File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/pipe/__init__.py", line 1, in <module> from ..runtime.pipe import PipelineModule, LayerSpec, TiedLayerSpec File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/runtime/pipe/__init__.py", line 1, in <module> from .module import PipelineModule, LayerSpec, TiedLayerSpec File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/runtime/pipe/module.py", line 13, in <module> from .. import utils as ds_utils File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/deepspeed/runtime/utils.py", line 19, in <module> from torch._six import inf ModuleNotFoundError: No module named 'torch._six' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 470, in from_pretrained model_class = _get_model_class(config, cls._model_mapping) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 360, in _get_model_class supported_models = model_mapping[type(config)] File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 602, in __getitem__ return self._load_attr_from_module(model_type, model_name) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 616, in _load_attr_from_module return getattribute_from_module(self._modules[module_name], attr) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 561, in getattribute_from_module if hasattr(module, attr): File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1116, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1128, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback): No module named 'torch._six' ```
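A small diagnostic sketch for this failure mode: the traceback shows an installed package (here DeepSpeed, elsewhere apex) importing the removed `torch._six` module; the version bound comes from the comments above.

```python
import importlib.util

# Hedged sketch: flag installed packages known to have imported torch._six in
# older releases; upgrading (e.g. deepspeed >= 0.8.2, per the comments) or
# uninstalling them if unused resolves the import error with torch 2.0.
for pkg in ("deepspeed", "apex"):
    if importlib.util.find_spec(pkg) is not None:
        print(f"{pkg} is installed - check that it supports torch 2.0")
```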
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22197/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22197/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22196
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22196/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22196/comments
https://api.github.com/repos/huggingface/transformers/issues/22196/events
https://github.com/huggingface/transformers/pull/22196
1,626,875,738
PR_kwDOCUB6oc5MLX1M
22,196
fix AutoTP in deepspeed could not work for bloom
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "should work with https://github.com/microsoft/DeepSpeed/pull/3035", "@sgugger please help review\r\n", "@yao-matrix", "_The documentation is not available anymore as the PR was closed or merged._", "Actually, just checked the modeling file and this function is only used in this class, so it would be cleaner to just make it a method. Could you update your PR in that direction?", "@sgugger I see code like \"from transformers.models.bloom.modeling_bloom import build_alibi_tensor\" in petals, if we make this a method, the petals code needs to be changed as well. may happen to other repo that use bloom as well.", "Ok so let's keep it as a function in that module. I'd still prefer a real method (that directly returns the result of the function) to setting a function attribute like this if you don't mind.", "@sgugger update the PR.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22196). All of your documentation changes will be reflected on that endpoint." ]
1,678
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Fixes # (issue) fix AutoTP in deepspeed could not work for bloom ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - deepspeed: HF Trainer: @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22196/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22196", "html_url": "https://github.com/huggingface/transformers/pull/22196", "diff_url": "https://github.com/huggingface/transformers/pull/22196.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22196.patch", "merged_at": 1679059698000 }
https://api.github.com/repos/huggingface/transformers/issues/22195
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22195/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22195/comments
https://api.github.com/repos/huggingface/transformers/issues/22195/events
https://github.com/huggingface/transformers/pull/22195
1,626,706,249
PR_kwDOCUB6oc5MKzRN
22,195
Update expected values in `MgpstrModelIntegrationTest`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? [CI](https://github.com/huggingface/transformers/actions/runs/4422120632/jobs/7753698495) failed: the expected values provided by the contributor didn't match the ones produced on the CI runner, and we just need to update them.
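For context, a runnable sketch of the pattern such integration tests follow; the tensors below are made-up values, not the ones from `MgpstrModelIntegrationTest`.

```python
import torch

# Integration tests pin expected outputs and compare within a tolerance, so a
# different runner/hardware stack can require re-pinning the values.
fresh = torch.tensor([1.2351, -0.5402, 3.1416])   # produced on the CI runner
pinned = torch.tensor([1.2350, -0.5401, 3.1417])  # previously committed values
torch.testing.assert_close(fresh, pinned, atol=1e-3, rtol=1e-3)
```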
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22195/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22195", "html_url": "https://github.com/huggingface/transformers/pull/22195", "diff_url": "https://github.com/huggingface/transformers/pull/22195.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22195.patch", "merged_at": 1678967332000 }
https://api.github.com/repos/huggingface/transformers/issues/22194
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22194/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22194/comments
https://api.github.com/repos/huggingface/transformers/issues/22194/events
https://github.com/huggingface/transformers/pull/22194
1,626,668,241
PR_kwDOCUB6oc5MKrJO
22,194
Fix DeepSpeed CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "ouch, i checked it took tensor-rt 1 month to release a version supporting pt-1.13 after the latter was released. \r\n\r\nso I won't expect any coming updates quickly.\r\n\r\ndo we need tensor-rt for anything?\r\n\r\nThank you for fixing this, @ydshieh! Glad to have you on top of things as always!", "Hi @stas00 It's probably my bad.\r\n\r\nIn #20758, it (the one shipped with the base image) was uninstalled (same reason as this PR) \r\nIn #20758 (next day), I found a way to install it - and just updated the docker file without thinking if we need it.\r\n\r\nA quick search gives me\r\n```python\r\ndef is_torch_tensorrt_fx_available():\r\n if importlib.util.find_spec(\"torch_tensorrt\") is None:\r\n return False\r\n return importlib.util.find_spec(\"torch_tensorrt.fx\") is not None\r\n```\r\nBut I think it's irrelevant to DeepSpeed CI job.\r\n\r\n**I can actually remove these 2 lines**\r\n```bash\r\n# This installation instruction will uninstall torch 2.0.0\r\n# TODO: uncomment and update the following line once `torch-tensorrt` is ready for `torch 2.0.0`\r\n```\r\n~~**if you are also OK with this.**~~\r\n\r\nWell, let's remove the installation line, as it was never there originally - it was just shipped with the base image \r\n\r\n(BTW, thank you for checking the release time of `torch-tensorrt` ❤️ )" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? For the past 2 days, the daily CI has run with torch 2.0.0. In the DeepSpeed CI job, there is an issue regarding `torch-tensorrt` (currently `v1.3.0`). The installation was already disabled in #22135, but there was a version shipped with the base image, and I forgot to uninstall it from our docker image during building. Remark: the failure is our `undefined symbol` friend ```python E ImportError: /opt/conda/lib/python3.8/site-packages/torch_tensorrt/lib/libtorchtrt.so: undefined symbol: _ZN2at11show_configB5cxx11Ev ```
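As quoted in the comments above, `transformers` only probes for the package without importing it, which is why a wheel built against an older torch ABI must be uninstalled from the image rather than merely left unused; a sketch of that check, with a print added for illustration:

```python
import importlib.util

def is_torch_tensorrt_fx_available():
    # Checks the package spec without importing it; the ABI mismatch
    # (undefined symbol) only surfaces on a real import.
    if importlib.util.find_spec("torch_tensorrt") is None:
        return False
    return importlib.util.find_spec("torch_tensorrt.fx") is not None

print(is_torch_tensorrt_fx_available())
```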
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22194/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22194/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22194", "html_url": "https://github.com/huggingface/transformers/pull/22194", "diff_url": "https://github.com/huggingface/transformers/pull/22194.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22194.patch", "merged_at": 1678942360000 }
https://api.github.com/repos/huggingface/transformers/issues/22193
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22193/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22193/comments
https://api.github.com/repos/huggingface/transformers/issues/22193/events
https://github.com/huggingface/transformers/pull/22193
1,626,632,072
PR_kwDOCUB6oc5MKjNz
22,193
[trainer] param count for deepspeed zero3
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "An explanation is needed here. \r\n\r\nThe Deepspeed team had to invent their own tensor substitute since 2 years ago nothing of a kind existed in pytorch. They had to replace tensors with placeholders to be able to support sharded tensors.\r\n\r\nThe meta tensors were added just recently so they are looking at possibly switching to those.\r\n\r\nThe API I used in this PR is not public per-se. And the \"clean\" way would be to gather tensors and then get their normal `t.numel()` - but this is extremely wasteful and expensive memory and time-wise. So I hacked it to get the internal equivalent to make it almost instant.\r\n\r\nI'm not planning on leaving it this way and asking for deepspeed to provide an efficient method to return the sizes w/o me using a non-public API.\r\n\r\nThere are many other hidden issues wrt this tensor substitution that impacts only ZeRO stage 3 https://github.com/microsoft/DeepSpeed/issues/2650 - and yesterday I have discovered at least one bug in our examples because of that, while debugging the user report that lead to this PR. All examples resize the embedding under zero3 because their check if the vocab is larger than embedding size always returns True, since the embed size is reported to be of size 0, because it's not gathered :(\r\n\r\nI'm working on ensuring that the Deepspeed addresses this issue because it's subtle and very problematic. \r\n\r\nPlease let me know if you're OK with merging this now that you know more details. I can also easily recode it to gather tensors first, but it'd be very inefficient." ]
1,678
1,679
1,679
CONTRIBUTOR
null
As reported in https://github.com/huggingface/transformers/issues/22179 the trainer code doesn't handle sharded models correctly when reporting the "Number of trainable parameters" - I'm not sure if FSDP models have the same issue. This PR fixes the situation for DeepSpeed ZeRO-3, which otherwise reported a count of 0. Fixes: https://github.com/huggingface/transformers/issues/22179
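A minimal sketch of the counting approach this PR describes; `ds_numel` is DeepSpeed's internal placeholder attribute mentioned in the discussion (a non-public API), so treat this as illustrative rather than the exact merged code.

```python
def get_trainable_param_count(model):
    # Under ZeRO-3 a sharded parameter's numel() is 0; DeepSpeed exposes the
    # true size via the non-public ds_numel attribute, used here as a fallback.
    return sum(
        p.ds_numel if hasattr(p, "ds_numel") else p.numel()
        for p in model.parameters()
        if p.requires_grad
    )
```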
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22193/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22193", "html_url": "https://github.com/huggingface/transformers/pull/22193", "diff_url": "https://github.com/huggingface/transformers/pull/22193.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22193.patch", "merged_at": 1679076176000 }