column              dtype          values / range
url                 stringlengths  62 – 66
repository_url      stringclasses  1 value
labels_url          stringlengths  76 – 80
comments_url        stringlengths  71 – 75
events_url          stringlengths  69 – 73
html_url            stringlengths  50 – 56
id                  int64          377M – 2.15B
node_id             stringlengths  18 – 32
number              int64          1 – 29.2k
title               stringlengths  1 – 487
user                dict
labels              list
state               stringclasses  2 values
locked              bool           2 classes
assignee            dict
assignees           list
comments            list
created_at          int64          1.54k – 1.71k
updated_at          int64          1.54k – 1.71k
closed_at           int64          1.54k – 1.71k
author_association  stringclasses  4 values
active_lock_reason  stringclasses  2 values
body                stringlengths  0 – 234k
reactions           dict
timeline_url        stringlengths  71 – 75
state_reason        stringclasses  3 values
draft               bool           2 classes
pull_request        dict
https://api.github.com/repos/huggingface/transformers/issues/20484
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20484/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20484/comments
https://api.github.com/repos/huggingface/transformers/issues/20484/events
https://github.com/huggingface/transformers/issues/20484
1,467,702,523
I_kwDOCUB6oc5Xe1z7
20,484
Error when getting the long-t5 model
{ "login": "Magicen0722", "id": 66616125, "node_id": "MDQ6VXNlcjY2NjE2MTI1", "avatar_url": "https://avatars.githubusercontent.com/u/66616125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Magicen0722", "html_url": "https://github.com/Magicen0722", "followers_url": "https://api.github.com/users/Magicen0722/followers", "following_url": "https://api.github.com/users/Magicen0722/following{/other_user}", "gists_url": "https://api.github.com/users/Magicen0722/gists{/gist_id}", "starred_url": "https://api.github.com/users/Magicen0722/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Magicen0722/subscriptions", "organizations_url": "https://api.github.com/users/Magicen0722/orgs", "repos_url": "https://api.github.com/users/Magicen0722/repos", "events_url": "https://api.github.com/users/Magicen0722/events{/privacy}", "received_events_url": "https://api.github.com/users/Magicen0722/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hi, what `transformers` version are you using?", "> Hi, what `transformers` version are you using?\r\n\r\nI am using `transformers==4.24.0`", "@ArthurZucker could you take a look here? ", "Not really able to reproduce the code for now, but will have a look! I also used `transformers==4.24.0` and the import worked. It was added 6 month ago see [here](https://github.com/huggingface/transformers/pull/16792). Also can confirm that `(\"longt5\", \"LongT5\"),` is [in the auto config](https://github.com/ArthurZucker/transformers/blob/main/src/transformers/models/auto/configuration_auto.py#L391).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### System Info

When I get the relevant pipeline or the corresponding model and tokenizer from this page, I get `KeyError: 'longt5'`.

### Who can help?

@patrickvonplaten

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

https://huggingface.co/google/long-t5-tglobal-base

My code:

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/long-t5-tglobal-base")
```

Full error output:

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<command-440662132347528> in <module>
      1 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
      2
----> 3 tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
      4 model = AutoModelForSeq2SeqLM.from_pretrained("google/long-t5-tglobal-base")

/databricks/python/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
    400     tokenizer_config = get_tokenizer_config("bert-base-uncased")
    401     # This model does not have a tokenizer config so the result will be an empty dict.
--> 402     tokenizer_config = get_tokenizer_config("xlm-roberta-base")
    403
    404     # Save a pretrained tokenizer locally and you can reload its config

/databricks/python/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    430         ("unispeech", "UniSpeech"),
    431         ("unispeech-sat", "UniSpeechSat"),
--> 432         ("van", "VAN"),
    433         ("videomae", "VideoMAE"),
    434         ("vilt", "ViLT"),

KeyError: 'longt5'
```

Any kind of help is appreciated!

### Expected behavior

Correct import of models and corresponding pipelines.
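The thread above resolves around version support: `longt5` was only added to the auto-config mapping in a later `transformers` release, so an older install raises a bare `KeyError` on the `model_type` string from `config.json`. As an illustration of that failure mode (a toy stand-in mapping, not the real `CONFIG_MAPPING` from `transformers`), a lookup with a friendlier error might look like:

```python
# Toy stand-in for the internal model_type -> config-class mapping.
# An older install simply lacks the "longt5" key, hence KeyError: 'longt5'.
CONFIG_MAPPING = {"t5": "T5Config", "bert": "BertConfig"}  # no "longt5" here

def resolve_config(model_type: str) -> str:
    """Map a config.json model_type to a config class name, with a clearer error."""
    try:
        return CONFIG_MAPPING[model_type]
    except KeyError:
        raise KeyError(
            f"{model_type!r} is not known to this transformers version; "
            "upgrading transformers should add it."
        ) from None

print(resolve_config("t5"))  # T5Config
```

In practice the fix is simply to upgrade `transformers` to a release that includes long-T5 support.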
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20484/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20483
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20483/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20483/comments
https://api.github.com/repos/huggingface/transformers/issues/20483/events
https://github.com/huggingface/transformers/pull/20483
1,467,695,021
PR_kwDOCUB6oc5D3lzF
20,483
[MaskFormer] Add support for ResNet backbone
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[ { "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger feel free to approve, I'm currently uploading all checkpoints to the hub :)" ]
1,669
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? This PR is part 3 of 3 of the big #20204 PR. This PR does 2 things: 1) it makes sure that ResNet is supported as backbone for MaskFormer, besides Swin. It leverages the `AutoBackbone` class for this. 2) <s> it makes sure that MaskFormer defaults to Swin as backbone, not relying on MaskFormerSwin, but just on plain Swin. For this, the argument `output_hidden_states_before_downsampling` is added to `SwinConfig`. </s> => `SwinBackbone` will be added in a separate PR To do: - [x] convert all remaining MaskFormer checkpoints
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20483/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20483", "html_url": "https://github.com/huggingface/transformers/pull/20483", "diff_url": "https://github.com/huggingface/transformers/pull/20483.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20483.patch", "merged_at": 1670402558000 }
https://api.github.com/repos/huggingface/transformers/issues/20482
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20482/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20482/comments
https://api.github.com/repos/huggingface/transformers/issues/20482/events
https://github.com/huggingface/transformers/issues/20482
1,467,690,897
I_kwDOCUB6oc5Xey-R
20,482
How to reproduce the machine translation experiments in Attention is all you need.
{ "login": "shizhediao", "id": 18120087, "node_id": "MDQ6VXNlcjE4MTIwMDg3", "avatar_url": "https://avatars.githubusercontent.com/u/18120087?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shizhediao", "html_url": "https://github.com/shizhediao", "followers_url": "https://api.github.com/users/shizhediao/followers", "following_url": "https://api.github.com/users/shizhediao/following{/other_user}", "gists_url": "https://api.github.com/users/shizhediao/gists{/gist_id}", "starred_url": "https://api.github.com/users/shizhediao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shizhediao/subscriptions", "organizations_url": "https://api.github.com/users/shizhediao/orgs", "repos_url": "https://api.github.com/users/shizhediao/repos", "events_url": "https://api.github.com/users/shizhediao/events{/privacy}", "received_events_url": "https://api.github.com/users/shizhediao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\nI am trying to reproduce the performance of transformer-base (from attention is all you need) on WMT14.\r\nI am using `FSMT` because I cannot find an implementation of the transformer.\r\nI was wondering which dataset and tokenizer are the best choices. \r\n1. `stas/wmt14-en-de-pre-processed` with `facebook/wmt19-en-de`\r\n2. `wmt14` with `facebook/wmt19-en-de`\r\nEspecially, I do not know which tokenizer should be used.\r\n\r\nThanks in advance if you could provide some suggestions!\r\n@patil-suraj @patrickvonplaten ", "Please use the [forums](https://discuss.huggingface.co/) to discuss such questions as we keep issues for bugs and feature requests only.", "> Please use the [forums](https://discuss.huggingface.co/) to discuss such questions as we keep issues for bugs and feature requests only.\r\n\r\nOK, sorry about this. " ]
1,669
1,669
1,669
NONE
null
### Feature request

I want to reproduce the experiments described in _Attention is all you need_: a Transformer-base model trained from scratch, with the same architecture as in the paper. In other words, I am looking for a Transformer-base model to train from scratch with HuggingFace.

### Motivation

Reproduce the Transformer experiments in machine translation. I found that there are many pre-trained models (e.g., T5, BART, MarianMT), but I would like to train a Transformer-base model from scratch to compare different optimizers during pre-training.

### Your contribution

I found a previous issue here, but I am not sure whether it is the right way to go: https://github.com/huggingface/transformers/issues/12386
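As a sanity check before training from scratch, the size of the Transformer-base configuration from the paper can be estimated with a few lines of arithmetic (weight matrices only; biases and layer norms are ignored, which is why the figure lands slightly below the roughly 65M parameters reported in the paper):

```python
# Transformer-base hyperparameters from "Attention Is All You Need".
d_model, d_ff, n_layers, vocab = 512, 2048, 6, 37000  # shared source/target BPE vocab

emb = vocab * d_model            # shared input/output embedding matrix
attn = 4 * d_model * d_model     # Q, K, V and output projections of one attention block
ffn = 2 * d_model * d_ff         # the two feed-forward projections
enc_layer = attn + ffn           # self-attention + FFN
dec_layer = 2 * attn + ffn       # self-attention + cross-attention + FFN

total = emb + n_layers * (enc_layer + dec_layer)
print(f"{total / 1e6:.1f}M parameters")  # ~63.0M, close to the paper's ~65M
```

Any from-scratch reimplementation (whichever library is chosen) should land near this count with the base hyperparameters.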
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20482/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20481
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20481/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20481/comments
https://api.github.com/repos/huggingface/transformers/issues/20481/events
https://github.com/huggingface/transformers/issues/20481
1,467,436,907
I_kwDOCUB6oc5Xd09r
20,481
CLIP - Mismatch tokenizer_config.json of CLIPTokenizer.from_pretrained() and huggingface_hub
{ "login": "EthanYishun", "id": 92590248, "node_id": "U_kgDOBYTQqA", "avatar_url": "https://avatars.githubusercontent.com/u/92590248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EthanYishun", "html_url": "https://github.com/EthanYishun", "followers_url": "https://api.github.com/users/EthanYishun/followers", "following_url": "https://api.github.com/users/EthanYishun/following{/other_user}", "gists_url": "https://api.github.com/users/EthanYishun/gists{/gist_id}", "starred_url": "https://api.github.com/users/EthanYishun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EthanYishun/subscriptions", "organizations_url": "https://api.github.com/users/EthanYishun/orgs", "repos_url": "https://api.github.com/users/EthanYishun/repos", "events_url": "https://api.github.com/users/EthanYishun/events{/privacy}", "received_events_url": "https://api.github.com/users/EthanYishun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,673
1,673
NONE
null
### System Info

Model ID: `"openai/clip-vit-base-patch32"`
Transformers version: 4.23.0

In the huggingface hub, the file https://huggingface.co/openai/clip-vit-base-patch32/blob/main/tokenizer_config.json doesn't have the parameter `model_max_length: 77`. On the contrary, if I use the following:

```
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
tokenizer.save_pretrained('./test_tokenizer/')
```

the tokenizer_config.json saved under **test_tokenizer** has the parameter `model_max_length: 77`.

So, when I load the tokenizer from the files downloaded from https://huggingface.co/openai/clip-vit-base-patch32/tree/main, its `tokenizer.model_max_length` is different from that of `CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")`. The former has `model_max_length=1000000000000000019884624838656`; the latter has `model_max_length=77`.

### Who can help?

@patil-suraj, @SaulLu

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

1. Download the files directly from the CLIP huggingface hub (https://huggingface.co/openai/clip-vit-base-patch32/tree/main) and store them under a folder such as './hub_model_clip/'.
2. Run the following code in a notebook:
```
from transformers import CLIPTokenizer
tokenizer_1 = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
tokenizer_2 = CLIPTokenizer.from_pretrained("./hub_model_clip/")
```
3. Check the following two:
```
print(tokenizer_1.model_max_length)  # 77
print(tokenizer_2.model_max_length)  # 1000000000000000019884624838656
```

### Expected behavior

Sync up the `model_max_length` from the different sources of the CLIP tokenizer.
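The huge number in the report is consistent with the sentinel `transformers` uses when no maximum length is configured: `VERY_LARGE_INTEGER = int(1e30)`, whose exact decimal value (after float rounding) is the `1000000000000000019884624838656` seen above. The fallback logic can be sketched as follows (the helper name is ours, for illustration only, not library code):

```python
# Sentinel for "effectively unlimited" input length; int(1e30) reproduces the
# 1000000000000000019884624838656 value from the report (float64 rounding).
VERY_LARGE_INTEGER = int(1e30)

def effective_max_length(tokenizer_config, class_default=None):
    # Prefer an explicit value from tokenizer_config.json, then any
    # class-level default, then the sentinel.
    if "model_max_length" in tokenizer_config:
        return tokenizer_config["model_max_length"]
    if class_default is not None:
        return class_default
    return VERY_LARGE_INTEGER

print(effective_max_length({"model_max_length": 77}))  # 77
print(effective_max_length({}))  # 1000000000000000019884624838656
```

Until the Hub-side `tokenizer_config.json` is synced, passing `model_max_length=77` explicitly to `CLIPTokenizer.from_pretrained("./hub_model_clip/")` should serve as a workaround.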
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20481/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20480
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20480/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20480/comments
https://api.github.com/repos/huggingface/transformers/issues/20480/events
https://github.com/huggingface/transformers/issues/20480
1,467,394,752
I_kwDOCUB6oc5XdqrA
20,480
Unexpected behavior when input ends with multiple newlines
{ "login": "monsieurpooh", "id": 29328114, "node_id": "MDQ6VXNlcjI5MzI4MTE0", "avatar_url": "https://avatars.githubusercontent.com/u/29328114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monsieurpooh", "html_url": "https://github.com/monsieurpooh", "followers_url": "https://api.github.com/users/monsieurpooh/followers", "following_url": "https://api.github.com/users/monsieurpooh/following{/other_user}", "gists_url": "https://api.github.com/users/monsieurpooh/gists{/gist_id}", "starred_url": "https://api.github.com/users/monsieurpooh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monsieurpooh/subscriptions", "organizations_url": "https://api.github.com/users/monsieurpooh/orgs", "repos_url": "https://api.github.com/users/monsieurpooh/repos", "events_url": "https://api.github.com/users/monsieurpooh/events{/privacy}", "received_events_url": "https://api.github.com/users/monsieurpooh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Stop telling the model what it should do: [quote](https://history.aip.org/exhibits/einstein/ae63.htm).\r\n\r\nJoke aside, how do you know what the model should do ? It's a small model, so if it's less performant than expected or than the larger ones is completely normal.", "> Stop telling the model what it should do: [quote](https://history.aip.org/exhibits/einstein/ae63.htm).\r\n> \r\n> Joke aside, how do you know what the model should do ? It's a small model, so if it's less performant than expected or than the larger ones is completely normal.\r\n\r\nPlease take a closer look; it is literally impossible for this discrepancy to be caused by model performance/accuracy. Otherwise I would not have reported this as a bug. Again: If you take away one or more of the \"\\n\\n\" at the end, it completes the expected \"\\n\\n\", followed by the expected sentence. But if you end with \"\\n\\n\" it predicts the next token is yet another \"\\n\". That means at the point of the end of \"\\n\\n\" there were two different token predictions even though the input at that point was exactly the same in both cases.", "You are talking about strings here, the model reasons in tokens.\r\n\r\nSo it's perfectly possible that your sentence ending with `\\n\\n` is chunked differently than without, yielding different tokens, and so different outputs.\r\n\r\nCould you check the different tokenizations ?", "Assuming what you say is true (which seems like the most likely explanation):\r\n\r\n1. Isn't this still considered a bug during tokenization? Shouldn't the same input at each step lead to the same output?\r\n2. Is there a possible workaround, other than making sure certain types of inputs never get passed in?", "> Isn't this still considered a bug during tokenization? 
Shouldn't the same input at each step lead to the same output?\r\n\r\nNot really, all models usually have the basic ASCII chars, so the model is free to generate `t` + `h` + `e` which most likley will be in its vocabulary as `the`. Now this is usually not the case (since the model was usually not trained to output individual letters like here. But it's definitely not a guarantee. Some models actually DO train on such irregular tokenizations, and this is called tokenization `dropout`. Benefits in general seems mitigated (some says it's super important, some that it negatively impacts final performance. I personnally don't have any opinion on this).\r\n\r\n> Is there a possible workaround, other than making sure certain types of inputs never get passed in?\r\n\r\nYou could do that. This is what is done under the hood for GPT-3 for instance, where you have these \"START\" and \"STOP\" sequence which are inserted for you as tokens, which avoids letting the tokenizer doing it on its own. For Bloom, we also had the same issue, where prompt perform better when it doesn't end with a trailing space (so removing trailing spaces from prompts help the perceived quality of free text users). \r\nAs far as I know, there is no \"FIX\" for it entirely. \r\n\r\nIf you could stick to using tokens, things would make more sense maybe, but it depends on the use case and how the model was trained really.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,673
1,673
NONE
null
### System Info

- `transformers` version: 4.15.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.5.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

### Who can help?

@patrickvonplaten, @Narsil

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model_name = "EleutherAI/gpt-neo-125M"
model = GPTNeoForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True, cache_dir='gpt_cache_dir', resume_download=True).half().to("cuda:0")
tokenizer = GPT2Tokenizer.from_pretrained(model_name, low_cpu_mem_usage=True, cache_dir='gpt_cache_dir', resume_download=True)

input_ids = tokenizer("This is a line 1\n\nThis is a line 2\n\nThis is a line 3\n\n", return_tensors="pt").input_ids.cuda()
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.01, max_length=40, min_length=1, repetition_penalty=1.0)
gen_text = "Output: \"" + tokenizer.batch_decode(gen_tokens[:, input_ids.shape[1]:])[0] + "\""
print(gen_text)
```

Actual behavior:
- If the input ends with 1 newline, generating multiple tokens works as expected, but generating just 1 token says the next token should be a newline by itself.
- If the input ends with 2 newlines, generating multiple tokens doesn't work as expected, and printing the next top score reveals the next token is something unexpected, such as another newline or a token beginning with a space.

### Expected behavior

If the prompt ends in \n\n, the generated text shouldn't start with \n.

Duplicate of https://github.com/huggingface/transformers/issues/17860, but it won't let me re-open it.
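As the maintainers suggest in the thread, the discrepancy comes down to tokenization boundaries, not model quality. A toy greedy longest-match tokenizer (an illustration only; GPT-Neo's real byte-level BPE vocabulary is far larger) shows how a trailing `\n\n` collapses into a single token, so a prompt ending in `\n\n` conditions the model differently than the `\n` + `\n` token path it would follow while generating:

```python
# Toy vocabulary: "\n\n" is its own token, as in many GPT-style vocabularies.
VOCAB = {"This", " is", " a", " line", " 3", "\n", "\n\n"}

def greedy_tokenize(text):
    """Longest-match-first segmentation over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(greedy_tokenize("This is a line 3\n"))      # ends with '\n'
print(greedy_tokenize("This is a line 3\n\n"))    # ends with one '\n\n' token, not two '\n'
print(greedy_tokenize("This is a line 3\n\n\n"))  # ends with '\n\n' then '\n'
```

Printing `tokenizer("...").input_ids` for the two real prompts, as proposed in the thread, is the direct way to confirm the same effect with the actual model.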
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20480/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20479
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20479/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20479/comments
https://api.github.com/repos/huggingface/transformers/issues/20479/events
https://github.com/huggingface/transformers/pull/20479
1,467,166,286
PR_kwDOCUB6oc5D11Hw
20,479
add flax whisper implementation
{ "login": "andyehrenberg", "id": 32784181, "node_id": "MDQ6VXNlcjMyNzg0MTgx", "avatar_url": "https://avatars.githubusercontent.com/u/32784181?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andyehrenberg", "html_url": "https://github.com/andyehrenberg", "followers_url": "https://api.github.com/users/andyehrenberg/followers", "following_url": "https://api.github.com/users/andyehrenberg/following{/other_user}", "gists_url": "https://api.github.com/users/andyehrenberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/andyehrenberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyehrenberg/subscriptions", "organizations_url": "https://api.github.com/users/andyehrenberg/orgs", "repos_url": "https://api.github.com/users/andyehrenberg/repos", "events_url": "https://api.github.com/users/andyehrenberg/events{/privacy}", "received_events_url": "https://api.github.com/users/andyehrenberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "@andyehrenberg \r\n\r\nThank you for the PR. However, a pull request should focus on a single objective/goal, rather than changing multiple things at the same time which are not absolutely coupled.\r\n\r\nPlease \r\n - follow the pytorch implementation regarding the `past_key_values`\r\n - revert the changes on the flax generation utils\r\n(You may want to have a backup branch to save these changes for future pull requests.)\r\n\r\nThe goal of this PR is to add Flax implementation of Whisper. For other changes, it's better to open issue tickets, and if we all agree with the proposals, a PR could proceed :-)\r\n\r\nThank you!", "I see a few other instances in this repo where the pytorch implementation computes `past_key_values_length` while the flax implementation uses `position_ids` (BART, OPT, etc) - to me, keeping consistency among the APIs of the flax models is something we should strive for. What do you think @ydshieh @patrickvonplaten ?\r\n\r\nHappy to remove the changes to the generation stuff and open a separate PR for that - will definitely do this to make flax Whisper generation work!", "I wasn't aware of that inconsistency, thank you for pointing out. This is a good question! But I don't think that's a very serious problem so far - the most important thing is the different frameworks produce the same outputs when feeding the same (supported) inputs + the API on the top model levels being consistent.\r\n\r\n(The internal computation could be somehow different - if there is good reason)\r\n\r\nIn any case, this could be discussed in an issue and we can proceed with a PR once decided :-) ", "BTW, there is some issue for triggering CircleCI. The message is\r\n\r\n```bash\r\nCould not find a usable config.yml, you may have revoked the CircleCI OAuth app.\r\nPlease sign out of CircleCI and log back in with your VCS before triggering a new pipeline.\r\n```\r\n\r\nDo you use some IDE to push the commits? 
Could you try to push the commit with a commandline tool or some git GUI tools instead?", "_The documentation is not available anymore as the PR was closed or merged._", "Also cc @sanchit-gandhi ", "Hey! Thanks for opening the follow PR 🤗 \r\n\r\nI don't think I agree with @ydshieh here, adding the `flax_generation_utils` along with whisper totally makes sense as it was done for `pytorch` and `tf`, and is required to add the `generation` tests which are currently missing! \r\nRegarding the `past_key_values`, we don't really strive to match `transformers` with other APIs, rather I think we prefer consistency within our own library, and code clarity. \r\nHowever you can still open an issue and we can discuss whether we should refactor the design of `past_key_values` for our `flax` model! \r\n\r\nWill have a look at the PR 😉 ", "You are right! I am not aware of those generation features are introduced when you added Whisper @ArthurZucker . Sorry about that, @andyehrenberg !\r\n\r\n", "Super excited by this PR! 🚀 Feel free to tag me with questions / review requests as well @andyehrenberg 🤗", "Hey @andyehrenberg! Looks like you found my old PR for implementing scan with Flax `nn.Modules` and copied the logic across https://github.com/huggingface/transformers/pull/18341\r\n\r\nI'm happy to answer @ArthurZucker's questions regarding scan here. In the end, we decided not to pursue with adding scan in Transformers - this is why you haven't seen the PR merged or scan in any of our Flax models. \r\n\r\nThe reason for this is that scan adds **a lot** of complexity to the modelling code. Whilst it does give faster compile times for training, it is actually **slower** for inference. On balance, it's not worth the myriad of extra code for a small speed-up to compile time for training. We prefer readability and ease of understanding over highly optimised code in Transformers. 
Because of this, unfortunately scan is not a good fit.\r\n\r\nNote: since Whisper pads/truncates the audio inputs to 30s, the inputs to Whisper are **always** of fixed dimension. This means that you only ever need 1 compile step! So the compilation time is entirely amortised by the subsequent compiled times during training/inference. For this reason, I advise that you stick to the regular way of implementing unrolled Flax `nn.Modules` for Whisper.\r\n\r\nHappy to answer any questions regarding scan and why we don't include it in our modelling code!\r\n\r\nThe optimum library might be a better place for highly optimised Flax code: https://github.com/huggingface/optimum", "Hey @ydshieh! Is there a way of enabling the Flax CI in this PR? Before merging it'd be awesome to verify that the Flax CI is ✅", "cc @sanchit-gandhi @sgugger for a final review here maybe :-) ", "@andyehrenberg thanks for the changes in the last commit <3 \r\n\r\nGreen light for this PR on my end [generate]", "Mmmm, before merging this PR, there is something wrong going on with the tests: only one of the tests job is actually run (no tests_flax/tests_tf etc...)\r\n\r\nWill investigate later today unless someone beats me to it.", "It looks like running under the wrong CircleCI project (on the PR author one, not on `huggingface/transformers`), and it got\r\n\r\n\r\n> Resource class docker for xlarge is not available for your project, or is not a valid resource class. 
This message will often appear if the pricing plan for this project does not support docker use.\r\n\r\n\r\nSee https://app.circleci.com/pipelines/github/andyehrenberg/transformers?branch=flax_whisper", "@andyehrenberg \r\n\r\nCould you follow the instruction mentioned [here](https://support.circleci.com/hc/en-us/articles/360008097173-Troubleshooting-why-pull-requests-are-not-triggering-jobs-on-my-organization-), and see if it fixes the CI issue?\r\n\r\n> If you're following the fork instead of the upstream repo\r\nA user who submits a pull request to your repository from a fork, but no pipeline is triggered with the pull request. This can happen when the user is following the project fork on their personal account rather than the project itself on CircleCI.\r\n\r\n> This will cause the jobs to trigger under the user's personal account. If the user is following a fork of the repository on CircleCI, we will only build on that fork and not the parent, so the parent’s PR will not get status updates. \r\n\r\n> In these cases, the user unfollows their fork of the project on CircleCI. This will trigger their jobs to run under the organization when they submit pull requests. Those users can optionally follow the source project if they wish to see the pipelines.", "\r\n\r\n\r\n> Mmmm, before merging this PR, there is something wrong going on with the tests: only one of the tests job is actually run (no tests_flax/tests_tf etc...)\r\n> \r\n> Will investigate later today unless someone beats me to it.\r\n\r\n@sgugger Fixed, and all tests are passing now (had to override some tests due to `input_features` being different from its usual shape in the tests)", "Thanks @andyehrenberg !\r\n\r\n@sanchit-gandhi Can you have one final look?", "@sanchit-gandhi - How can I rerun the checks without further commits? 
The error looks like an account limit overshoot and doesn't seem to do with the two newer commits.", "@andyehrenberg We can re-run the failed tests on the job run page\r\n<img width=\"1063\" alt=\"Screenshot 2023-01-16 202103\" src=\"https://user-images.githubusercontent.com/2521628/212752381-4bce24af-697c-4c4f-ab30-457b2b7a6b4a.png\">\r\n\r\nBut I think only HF members can do that - I will launch it.", "@sanchit-gandhi I think it's ready for another look by you! The torch tests it's failing current seem unrelated to the PR, so rerunning CI may give all passes", "Also sorry! We just modified Whisper quit a bit 😅 ", "> Also sorry! We just modified Whisper quit a bit 😅\r\n\r\n@ArthurZucker - Doesn't actually look too bad to catch up with those changes! Can do that soon-ish. I already have a jax timestamp processor that's compilable.", "Oh no - sorry you have to iterate again here @andyehrenberg! Feel free to ping me with any questions / discussions - more than happy to help with the final sprint of the integration! Otherwise super excited to review a final time before merge! 🚀", "@sanchit-gandhi - I think this is ready for another look - the recent commits (I think) get us to feature parity with the torch version.", "@sanchit-gandhi Bump", "@sanchit-gandhi @ArthurZucker - Addressed Arthur's comments and cleaned up the timestamp logits processor a bit. Hopefully we're close to getting this merged!", "> Very nice @andyehrenberg! Thanks for iterating here - reviewed the new changes and the PR is looking super clean. Last request from me is if we can avoid defining the `if_true()` functions if possible and just add the code explicitly! Good for merge otherwise :)\r\n\r\nFor sure, made those changes :)", "Is there any instructions to open the google cloud TPU port, admin?\r\n" ]
1,669
1,707
1,676
CONTRIBUTOR
null
Adds Flax whisper implementations, and adjusts flax generation utils to support it. @ydshieh @ArthurZucker See discussion in #19512
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20479/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 7, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20479/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20479", "html_url": "https://github.com/huggingface/transformers/pull/20479", "diff_url": "https://github.com/huggingface/transformers/pull/20479.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20479.patch", "merged_at": 1676881060000 }
https://api.github.com/repos/huggingface/transformers/issues/20478
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20478/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20478/comments
https://api.github.com/repos/huggingface/transformers/issues/20478/events
https://github.com/huggingface/transformers/pull/20478
1,467,090,159
PR_kwDOCUB6oc5D1kbb
20,478
Replace assert statements with raise exceptions
{ "login": "miyu386", "id": 60191117, "node_id": "MDQ6VXNlcjYwMTkxMTE3", "avatar_url": "https://avatars.githubusercontent.com/u/60191117?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyu386", "html_url": "https://github.com/miyu386", "followers_url": "https://api.github.com/users/miyu386/followers", "following_url": "https://api.github.com/users/miyu386/following{/other_user}", "gists_url": "https://api.github.com/users/miyu386/gists{/gist_id}", "starred_url": "https://api.github.com/users/miyu386/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyu386/subscriptions", "organizations_url": "https://api.github.com/users/miyu386/orgs", "repos_url": "https://api.github.com/users/miyu386/repos", "events_url": "https://api.github.com/users/miyu386/events{/privacy}", "received_events_url": "https://api.github.com/users/miyu386/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Appreciate the feedback! I've addressed and made the necessary changes." ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Fixes an instance of https://github.com/huggingface/transformers/issues/12789. Replaces 4 assert statements with ValueError exception in `src/transformers/data/metrics/squad_metrics.py` Co-author: @mollerup23 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20478/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20478", "html_url": "https://github.com/huggingface/transformers/pull/20478", "diff_url": "https://github.com/huggingface/transformers/pull/20478.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20478.patch", "merged_at": 1669739649000 }
https://api.github.com/repos/huggingface/transformers/issues/20477
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20477/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20477/comments
https://api.github.com/repos/huggingface/transformers/issues/20477/events
https://github.com/huggingface/transformers/pull/20477
1,466,990,931
PR_kwDOCUB6oc5D1Ol8
20,477
Fix init import_structure sorting
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? The custom script we have that sorts the imports in our inits was broken since a while ago. This PR fixes it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20477/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20477", "html_url": "https://github.com/huggingface/transformers/pull/20477", "diff_url": "https://github.com/huggingface/transformers/pull/20477.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20477.patch", "merged_at": 1669733170000 }
https://api.github.com/repos/huggingface/transformers/issues/20476
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20476/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20476/comments
https://api.github.com/repos/huggingface/transformers/issues/20476/events
https://github.com/huggingface/transformers/issues/20476
1,466,749,638
I_kwDOCUB6oc5XbNLG
20,476
Trainer Eval loop fails to handle COCO formatted data
{ "login": "awilsonTorch", "id": 88393795, "node_id": "MDQ6VXNlcjg4MzkzNzk1", "avatar_url": "https://avatars.githubusercontent.com/u/88393795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awilsonTorch", "html_url": "https://github.com/awilsonTorch", "followers_url": "https://api.github.com/users/awilsonTorch/followers", "following_url": "https://api.github.com/users/awilsonTorch/following{/other_user}", "gists_url": "https://api.github.com/users/awilsonTorch/gists{/gist_id}", "starred_url": "https://api.github.com/users/awilsonTorch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awilsonTorch/subscriptions", "organizations_url": "https://api.github.com/users/awilsonTorch/orgs", "repos_url": "https://api.github.com/users/awilsonTorch/repos", "events_url": "https://api.github.com/users/awilsonTorch/events{/privacy}", "received_events_url": "https://api.github.com/users/awilsonTorch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This has been fixed by #19455, you should upgrade to the latest version of Transformers to have the fix." ]
1,669
1,670
1,669
NONE
null
### System Info - `transformers` version: 4.23.1 - Platform: Linux-4.14.294-220.533.amzn2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: YES - Using distributed or parallel set-up in script?: NO ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Download and format (into Dataset object) a COCO vision dataset with the following features: ``` features = Features( { "pixel_mask": Sequence(Sequence(Sequence(Value(dtype="float32")))), 'labels': [{ 'boxes': Sequence(Sequence(Value(dtype="float32"))), 'class_labels': Sequence(ClassLabel(names=label_list)), 'image_id': Sequence(Value(dtype="int64")), 'area': Sequence(Value(dtype="float32")), 'iscrowd': Sequence(Value(dtype="int64")), 'orig_size': Sequence(Value(dtype="int64")), 'size': Sequence(Value(dtype="int64")) }], 'carrier': Value(dtype='string'), 'pixel_values': Array3D(dtype="float32", shape=(3, 1049, 800)) } ) ``` 3. Load the DETR model ``` def model_init(): return DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101-dc5', id2label=id2label, label2id=label2id, ignore_mismatched_sizes=True) ``` 4. Create custom collator function ``` def collate_fn(batch): pixel_values = torch.cat([item["pixel_values"].unsqueeze(dim=0) for item in batch]) encoding = feature_extractor.pad_and_create_pixel_mask( pixel_values, return_tensors="pt" ) labels = [item["labels"][0] for item in batch] batch = {} batch['pixel_values'] = pixel_values batch["pixel_mask"] = encoding["pixel_mask"] batch["labels"] = labels return batch ``` 5. Instantiate Trainer ``` training_args = transformers.TrainingArguments( output_dir=output_dir, logging_dir=logging_dir, max_steps=1000, per_device_train_batch_size=2, per_device_eval_batch_size=2, learning_rate=1e-5, evaluation_strategy="steps", eval_steps=100, save_strategy='steps', save_steps=100, report_to="tensorboard", logging_strategy='steps', logging_steps=50, seed=42 ) trainer = Trainer( model_init=model_init, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=collate_fn, compute_metrics=compute_metrics # callbacks=[tensorboard_callback] ) ``` 6. Call trainer.prediction_step `loss, logits, labels = trainer.prediction_step(model, example, False, ignore_keys=True)` 7. Receive error message: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In [86], line 1 ----> 1 loss, logits, labels = trainer.prediction_step(model, example, False, ignore_keys=True) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:3166, in Trainer.prediction_step(self, model, inputs, prediction_loss_only, ignore_keys) 3164 # labels may be popped when computing the loss (label smoothing for instance) so we grab them first. 3165 if has_labels: -> 3166 labels = nested_detach(tuple(inputs.get(name) for name in self.label_names)) 3167 if len(labels) == 1: 3168 labels = labels[0] File /opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py:158, in nested_detach(tensors) 156 "Detach `tensors` (even if it's a nested list/tuple of tensors)." 157 if isinstance(tensors, (list, tuple)): --> 158 return type(tensors)(nested_detach(t) for t in tensors) 159 return tensors.detach() File /opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py:158, in <genexpr>(.0) 156 "Detach `tensors` (even if it's a nested list/tuple of tensors)." 157 if isinstance(tensors, (list, tuple)): --> 158 return type(tensors)(nested_detach(t) for t in tensors) 159 return tensors.detach() File /opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py:159, in nested_detach(tensors) 157 if isinstance(tensors, (list, tuple)): 158 return type(tensors)(nested_detach(t) for t in tensors) --> 159 return tensors.detach() AttributeError: 'dict' object has no attribute 'detach' ### Expected behavior The function should return the loss, logits, labels as expected.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20476/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20475
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20475/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20475/comments
https://api.github.com/repos/huggingface/transformers/issues/20475/events
https://github.com/huggingface/transformers/pull/20475
1,466,743,284
PR_kwDOCUB6oc5D0dcC
20,475
Fix torch meshgrid warnings
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "From a quick search in the PyTorch doc, this argument is only accepted starting in PyTorch 1.10, so this PR will break the corresponding models for older versions. You should add a `meshgrid` function the pytorch utils that passes the argument or not depending on the PyTorch version, then use this util in the modeling code.", "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Thank you, hopefully fixed!\r\n\r\nedit: will fix the CI tomorrow", "@sgugger the CI is good at last!", "Arf my comment from yesterday never went through :sweat_smile: " ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? This PR fixes unwanted warnings due in `torch.meshgrid`: ``` /home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2894.) >>> ys = torch.linspace(-5, 5, steps=100) ``` Passing `indexing="ij"` is necessary to keep the current behavior, see https://pytorch.org/docs/stable/generated/torch.meshgrid.html and https://github.com/pytorch/pytorch/issues/50276 ```python import torch x = torch.tensor([1, 2, 3]) y = torch.tensor([4, 5, 6]) grid_x, grid_y = torch.meshgrid(x, y, indexing="ij") # vs indexing="xy" print(grid_x) print(grid_y) ``` ## Before submitting - [x] This PR fixes a typo ## Who can review? any core maintainer to approve
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20475/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20475", "html_url": "https://github.com/huggingface/transformers/pull/20475", "diff_url": "https://github.com/huggingface/transformers/pull/20475.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20475.patch", "merged_at": 1669729103000 }
https://api.github.com/repos/huggingface/transformers/issues/20474
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20474/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20474/comments
https://api.github.com/repos/huggingface/transformers/issues/20474/events
https://github.com/huggingface/transformers/pull/20474
1,466,741,991
PR_kwDOCUB6oc5D0dKq
20,474
Extract warnings from CI artifacts
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This is the current list of `DeprecationWarning`. I could NOT find any thing from torch/TF/accelerate. Hope I don't miss anything here.\r\n\r\n\r\n```bash\r\n[\r\n \"/opt/conda/lib/python3.8/site-packages/sklearn/utils/multiclass.py:14: DeprecationWarning: Please use `spmatrix` from the `scipy.sparse` namespace, the `scipy.sparse.base` namespace is deprecated.\\nfrom scipy.sparse.base import spmatrix\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:1079: DeprecationWarning: invalid escape sequence \\\\s\\ncontent = re.sub(\\\"<!--\\\\s*Copyright (\\\\d+)\\\\s\\\", f\\\"<!--Copyright {CURRENT_YEAR} \\\", content)\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:1105: DeprecationWarning: invalid escape sequence \\\\s\\nelif re.search(\\\"^#\\\\s+\\\\S+\\\", block) is not None:\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:1117: DeprecationWarning: invalid escape sequence \\\\s\\nblock_class = re.search(\\\"^#+\\\\s+(\\\\S.*)$\\\", block_title).groups()[0]\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:126: DeprecationWarning: invalid escape sequence \\\\s\\nsearch = re.search(\\\"^(\\\\s*)(?:\\\\S|$)\\\", line)\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:427: DeprecationWarning: invalid escape sequence \\\\d\\ncontent = re.sub(\\\"# Copyright (\\\\d+)\\\\s\\\", f\\\"# Copyright {CURRENT_YEAR} \\\", content)\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:476: DeprecationWarning: invalid escape sequence \\\\s\\nhas_copied_from = re.search(\\\"^#\\\\s+Copied from\\\", obj, flags=re.MULTILINE) is not None\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:568: DeprecationWarning: invalid escape sequence \\\\s\\n_re_checkpoint_for_doc = re.compile(\\\"^_CHECKPOINT_FOR_DOC\\\\s+=\\\\s+(\\\\S*)\\\\s*$\\\", 
flags=re.MULTILINE)\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:811: DeprecationWarning: invalid escape sequence \\\\s\\nre.search('^\\\\s*\\\"(tokenization|processing|feature_extraction)', lines[idx]) is None\",\r\n \"/transformers/src/transformers/commands/add_new_model_like.py:812: DeprecationWarning: invalid escape sequence \\\\s\\nand re.search(\\\"^\\\\s*from .(tokenization|processing|feature_extraction)\\\", lines[idx]) is None\",\r\n \"/transformers/src/transformers/models/deformable_detr/modeling_deformable_detr.py:1796: DeprecationWarning: invalid escape sequence \\\\.\\n_keys_to_ignore_on_load_missing = [\\\"bbox_embed\\\\.[1-9]\\\\d*\\\", \\\"class_embed\\\\.[1-9]\\\\d*\\\"]\",\r\n \"/transformers/src/transformers/models/deit/image_processing_deit.py:117: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.\\nresample: PILImageResampling = PIL.Image.BICUBIC,\",\r\n \"/transformers/src/transformers/models/deit/image_processing_deit.py:86: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use Resampling.BICUBIC instead.\\nresample: PILImageResampling = PIL.Image.BICUBIC,\",\r\n \"/transformers/src/transformers/models/glpn/modeling_glpn.py:637: DeprecationWarning: invalid escape sequence \\\\s\\n\\\"\\\"\\\"\",\r\n \"/transformers/src/transformers/models/jukebox/tokenization_jukebox.py:152: DeprecationWarning: invalid escape sequence \\\\-\\noov = \\\"[^A-Za-z0-9.,:;!?\\\\-'\\\\\\\"()\\\\[\\\\] \\\\t\\\\n]+\\\"\",\r\n \"/transformers/src/transformers/models/jukebox/tokenization_jukebox.py:155: DeprecationWarning: invalid escape sequence \\\\-\\noov = oov.replace(\\\"\\\\-'\\\", \\\"\\\\-+'\\\")\",\r\n \"/transformers/src/transformers/models/jukebox/tokenization_jukebox.py:234: DeprecationWarning: invalid escape sequence \\\\-\\nself.out_of_vocab = regex.compile(\\\"[^A-Za-z0-9.,:;!?\\\\-'\\\\\\\"()\\\\[\\\\] \\\\t\\\\n]+\\\")\",\r\n \"/transformers/src/transformers/models/jukebox/tokenization_jukebox.py:243: DeprecationWarning: invalid escape sequence \\\\-\\nself.out_of_vocab = regex.compile(\\\"[^A-Za-z0-9.,:;!?\\\\-+'\\\\\\\"()\\\\[\\\\] \\\\t\\\\n]+\\\")\",\r\n \"/transformers/src/transformers/models/maskformer/feature_extraction_maskformer.py:313: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.\\ntarget = self.resize(target, size=size, resample=Image.NEAREST)\",\r\n \"/transformers/src/transformers/models/maskformer/modeling_maskformer.py:2066: DeprecationWarning: invalid escape sequence \\\\e\\n\\\"\\\"\\\"\",\r\n \"/transformers/src/transformers/models/mobilevit/image_processing_mobilevit.py:141: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.\\nresample: PILImageResampling = PIL.Image.BILINEAR,\",\r\n \"/transformers/src/transformers/models/perceiver/image_processing_perceiver.py:156: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use Resampling.BICUBIC instead.\\nresample: PILImageResampling = PIL.Image.BICUBIC,\",\r\n \"/transformers/src/transformers/models/segformer/image_processing_segformer.py:304: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.\\nresample=PIL.Image.NEAREST,\",\r\n \"/transformers/src/transformers/models/segformer/image_processing_segformer.py:441: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.\\nresample=PIL.Image.NEAREST,\",\r\n \"/transformers/src/transformers/models/t5/tokenization_t5.py:217: DeprecationWarning: invalid escape sequence \\\\d\\nset(filter(lambda x: bool(re.search(\\\"<extra_id_\\\\d+>\\\", x)) is not None, self.additional_special_tokens))\",\r\n \"/transformers/src/transformers/models/t5/tokenization_t5_fast.py:240: DeprecationWarning: invalid escape sequence \\\\d\\nset(filter(lambda x: bool(re.search(\\\"<extra_id_\\\\d+>\\\", x)) is not None, self.additional_special_tokens))\",\r\n \"/transformers/src/transformers/models/transfo_xl/modeling_transfo_xl.py:1018: DeprecationWarning: The output of TransfoXL will be updated in v5 to support a single loss as first argument. 
In orderto use that updated output, please specify `trainer_compatible=True` as your configuration attribute.\\nwarnings.warn(\",\r\n \"/transformers/tests/models/maskformer/test_feature_extraction_maskformer.py:407: DeprecationWarning: Please use assertEqual instead.\\nself.assertEquals(inputs[\\\"mask_labels\\\"][0].sum().item(), 41527.0)\",\r\n \"/transformers/tests/models/maskformer/test_feature_extraction_maskformer.py:408: DeprecationWarning: Please use assertEqual instead.\\nself.assertEquals(inputs[\\\"mask_labels\\\"][1].sum().item(), 26259.0)\",\r\n \"/transformers/tests/models/maskformer/test_feature_extraction_maskformer.py:449: DeprecationWarning: Please use assertEqual instead.\\nself.assertEquals(inputs[\\\"mask_labels\\\"][0].sum().item(), 170200.0)\",\r\n \"/transformers/tests/models/maskformer/test_feature_extraction_maskformer.py:450: DeprecationWarning: Please use assertEqual instead.\\nself.assertEquals(inputs[\\\"mask_labels\\\"][1].sum().item(), 257036.0)\",\r\n \"/transformers/tests/models/maskformer/test_feature_extraction_maskformer.py:514: DeprecationWarning: Please use assertEqual instead.\\nself.assertEquals(inputs[\\\"mask_labels\\\"][0].sum().item(), 315193.0)\",\r\n \"/transformers/tests/models/maskformer/test_feature_extraction_maskformer.py:515: DeprecationWarning: Please use assertEqual instead.\\nself.assertEquals(inputs[\\\"mask_labels\\\"][1].sum().item(), 350747.0)\",\r\n \"/transformers/tests/models/realm/test_modeling_realm.py:394: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.\\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\\ndtype=np.object,\",\r\n \"/transformers/tests/models/realm/test_retrieval_realm.py:103: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. 
To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.\\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\\ndtype=np.object,\",\r\n \"/transformers/tests/models/realm/test_retrieval_realm.py:119: DeprecationWarning: `np.long` is a deprecated alias for `np.compat.long`. To silence this warning, use `np.compat.long` by itself. In the likely event your code does not need to work on Python 2 you can use the builtin `int` for which `np.compat.long` is itself an alias. Doing this will not modify any behaviour and is safe. When replacing `np.long`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.\\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\\nretrieved_block_ids = np.array([0, 3], dtype=np.long)\",\r\n \"/transformers/tests/models/realm/test_retrieval_realm.py:154: DeprecationWarning: `np.long` is a deprecated alias for `np.compat.long`. To silence this warning, use `np.compat.long` by itself. In the likely event your code does not need to work on Python 2 you can use the builtin `int` for which `np.compat.long` is itself an alias. Doing this will not modify any behaviour and is safe. When replacing `np.long`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. 
If you wish to review your current use, check the release note link for additional information.\\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\\nretrieved_block_ids = np.array([0, 3, 5], dtype=np.long)\",\r\n \"/transformers/tests/models/t5/test_tokenization_t5.py:387: DeprecationWarning: Please use assertEqual instead.\\nself.assertEquals(len(sentinel_tokens), 10)\",\r\n \"/transformers/tests/models/t5/test_tokenization_t5.py:389: DeprecationWarning: invalid escape sequence \\\\d\\nself.assertTrue([re.search(\\\"<extra_id_\\\\d+>\\\", token) is not None for token in sentinel_tokens])\",\r\n \"/transformers/tests/models/t5/test_tokenization_t5.py:398: DeprecationWarning: Please use assertEqual instead.\\nself.assertEquals(len(sentinel_tokens), 10)\",\r\n \"/transformers/tests/models/t5/test_tokenization_t5.py:400: DeprecationWarning: invalid escape sequence \\\\d\\nself.assertTrue([re.search(\\\"<extra_id_\\\\d+>\\\", token) is not None for token in sentinel_tokens])\",\r\n \"/transformers/tests/sagemaker/conftest.py:36: DeprecationWarning: invalid escape sequence \\\\D\\n{\\\"Name\\\": \\\"train_runtime\\\", \\\"Regex\\\": \\\"train_runtime.*=\\\\D*(.*?)$\\\"},\",\r\n \"/transformers/tests/sagemaker/conftest.py:37: DeprecationWarning: invalid escape sequence \\\\D\\n{\\\"Name\\\": \\\"eval_accuracy\\\", \\\"Regex\\\": \\\"eval_accuracy.*=\\\\D*(.*?)$\\\"},\",\r\n \"/transformers/tests/sagemaker/conftest.py:38: DeprecationWarning: invalid escape sequence \\\\D\\n{\\\"Name\\\": \\\"eval_loss\\\", \\\"Regex\\\": \\\"eval_loss.*=\\\\D*(.*?)$\\\"},\",\r\n \"/transformers/tests/sagemaker/conftest.py:42: DeprecationWarning: invalid escape sequence \\\\D\\n{\\\"Name\\\": \\\"train_runtime\\\", \\\"Regex\\\": \\\"train_runtime.*=\\\\D*(.*?)$\\\"},\",\r\n \"/transformers/tests/sagemaker/conftest.py:43: DeprecationWarning: invalid escape sequence \\\\D\\n{\\\"Name\\\": \\\"eval_accuracy\\\", 
\\\"Regex\\\": \\\"loss.*=\\\\D*(.*?)]?$\\\"},\",\r\n \"/transformers/tests/sagemaker/conftest.py:44: DeprecationWarning: invalid escape sequence \\\\D\\n{\\\"Name\\\": \\\"eval_loss\\\", \\\"Regex\\\": \\\"sparse_categorical_accuracy.*=\\\\D*(.*?)]?$\\\"},\",\r\n \"/transformers/tests/utils/test_add_new_model_like.py:156: DeprecationWarning: invalid escape sequence \\\\s\\nself.assertEqual(add_content_to_text(test_text, line, add_before=re.compile('^\\\\s*\\\"bert\\\":')), expected)\",\r\n \"/transformers/tests/utils/test_add_new_model_like.py:163: DeprecationWarning: invalid escape sequence \\\\s\\nself.assertEqual(add_content_to_text(test_text, line, add_after=re.compile('^\\\\s*\\\"gpt\\\":')), expected)\",\r\n \"/transformers/tests/utils/test_add_new_model_like.py:196: DeprecationWarning: invalid escape sequence \\\\s\\nadd_content_to_file(file_name, line, add_before=re.compile('^\\\\s*\\\"bert\\\":'))\",\r\n \"/transformers/tests/utils/test_add_new_model_like.py:212: DeprecationWarning: invalid escape sequence \\\\s\\nadd_content_to_file(file_name, line, add_after=re.compile('^\\\\s*\\\"gpt\\\":'))\",\r\n \"/usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/augmentation_impl.py:113: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.\\ndef __init__(self, shape, interp=Image.BILINEAR):\",\r\n \"/usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/augmentation_impl.py:140: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.\\nself, short_edge_length, max_size=sys.maxsize, sample_style=\\\"range\\\", interp=Image.BILINEAR\",\r\n \"/usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/augmentation_impl.py:214: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). 
Use Resampling.BILINEAR instead.\\ninterp: int = Image.BILINEAR,\",\r\n \"/usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/augmentation_impl.py:635: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.\\ndef __init__(self, shape_list, interp=Image.BILINEAR):\",\r\n \"/usr/local/lib/python3.8/dist-packages/detectron2/data/transforms/transform.py:46: DeprecationWarning: LINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.\\ndef __init__(self, src_rect, output_size, interp=Image.LINEAR, fill=0):\",\r\n \"/usr/local/lib/python3.8/dist-packages/tf2onnx/tf_utils.py:58: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here.\\nDeprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations\\nnp_data = np_data.astype(np.str).astype(object)\",\r\n \"/workspace/transformers/src/transformers/models/t5/tokenization_t5.py:217: DeprecationWarning: invalid escape sequence \\\\d\\nset(filter(lambda x: bool(re.search(\\\"<extra_id_\\\\d+>\\\", x)) is not None, self.additional_special_tokens))\",\r\n \"/workspace/transformers/src/transformers/models/t5/tokenization_t5_fast.py:240: DeprecationWarning: invalid escape sequence \\\\d\\nset(filter(lambda x: bool(re.search(\\\"<extra_id_\\\\d+>\\\", x)) is not None, self.additional_special_tokens))\"\r\n]\r\n```\r\n\r\n" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? The default behavior is to extract the `DeprecationWarning`, but it could be changed by specifying `--targets`. Currently, I don't use this script in our CI workflow file. If it is desired, we can add an extra job in our CI workflow to generate this report at the end, but this could be done in a separate PR if you allow me :-)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20474/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20474", "html_url": "https://github.com/huggingface/transformers/pull/20474", "diff_url": "https://github.com/huggingface/transformers/pull/20474.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20474.patch", "merged_at": 1669666474000 }
https://api.github.com/repos/huggingface/transformers/issues/20473
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20473/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20473/comments
https://api.github.com/repos/huggingface/transformers/issues/20473/events
https://github.com/huggingface/transformers/pull/20473
1,466,724,857
PR_kwDOCUB6oc5D0Zhd
20,473
Fix Swin ONNX export warnings
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Note that this will only work in recent versions of PyTorch. As was done for the [int div](https://github.com/huggingface/transformers/blob/321ef388fe041f630e65abc26a3be8580d7e858b/src/transformers/pytorch_utils.py#LL35C3-L35C3), you should create a util function in the PyTorch utils that works across PyTorch versions", "Good to know, thanks a lot! I think there are other models having the issue of using `torch.div` then, I'll fix as well. I'm a bit skeptic about readability (including this PR), trying to support tracing overall makes the code less readable I feel.", "Just realized that the warning above is not shown anymore in PyTorch 1.13. Seem to be https://github.com/pytorch/pytorch/pull/78411\r\n\r\nRefer to the release notes: `Updated torch.floor_divide to perform floor division` https://github.com/pytorch/pytorch/releases\r\n\r\nThis is fine since it influences only negative numbers. I'll close this PR." ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? This PR fixes uninformative warnings in the ONNX export for Swin as reported in https://github.com/huggingface/transformers/issues/19780 , namely ``` UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). ``` Since `torch.jit.trace` assumes expressions like `tensor.size(0)`, `tensor.size()[1]`, `tensor.shape[2]` are tensors in tracing mode ([reference](https://ppwwyyxx.com/blog/2022/TorchScript-Tracing-vs-Scripting/)), the warning is raised. ## Before submitting - [x] This PR fixes a typo ## Who can review? @lewtun
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20473/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20473", "html_url": "https://github.com/huggingface/transformers/pull/20473", "diff_url": "https://github.com/huggingface/transformers/pull/20473.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20473.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20472
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20472/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20472/comments
https://api.github.com/repos/huggingface/transformers/issues/20472/events
https://github.com/huggingface/transformers/pull/20472
1,466,621,975
PR_kwDOCUB6oc5D0DYx
20,472
Added TFBartForSequenceClassification
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger I have tested locally, not sure why tests are failing", "@IMvision12 the common tests do not pass for the model you have added, you can run them with `pytest tests/models/bart/test_modeling_tf_bart.py`.\r\n\r\nAlso the equivalence test PyTorch/TensorFlow does not pas either.", "@sgugger I've looked at some SequenceClassification models and the tests that have been written but none of them have TFSeq2SeqSequenceClassification or tests for it, so I'm not sure how to add a test for that. Need little help", "@ydshieh Still the tests are failing", "This issue has been solved in this PR here #20570 so i am closing this PR\r\n " ]
1,669
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? Fixes: #19653
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20472/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20472/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20472", "html_url": "https://github.com/huggingface/transformers/pull/20472", "diff_url": "https://github.com/huggingface/transformers/pull/20472.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20472.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20471
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20471/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20471/comments
https://api.github.com/repos/huggingface/transformers/issues/20471/events
https://github.com/huggingface/transformers/issues/20471
1,466,454,629
I_kwDOCUB6oc5XaFJl
20,471
ImportError: cannot import name 'CommitOperationAdd' from 'huggingface_hub'
{ "login": "wccccp", "id": 55964850, "node_id": "MDQ6VXNlcjU1OTY0ODUw", "avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wccccp", "html_url": "https://github.com/wccccp", "followers_url": "https://api.github.com/users/wccccp/followers", "following_url": "https://api.github.com/users/wccccp/following{/other_user}", "gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wccccp/subscriptions", "organizations_url": "https://api.github.com/users/wccccp/orgs", "repos_url": "https://api.github.com/users/wccccp/repos", "events_url": "https://api.github.com/users/wccccp/events{/privacy}", "received_events_url": "https://api.github.com/users/wccccp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There seems to be an installation problem with `huggingface_hub`. You should try to uninstall and re-install it in your environment.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Just ran into this issue. I looked up the commit adding CommitOperationAdd, it's https://github.com/huggingface/huggingface_hub/commit/b6145e74b73bbd4a74dc00562e2b3f5e331066f4 .\r\nWhich means you need `huggingface_hub` >= 0.9.0" ]
1,669
1,704
1,672
NONE
null
### System Info PS C:\Users\46213> transformers-cli env Traceback (most recent call last): File "C:\Users\46213\anaconda3\lib\runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\46213\anaconda3\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "C:\Users\46213\anaconda3\Scripts\transformers-cli.exe\__main__.py", line 4, in <module> File "C:\Users\46213\anaconda3\lib\site-packages\transformers\__init__.py", line 30, in <module> from . import dependency_versions_check File "C:\Users\46213\anaconda3\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module> from .utils.versions import require_version, require_version_core File "C:\Users\46213\anaconda3\lib\site-packages\transformers\utils\__init__.py", line 48, in <module> from .hub import ( File "C:\Users\46213\anaconda3\lib\site-packages\transformers\utils\hub.py", line 32, in <module> from huggingface_hub import ( ImportError: cannot import name 'CommitOperationAdd' from 'huggingface_hub' (C:\Users\46213\anaconda3\lib\site-packages\huggingface_hub\__init__.py) ### Who can help? @LysandreJik @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction PS C:\Users\46213> pip show huggingface_hub Name: huggingface-hub Version: 0.10.1 Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub Home-page: https://github.com/huggingface/huggingface_hub Author: Hugging Face, Inc. Author-email: julien@huggingface.co License: Apache Location: c:\users\46213\anaconda3\lib\site-packages Requires: requests, typing-extensions, tqdm, pyyaml, packaging, filelock Required-by: transformers, ltp, evaluate, datasets ### Expected behavior my transformer don‘t run ,please help me
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20471/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20470
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20470/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20470/comments
https://api.github.com/repos/huggingface/transformers/issues/20470/events
https://github.com/huggingface/transformers/issues/20470
1,466,358,825
I_kwDOCUB6oc5XZtwp
20,470
Standardize expected input shapes for audio models
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's not possible to standardize this across all models, since some models naturally work on raw audio while others use various types of spectrograms.\r\n\r\nHowever, I feel that we should make the FeatureExtractors more flexible in the type of inputs they accept. \r\n\r\nFor example, right now, the Wav2Vec2FeatureExtractor expects the input to be mono and 16 kHz. It would be nicer if you could pass in a tensor of shape `(batch_size, num_channels, num_samples)` and an arbitrary sampling rate. \r\n\r\nIf the model requires mono input, the FeatureExtractor can automatically downmix stereo to mono. If the sampling rate does not match what the model expects, the FeatureExtractor can automatically resample the data. It's more convenient to do this in the FeatureExtractor than having the user do this themselves.\r\n\r\nThis is analogous to what the ImageProcessors do in vision models: if the image is not the expected size, it will be resized (= resampled). If the number of color channels is wrong, the ImageProcessor fixes this. \r\n\r\nThese operations are optional, so if the user already has a pipeline where they put the data in the correct format, they can choose to skip the resampling / downmixing stages.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
COLLABORATOR
null
### Feature request Hi, it seems some audio models expect `(batch_size, feature_size, n_frames)` (e.g. whisper) while others expect `(batch_size, sequence_length)` (e.g. wavlm, wav2vec2). Could this be standardized? It seems like the naming follows `input_features` vs `input_values` (but wav2vec2's feature extractor returns input_values!) It seems to be the difference between models taking raw audio as an input vs a MEL-spectrogram / STFT. ### Motivation I ask because I am wondering if for models handling stereo I should pass inputs as `(batch_size, 2, n_frames)`. And if so, why not be able to pass `(batch_size, 1, n_frames)` for mono? Is not raw audio a feature as well? Related https://github.com/huggingface/transformers/issues/16564 Preliminary internal discussion at https://huggingface.slack.com/archives/C02G13FEMDH/p1669637602417809 ### Your contribution /
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20470/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20469
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20469/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20469/comments
https://api.github.com/repos/huggingface/transformers/issues/20469/events
https://github.com/huggingface/transformers/pull/20469
1,466,311,523
PR_kwDOCUB6oc5Dy_pI
20,469
fix both failing RoCBert tests
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "The two tests no pass locally (which was not the case before) ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? Fixes two failing tests: - the tokenization test did not take into account that the dummy tokenizer's `pad_token_id = 2` and `bos_token_id = 1`. The values used were `102, 101`. - one of the modeling tests used `assertEqual` on tensors and not lists, which is ambiguous.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20469/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20469", "html_url": "https://github.com/huggingface/transformers/pull/20469", "diff_url": "https://github.com/huggingface/transformers/pull/20469.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20469.patch", "merged_at": 1669651737000 }
https://api.github.com/repos/huggingface/transformers/issues/20468
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20468/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20468/comments
https://api.github.com/repos/huggingface/transformers/issues/20468/events
https://github.com/huggingface/transformers/pull/20468
1,466,109,074
PR_kwDOCUB6oc5DyTrT
20,468
Fix doctests for audio models
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Merge as the failing (TF) test is irrelevant to this PR." ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? The condition was wrong (`[]` should be `()`) and failed some doctests as they get wrong classes ```python if ["SequenceClassification" in model_class or "AudioClassification" in model_class] ``` should be ```python if ("SequenceClassification" in model_class or "AudioClassification" in model_class) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20468/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20468", "html_url": "https://github.com/huggingface/transformers/pull/20468", "diff_url": "https://github.com/huggingface/transformers/pull/20468.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20468.patch", "merged_at": 1669630414000 }
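The one-character bug described in PR 20468 above is worth spelling out: square brackets build a one-element list such as `[False]`, and any non-empty list is truthy in Python, so the buggy condition fires for every model class. A minimal, self-contained sketch (the model name is illustrative, not taken from the PR):

```python
model_class = "TFHubertModel"  # illustrative name containing neither substring

# Buggy form from the PR: the brackets create a one-element list such as
# [False], and any non-empty list is truthy, so this is always True.
buggy = bool(["SequenceClassification" in model_class or "AudioClassification" in model_class])

# Fixed form: the parentheses merely group the boolean expression.
fixed = "SequenceClassification" in model_class or "AudioClassification" in model_class

print(buggy)  # True  (wrong: would trigger for every model class)
print(fixed)  # False (correct for this model name)
```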
https://api.github.com/repos/huggingface/transformers/issues/20467
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20467/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20467/comments
https://api.github.com/repos/huggingface/transformers/issues/20467/events
https://github.com/huggingface/transformers/pull/20467
1,466,057,421
PR_kwDOCUB6oc5DyIhN
20,467
Fix device issues in CLIPSeg tests
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? Just add `.to(torch_device)` in a few places
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20467/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20467", "html_url": "https://github.com/huggingface/transformers/pull/20467", "diff_url": "https://github.com/huggingface/transformers/pull/20467.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20467.patch", "merged_at": 1669628489000 }
https://api.github.com/repos/huggingface/transformers/issues/20466
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20466/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20466/comments
https://api.github.com/repos/huggingface/transformers/issues/20466/events
https://github.com/huggingface/transformers/pull/20466
1,465,477,827
PR_kwDOCUB6oc5DwPvG
20,466
RAG README & pl version updated
{ "login": "kaiiwoo", "id": 80252411, "node_id": "MDQ6VXNlcjgwMjUyNDEx", "avatar_url": "https://avatars.githubusercontent.com/u/80252411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kaiiwoo", "html_url": "https://github.com/kaiiwoo", "followers_url": "https://api.github.com/users/kaiiwoo/followers", "following_url": "https://api.github.com/users/kaiiwoo/following{/other_user}", "gists_url": "https://api.github.com/users/kaiiwoo/gists{/gist_id}", "starred_url": "https://api.github.com/users/kaiiwoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kaiiwoo/subscriptions", "organizations_url": "https://api.github.com/users/kaiiwoo/orgs", "repos_url": "https://api.github.com/users/kaiiwoo/repos", "events_url": "https://api.github.com/users/kaiiwoo/events{/privacy}", "received_events_url": "https://api.github.com/users/kaiiwoo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20466). All of your documentation changes will be reflected on that endpoint.", "> Thanks for your PR, but we don't maintain this example, nor do we want to update it to more recent versions of any libraries it uses\r\n\r\nHi, thanks for the reply. This PR is a late response to the https://github.com/huggingface/transformers/issues/18704#issue-1345102153 issue. Nevertheless, if this PR can't be accepted now, can you tell me the reasons for not maintaining RAG?", "The RAG model is maintained, it's just this example which is not. It clearly says in the README it should be run with PyTorch Lightning 1.3.1. If you want to run it with a more recent version you need to adapt the script, probably as you did, but I'm not changing what the original authors have done in this research project.\r\n\r\nMaintained examples are in the pytorch/tensorflow/flax subfolders of the examples folder.", "Agree with @sgugger actually (sorry for approving it)\r\n\r\nBut, ok for me to also update the example since fine-tuning RAG seems to be used quite a bit", "@patrickvonplaten \r\n\r\nActually, with the latest PL versions, we can't use **DDPPlugin**. So my suggestion is to move with ray_distributed_retriever only. If we update the code with the current changes, it will fail.\r\n\r\nI've added a comment on this in my last PR.\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/lightning_base.py#L269\r\n\r\nSo it would be better to come up with an if condition, since ray_distributed_retriever's init function doesn't take any parameters.\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/distributed_ray_retriever.py#L87\r\n\r\n\r\n@kaiiwoo ", "@shamanez Why not host an updated version of the example on your repo then link to it from here and our community page in the doc?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> The current code for fine tuning examples/research_projects/rag' can't be executed without passing `--distributed_retriever ray` parameter. In addition, `pytorch-lightning==1.5.10` should be recommended to prevent other miscellaneous error. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @shamanez Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20466/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20466", "html_url": "https://github.com/huggingface/transformers/pull/20466", "diff_url": "https://github.com/huggingface/transformers/pull/20466.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20466.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20465
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20465/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20465/comments
https://api.github.com/repos/huggingface/transformers/issues/20465/events
https://github.com/huggingface/transformers/pull/20465
1,465,347,360
PR_kwDOCUB6oc5Dv2l7
20,465
[Fix a small bug] Misleading use of variable names
{ "login": "anonNo2", "id": 20515543, "node_id": "MDQ6VXNlcjIwNTE1NTQz", "avatar_url": "https://avatars.githubusercontent.com/u/20515543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anonNo2", "html_url": "https://github.com/anonNo2", "followers_url": "https://api.github.com/users/anonNo2/followers", "following_url": "https://api.github.com/users/anonNo2/following{/other_user}", "gists_url": "https://api.github.com/users/anonNo2/gists{/gist_id}", "starred_url": "https://api.github.com/users/anonNo2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anonNo2/subscriptions", "organizations_url": "https://api.github.com/users/anonNo2/orgs", "repos_url": "https://api.github.com/users/anonNo2/repos", "events_url": "https://api.github.com/users/anonNo2/events{/privacy}", "received_events_url": "https://api.github.com/users/anonNo2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20465). All of your documentation changes will be reflected on that endpoint.", "We do not maintain those examples, they are given as is :-) You can try pinging the original author to see if they agree with the change however.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) It's just a small problem, but it may cause misunderstandings for subsequent developers. The author reversed the positions of pred and hypo. This pull request fixes this. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20465/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20465", "html_url": "https://github.com/huggingface/transformers/pull/20465", "diff_url": "https://github.com/huggingface/transformers/pull/20465.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20465.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20464
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20464/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20464/comments
https://api.github.com/repos/huggingface/transformers/issues/20464/events
https://github.com/huggingface/transformers/pull/20464
1,465,334,446
PR_kwDOCUB6oc5Dv0Hx
20,464
[fix bug] Although this bug will not have any impact on rag here (bec…
{ "login": "anonNo2", "id": 20515543, "node_id": "MDQ6VXNlcjIwNTE1NTQz", "avatar_url": "https://avatars.githubusercontent.com/u/20515543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anonNo2", "html_url": "https://github.com/anonNo2", "followers_url": "https://api.github.com/users/anonNo2/followers", "following_url": "https://api.github.com/users/anonNo2/following{/other_user}", "gists_url": "https://api.github.com/users/anonNo2/gists{/gist_id}", "starred_url": "https://api.github.com/users/anonNo2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anonNo2/subscriptions", "organizations_url": "https://api.github.com/users/anonNo2/orgs", "repos_url": "https://api.github.com/users/anonNo2/repos", "events_url": "https://api.github.com/users/anonNo2/events{/privacy}", "received_events_url": "https://api.github.com/users/anonNo2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
NONE
null
…ause em is used as the evaluation metric), if you add bleu or rouge here, this bug will have an incorrect impact # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) There is a small bug in the rag implementation, the two parameters are reversed.(pred and hypo) This error will not affect the experimental results here, but if you increase the evaluation metrics such as bleu or rouge, it will affect the results. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20464/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20464", "html_url": "https://github.com/huggingface/transformers/pull/20464", "diff_url": "https://github.com/huggingface/transformers/pull/20464.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20464.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20463
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20463/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20463/comments
https://api.github.com/repos/huggingface/transformers/issues/20463/events
https://github.com/huggingface/transformers/pull/20463
1,465,114,387
PR_kwDOCUB6oc5DvKxH
20,463
Replace assertions with value errors on distilbert model
{ "login": "JuheonChu", "id": 35699839, "node_id": "MDQ6VXNlcjM1Njk5ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JuheonChu", "html_url": "https://github.com/JuheonChu", "followers_url": "https://api.github.com/users/JuheonChu/followers", "following_url": "https://api.github.com/users/JuheonChu/following{/other_user}", "gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions", "organizations_url": "https://api.github.com/users/JuheonChu/orgs", "repos_url": "https://api.github.com/users/JuheonChu/repos", "events_url": "https://api.github.com/users/JuheonChu/events{/privacy}", "received_events_url": "https://api.github.com/users/JuheonChu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you so much!\r\n", "Thank you so much for your guidance and help!" ]
1,669
1,670
1,669
CONTRIBUTOR
null
# Replace assertions with ValueErrors on distilbert model This PR is made to check if the valid checks made from #20433 pass for all cases. Co-author: @batese2001 To: @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20463/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20463", "html_url": "https://github.com/huggingface/transformers/pull/20463", "diff_url": "https://github.com/huggingface/transformers/pull/20463.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20463.patch", "merged_at": 1669646644000 }
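The pattern behind PR 20463 above, replacing `assert` statements with raised `ValueError`s, can be sketched dependency-free. The function name and check below are hypothetical stand-ins, not the PR's actual diff: `assert` statements are stripped when Python runs with `-O` and raise a bare `AssertionError`, while an explicit `raise ValueError(...)` always fires and carries a readable message.

```python
def check_head_count(heads, n_heads):
    """Hypothetical validation helper illustrating the assert -> ValueError pattern."""
    # Before: assert len(heads) == n_heads  (silently skipped under `python -O`)
    # After: an explicit, always-on check with a descriptive message.
    if len(heads) != n_heads:
        raise ValueError(f"Expected {n_heads} heads, got {len(heads)}")


try:
    check_head_count([0, 1], n_heads=4)
except ValueError as err:
    print(err)  # Expected 4 heads, got 2
```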
https://api.github.com/repos/huggingface/transformers/issues/20462
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20462/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20462/comments
https://api.github.com/repos/huggingface/transformers/issues/20462/events
https://github.com/huggingface/transformers/pull/20462
1,465,110,750
PR_kwDOCUB6oc5DvKEh
20,462
Replace assertions with value errors on distilbert model #20433
{ "login": "JuheonChu", "id": 35699839, "node_id": "MDQ6VXNlcjM1Njk5ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JuheonChu", "html_url": "https://github.com/JuheonChu", "followers_url": "https://api.github.com/users/JuheonChu/followers", "following_url": "https://api.github.com/users/JuheonChu/following{/other_user}", "gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions", "organizations_url": "https://api.github.com/users/JuheonChu/orgs", "repos_url": "https://api.github.com/users/JuheonChu/repos", "events_url": "https://api.github.com/users/JuheonChu/events{/privacy}", "received_events_url": "https://api.github.com/users/JuheonChu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is a demonstrative PR to see if #20433 errors are resolved or not.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20462). All of your documentation changes will be reflected on that endpoint." ]
1,669
1,669
1,669
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20462/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20462", "html_url": "https://github.com/huggingface/transformers/pull/20462", "diff_url": "https://github.com/huggingface/transformers/pull/20462.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20462.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20461
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20461/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20461/comments
https://api.github.com/repos/huggingface/transformers/issues/20461/events
https://github.com/huggingface/transformers/pull/20461
1,465,089,703
PR_kwDOCUB6oc5DvGIo
20,461
FixAuxiliaryLossForDeformableDetr
{ "login": "long8v", "id": 46675408, "node_id": "MDQ6VXNlcjQ2Njc1NDA4", "avatar_url": "https://avatars.githubusercontent.com/u/46675408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/long8v", "html_url": "https://github.com/long8v", "followers_url": "https://api.github.com/users/long8v/followers", "following_url": "https://api.github.com/users/long8v/following{/other_user}", "gists_url": "https://api.github.com/users/long8v/gists{/gist_id}", "starred_url": "https://api.github.com/users/long8v/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/long8v/subscriptions", "organizations_url": "https://api.github.com/users/long8v/orgs", "repos_url": "https://api.github.com/users/long8v/repos", "events_url": "https://api.github.com/users/long8v/events{/privacy}", "received_events_url": "https://api.github.com/users/long8v/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge Ping on this PR.", "Hi @long8v would you be able to run `make fixup` from the root of the repo, and potentially rebase on the main branch to make the CI green?\r\n\r\nThanks!", "I reuploaded this PR [here](https://github.com/huggingface/transformers/pull/20959)!" ]
1,669
1,672
1,672
CONTRIBUTOR
null
# What does this PR do? DeformableDetr does not work when auxiliary_loss=True. Since Deformable Detr has list of class_embed, bbox_embed, this code will raise NotImplementedError. ```python intermediate = outputs.intermediate_hidden_states if return_dict else outputs[4] outputs_class = self.class_embed(intermediate) outputs_coord = self.bbox_embed(intermediate).sigmoid() ``` ```python outputs_class = self.class_embed(intermediate) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented raise NotImplementedError NotImplementedError ``` To fix this, we can simply use predefined `outputs_class` and `outputs_coord` in this [line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L1943-L1944). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @NielsRogge <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20461/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20461", "html_url": "https://github.com/huggingface/transformers/pull/20461", "diff_url": "https://github.com/huggingface/transformers/pull/20461.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20461.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20460
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20460/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20460/comments
https://api.github.com/repos/huggingface/transformers/issues/20460/events
https://github.com/huggingface/transformers/pull/20460
1,465,087,496
PR_kwDOCUB6oc5DvFuV
20,460
FixValidRatioForDeformableDetr
{ "login": "long8v", "id": 46675408, "node_id": "MDQ6VXNlcjQ2Njc1NDA4", "avatar_url": "https://avatars.githubusercontent.com/u/46675408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/long8v", "html_url": "https://github.com/long8v", "followers_url": "https://api.github.com/users/long8v/followers", "following_url": "https://api.github.com/users/long8v/following{/other_user}", "gists_url": "https://api.github.com/users/long8v/gists{/gist_id}", "starred_url": "https://api.github.com/users/long8v/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/long8v/subscriptions", "organizations_url": "https://api.github.com/users/long8v/orgs", "repos_url": "https://api.github.com/users/long8v/repos", "events_url": "https://api.github.com/users/long8v/events{/privacy}", "received_events_url": "https://api.github.com/users/long8v/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I don't think it is related with my PR. As you can see, my commit is about forward result in batch inference, and has nothing to do with repo consistency, build test, and so on.", "Hi, \r\n\r\nFor repo consistency, you need to run `make fixup `from the root of the repo to fix style and quality of the code.", "Hi, for repo consistency I ran `make fixup` and it raised same error with circleci, `No module named 'keras.saving.hdf5_format`. I found https://github.com/huggingface/transformers/issues/20393, so I downgraded tf==2.10 in local and it seems passed everything.\r\n\r\n```\r\nAll done! ✨ 🍰 ✨\r\n1 file left unchanged.\r\npython utils/custom_init_isort.py\r\npython utils/sort_auto_mappings.py\r\ndoc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source\r\npython utils/check_doc_toc.py --fix_and_overwrite\r\nrunning deps_table_update\r\nupdating src/transformers/dependency_versions_table.py\r\npython utils/check_copies.py\r\npython utils/check_table.py\r\npython utils/check_dummies.py\r\npython utils/check_repo.py\r\nChecking all models are included.\r\nChecking all models are public.\r\n2022-12-04 08:42:15.140190: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2022-12-04 08:42:15.963675: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib:/usr/local/lib:\r\n2022-12-04 08:42:15.963836: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: 
/usr/local/lib:/usr/local/lib:\r\n2022-12-04 08:42:15.963859: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\nChecking all models are properly tested.\r\nChecking all objects are properly documented.\r\nChecking all models are in at least one auto class.\r\npython utils/check_inits.py\r\npython utils/check_config_docstrings.py\r\npython utils/tests_fetcher.py --sanity_check\r\npython utils/update_metadata.py --check-only\r\n2022-12-04 08:42:26.671899: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2022-12-04 08:42:26.916721: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2022-12-04 08:42:27.809325: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib:/usr/local/lib:\r\n2022-12-04 08:42:27.809472: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib:/usr/local/lib:\r\n2022-12-04 08:42:27.809493: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. 
If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n```\r\n\r\nand for others, I also tested all deformable detr related test codes(pytest two py files in `tests/models/deformable_detr`), and it passed everything.\r\nCould you tell me what I should do else?", "@long8v you'll need to rebase on the main branch to fix this issue:\r\n```\r\ngit remote add upstream https://github.com/huggingface/transformers.git\r\ngit fetch upstream\r\ngit rebase upstream/main\r\n```", "I reuploaded this PR [here](https://github.com/huggingface/transformers/pull/20958)!" ]
1,669
1,672
1,672
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20460/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20460", "html_url": "https://github.com/huggingface/transformers/pull/20460", "diff_url": "https://github.com/huggingface/transformers/pull/20460.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20460.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20459
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20459/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20459/comments
https://api.github.com/repos/huggingface/transformers/issues/20459/events
https://github.com/huggingface/transformers/pull/20459
1,464,971,180
PR_kwDOCUB6oc5Duuq6
20,459
Efficientformer
{ "login": "Bearnardd", "id": 43574448, "node_id": "MDQ6VXNlcjQzNTc0NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/43574448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bearnardd", "html_url": "https://github.com/Bearnardd", "followers_url": "https://api.github.com/users/Bearnardd/followers", "following_url": "https://api.github.com/users/Bearnardd/following{/other_user}", "gists_url": "https://api.github.com/users/Bearnardd/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bearnardd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bearnardd/subscriptions", "organizations_url": "https://api.github.com/users/Bearnardd/orgs", "repos_url": "https://api.github.com/users/Bearnardd/repos", "events_url": "https://api.github.com/users/Bearnardd/events{/privacy}", "received_events_url": "https://api.github.com/users/Bearnardd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @Bearnardd, thank you for working on this! Could you run `make fixup` to fix the failed style and code quality tests?\r\n\r\nAlso, type casting function arguments (e.g. `def something(arg1: torch.Tensor):`) causes errors if the type depends on a conditionally imported library (torch), you can see the failed test logs if you head over to the CI test details. Could you remove those from `test_modeling_efficientformer.py`?", "Hi @alaradirik - thank you very much for the detailed review! I will address the changes shortly :) . I am aware of the failing tests but I am not entirely sure how to count the number of expected attentions and hidden layers for this particular model since it does not have a \"standard\" transformer based architecture. Nevertheless I think that I will address the current comments and as the next step I will ask you some questions about expected attention and hidden outputs.", "> Hi @alaradirik - thank you very much for the detailed review! I will address the changes shortly :) . I am aware of the failing tests but I am not entirely sure how to count the number of expected attentions and hidden layers for this particular model since it does not have a \"standard\" transformer based architecture. Nevertheless I think that I will address the current comments and as the next step I will ask you some questions about expected attention and hidden outputs.\r\n\r\nHey @Bearnardd, no problem at all! We define our own small model architecture within the `test_modeling_efficientformer.py` file as it is faster to test with a smaller dummy model. You would just need to check `num_hidden_layers` and `num_attention_heads` attributes of the test class to see the expected number of layers. It seems the model has the correct number of attention heads and hidden layers but doesn't return all of the outputs (attentions and hidden state outputs from all layers). 
\r\n\r\nIf you are sure the implementation is correct and this is expected (in this case or other cases), you can always override the common tests within `test_modeling_efficientformer.py` by adding a method with the same name to the test class (`EfficientFormerModelTester`).\r\n", "_The documentation is not available anymore as the PR was closed or merged._", "Hi @NielsRogge @sgugger - could you take a look at the changes?", "@NielsRogge I have applied the changes. Would you mind to do the review?", "@sgugger thanks good catch with the resolved one. Actually I have checked that and `self.ab` is used in the `EfficientFormerSelfAttention` forward method so in fact it is needed. Moreover the original `EfficientFormer` code is based on the `levit` model and in the levit code there is a similar method.", "Model is on the hub under the following [path](https://huggingface.co/Bearnardd/efficientformer-l1-300).", "> Thanks for the explanation on the train method. Could you just make sure that caching this tensor this way does not add a key to the state dict of the model? In LeViT, the cache is a dictionary, not a tensor, so there is no problem.\r\n\r\n@sgugger I have checked that and it does not add a key.", "Thanks @Bearnardd !\r\n@NielsRogge I'll let you have one last look and merge if you're happy :-)", "Thank you so much for working on this @Bearnardd!" ]
1,669
1,674
1,674
CONTRIBUTOR
null
This PR adds Efficientformer, a model that has similar latency as MobileNets, but achieves better accuracy on ImageNet. It is based on the closed PR: https://github.com/huggingface/transformers/pull/18296 Paper: https://arxiv.org/abs/2206.01191 Code and weights: https://github.com/snap-research/EfficientFormer Fixes https://github.com/huggingface/transformers/issues/18041 ## Who can review? @alaradirik @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20459/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20459", "html_url": "https://github.com/huggingface/transformers/pull/20459", "diff_url": "https://github.com/huggingface/transformers/pull/20459.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20459.patch", "merged_at": 1674203743000 }
https://api.github.com/repos/huggingface/transformers/issues/20458
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20458/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20458/comments
https://api.github.com/repos/huggingface/transformers/issues/20458/events
https://github.com/huggingface/transformers/pull/20458
1,464,850,702
PR_kwDOCUB6oc5DuYI6
20,458
[CLIPTokenizer] Improve warning
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I feel it's a bit better with a comma or even a period. \r\n\r\n```\r\n\"ftfy or spacy is not installed, using custom BasicTokenizer instead of ftfy.\"\r\n```\r\n\r\nI am super motivated to open a PR if you even allow me to just add a comma 😃 " ]
1,669
1,669
1,669
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> As can be seen in this thread, the warning is a bit confusing for libraries built on top of `transformers`: https://github.com/huggingface/diffusers/issues/1388#issuecomment-1327760610 Could we maybe downgrade it to an "info" statement and remove the mention of BERT? ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20458/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20458/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20458", "html_url": "https://github.com/huggingface/transformers/pull/20458", "diff_url": "https://github.com/huggingface/transformers/pull/20458.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20458.patch", "merged_at": 1669645214000 }
https://api.github.com/repos/huggingface/transformers/issues/20457
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20457/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20457/comments
https://api.github.com/repos/huggingface/transformers/issues/20457/events
https://github.com/huggingface/transformers/issues/20457
1,464,846,428
I_kwDOCUB6oc5XT8hc
20,457
No module named 'keras.saving.hdf5_format'
{ "login": "cvinker", "id": 13070943, "node_id": "MDQ6VXNlcjEzMDcwOTQz", "avatar_url": "https://avatars.githubusercontent.com/u/13070943?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cvinker", "html_url": "https://github.com/cvinker", "followers_url": "https://api.github.com/users/cvinker/followers", "following_url": "https://api.github.com/users/cvinker/following{/other_user}", "gists_url": "https://api.github.com/users/cvinker/gists{/gist_id}", "starred_url": "https://api.github.com/users/cvinker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cvinker/subscriptions", "organizations_url": "https://api.github.com/users/cvinker/orgs", "repos_url": "https://api.github.com/users/cvinker/repos", "events_url": "https://api.github.com/users/cvinker/events{/privacy}", "received_events_url": "https://api.github.com/users/cvinker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ah, I used `pip install git+https://github.com/huggingface/transformers` works now." ]
1,669
1,669
1,669
NONE
null
I am running a virtual instance of Ubuntu 22.04LTS on Google Cloud. I have followed these instructions:[https://huggingface.co/docs/transformers/installation](url) I am running solely CPU so I followed the instructions for that. Independently installed tensorflow, flax, and pytorch without error. After doing this and using the test command: `python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" ` I get the following error: ``` (.env) colinvink2002@instance-1:~$ python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" 2022-11-25 18:45:40.340282: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-11-25 18:45:40.556836: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib/mesa-diverted/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu/mesa:/usr/lib/x86_64-linux-gnu/dri:/usr/lib/x86_64-linux-gnu/gallium-pipe 2022-11-25 18:45:40.556907: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 
2022-11-25 18:45:41.795427: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib/mesa-diverted/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu/mesa:/usr/lib/x86_64-linux-gnu/dri:/usr/lib/x86_64-linux-gnu/gallium-pipe 2022-11-25 18:45:41.795623: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib/mesa-diverted/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu/mesa:/usr/lib/x86_64-linux-gnu/dri:/usr/lib/x86_64-linux-gnu/gallium-pipe 2022-11-25 18:45:41.795646: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english). Using a pipeline without specifying a model name and revision in production is not recommended. Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 629/629 [00:00<00:00, 298kB/s] Traceback (most recent call last): File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1076, in _get_module return importlib.import_module("." 
+ module_name, self.__name__) File "/home/colinvink2002/anaconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py", line 34, in <module> from ...modeling_tf_utils import ( File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 39, in <module> from keras.saving.hdf5_format import save_attributes_to_hdf5_group ModuleNotFoundError: No module named 'keras.saving.hdf5_format' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 727, in pipeline framework, model = infer_framework_load_model( File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 233, in infer_framework_load_model _class = getattr(transformers_module, f"TF{architecture}", None) File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1067, in __getattr__ value = getattr(module, name) File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1066, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1078, in _get_module raise 
RuntimeError( RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback): No module named 'keras.saving.hdf5_format' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20457/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20456
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20456/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20456/comments
https://api.github.com/repos/huggingface/transformers/issues/20456/events
https://github.com/huggingface/transformers/pull/20456
1,464,831,208
PR_kwDOCUB6oc5DuUDU
20,456
Fix typo in FSMT Tokenizer
{ "login": "kamalkraj", "id": 17096858, "node_id": "MDQ6VXNlcjE3MDk2ODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamalkraj", "html_url": "https://github.com/kamalkraj", "followers_url": "https://api.github.com/users/kamalkraj/followers", "following_url": "https://api.github.com/users/kamalkraj/following{/other_user}", "gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions", "organizations_url": "https://api.github.com/users/kamalkraj/orgs", "repos_url": "https://api.github.com/users/kamalkraj/repos", "events_url": "https://api.github.com/users/kamalkraj/events{/privacy}", "received_events_url": "https://api.github.com/users/kamalkraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20456/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20456/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20456", "html_url": "https://github.com/huggingface/transformers/pull/20456", "diff_url": "https://github.com/huggingface/transformers/pull/20456.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20456.patch", "merged_at": 1669421041000 }
https://api.github.com/repos/huggingface/transformers/issues/20455
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20455/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20455/comments
https://api.github.com/repos/huggingface/transformers/issues/20455/events
https://github.com/huggingface/transformers/pull/20455
1,464,771,667
PR_kwDOCUB6oc5DuHmY
20,455
Fix links of `contrastive_loss`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? We copied/pasted/replaced, but here we should not replace CLIP by the new model name.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20455/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20455", "html_url": "https://github.com/huggingface/transformers/pull/20455", "diff_url": "https://github.com/huggingface/transformers/pull/20455.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20455.patch", "merged_at": 1669629779000 }
https://api.github.com/repos/huggingface/transformers/issues/20454
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20454/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20454/comments
https://api.github.com/repos/huggingface/transformers/issues/20454/events
https://github.com/huggingface/transformers/pull/20454
1,464,737,247
PR_kwDOCUB6oc5DuARi
20,454
[trainer] apex test fix
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
there was a small mistake in a test of https://github.com/huggingface/transformers/pull/18961, this PR fixes it. the main CI doesn't have apex installed that's why it missed it. Thank you, @ydshieh for the heads up about the breakage
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20454/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20454", "html_url": "https://github.com/huggingface/transformers/pull/20454", "diff_url": "https://github.com/huggingface/transformers/pull/20454.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20454.patch", "merged_at": 1669395731000 }
https://api.github.com/repos/huggingface/transformers/issues/20453
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20453/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20453/comments
https://api.github.com/repos/huggingface/transformers/issues/20453/events
https://github.com/huggingface/transformers/pull/20453
1,464,735,684
PR_kwDOCUB6oc5Dt_8x
20,453
[Vision] Support different floating precision inputs from the `ImageProcessor`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20453). All of your documentation changes will be reflected on that endpoint.", "I think this PR is ready, at least as a PoC! \r\nTo make the PR complete, for now the arg `float_precision` needs to be manually added for each image processor. Before moving forward and start doing it for all image processors and adding tests, I would love to hear from @sgugger, @amyeroberts & @ydshieh to see if this is the approach we would like to follow!\r\nThanks again!", "Thanks so much everyone for your comments!\r\nAfter thinking a bit and trying to see if this could be useful for `flax` \r\n```\r\nimport jax.numpy as jnp\r\nfrom transformers import FlaxViTForImageClassification, ViTFeatureExtractor\r\n\r\nfrom PIL import Image\r\nimport requests\r\n\r\nurl = 'http://images.cocodataset.org/val2017/000000039769.jpg'\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\nmodel = FlaxViTForImageClassification.from_pretrained(\"google/vit-base-patch16-224\", dtype=jnp.float16)\r\nfeature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')\r\n\r\ninputs = feature_extractor(images=image, return_tensors=\"np\")\r\noutputs = model(**inputs)\r\nprint(outputs)\r\n```\r\nit seems that `flax` can deal properly with different `dtype`, without having to explicitly cast the input. I think that a good point has been raised by @sgugger, however it could be useful if it is needed on `tf` side. If not, happy to change the PR to something that modifies only the `.to` function as this will be intended only for PyTorch. \r\n", "I don't have strong opinion though. So you can follow what @sgugger suggests. If we find it's useful for other frameworks, we can add them back.", "Thanks everyone!\r\nLet's keep this PR open in case we figure out this is needed for `tf`. I have opened a PR in #20536 for supporting dtypes in `.to`", "> @gante @Rocketknight1 - how useful would this be in TF land?\r\n\r\nI don't think our TF models are compatible with half-precision, right @Rocketknight1? At least I haven't used TF with half-precision :D ", "Extremely late reply on the TF front, but yeah, we aren't really running TF models in half precision right now. We do support mixed precision (similar to Torch AMP), but we don't officially support splatting the whole model to (b)float16 yet.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? This PR introduces the input casting mechanism for image processors. Since the introduction of `accelerate` supported models for Vision, I have been playing around with half-precision models. I found it a bit inintuitive to manually cast the `pixel_values` outside the `ImageProcessor` class. Therefore for some models, [small hacks have been introduced](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_vit.py#L571-L574) to make the casting operation more user-friendly. With this PR, it will be possible to cast the input tensors to any floating point precision, for any framework, at the`ImageProcessor` level as follows: ``` from transformers import ViTFeatureExtractor from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384') inputs = feature_extractor(images=image, return_tensors="np", float_precision="float16") print(inputs.pixel_values.dtype) >>> float16 ``` The casting discards non-floating point tensors, therefore these tensors should not be affected by the casting mechanism (thinking for eg for `ViLT` that takes both text + image) With this PR, the hacks introduced on ViT and OWLViT will be removed! cc @amyeroberts @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20453/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20453/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20453", "html_url": "https://github.com/huggingface/transformers/pull/20453", "diff_url": "https://github.com/huggingface/transformers/pull/20453.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20453.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20452
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20452/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20452/comments
https://api.github.com/repos/huggingface/transformers/issues/20452/events
https://github.com/huggingface/transformers/issues/20452
1,464,563,410
I_kwDOCUB6oc5XS3bS
20,452
Documentation missing is_decoder
{ "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoik/followers", "following_url": "https://api.github.com/users/cronoik/following{/other_user}", "gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}", "starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cronoik/subscriptions", "organizations_url": "https://api.github.com/users/cronoik/orgs", "repos_url": "https://api.github.com/users/cronoik/repos", "events_url": "https://api.github.com/users/cronoik/events{/privacy}", "received_events_url": "https://api.github.com/users/cronoik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for flagging. Would you like to open a PR with a fix?" ]
1,669
1,670
1,670
CONTRIBUTOR
null
### System Info Not relevant. Issue is about online documentation. ### Who can help? @sgugger @stevhliu ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The bert documentation [page](https://huggingface.co/docs/transformers/model_doc/bert) mentions `is_decoder` 5 times but the BertConfig [class documentation](https://huggingface.co/docs/transformers/model_doc/bert) not a single time. This probably affects other models as well. ### Expected behavior BertConfig class documentation should contain an entry for `is_decoder`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20452/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20451
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20451/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20451/comments
https://api.github.com/repos/huggingface/transformers/issues/20451/events
https://github.com/huggingface/transformers/issues/20451
1,464,555,501
I_kwDOCUB6oc5XS1ft
20,451
Wav2Vec2 adapter layer being ignored at random
{ "login": "OllieBroadhurst", "id": 46894149, "node_id": "MDQ6VXNlcjQ2ODk0MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/46894149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OllieBroadhurst", "html_url": "https://github.com/OllieBroadhurst", "followers_url": "https://api.github.com/users/OllieBroadhurst/followers", "following_url": "https://api.github.com/users/OllieBroadhurst/following{/other_user}", "gists_url": "https://api.github.com/users/OllieBroadhurst/gists{/gist_id}", "starred_url": "https://api.github.com/users/OllieBroadhurst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OllieBroadhurst/subscriptions", "organizations_url": "https://api.github.com/users/OllieBroadhurst/orgs", "repos_url": "https://api.github.com/users/OllieBroadhurst/repos", "events_url": "https://api.github.com/users/OllieBroadhurst/events{/privacy}", "received_events_url": "https://api.github.com/users/OllieBroadhurst/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, I think it's because `Wav2Vec2AdapterLayer` layers got dropped out here.\r\n\r\nhttps://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1006-L1009\r\n\r\nYou can pass `layerdrop=0` to `from_pretrained()` to deactivate it.\r\n\r\ncc @patrickvonplaten, I understand the use of layerdrop in transformers structure, but why should we also have it in CNNs (Wav2Vec2AdapterLayer) ?", "> Hi, I think it's because `Wav2Vec2AdapterLayer` layers got dropped out here.\r\n> \r\n> https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1006-L1009\r\n> \r\n> You can pass `layerdrop=0` to `from_pretrained()` to deactivate it.\r\n> \r\n> cc @patrickvonplaten, I understand the use of layerdrop in transformers structure, but why should we also have it in CNNs (Wav2Vec2AdapterLayer) ?\r\n\r\nWow, completely missed that. Thank you!", "Hey @OllieBroadhurst! Are you using the adapter layer to fine-tune the Wav2Vec2 model standalone? The adapter layer works best when combining the Wav2Vec2 model in a sequence-to-sequence combination (see https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#warm-started-speech-encoder-decoder-model)", "> Hey @OllieBroadhurst! Are you using the adapter layer to fine-tune the Wav2Vec2 model standalone? The adapter layer works best when combining the Wav2Vec2 model in a sequence-to-sequence combination (see https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#warm-started-speech-encoder-decoder-model)\r\n\r\nHi @sanchit-gandhi!\r\n\r\nIt actually is for an encoder-decoder model. The reason it caused an issue is that I'm passing the encoder outputs to `inputs_embeds` instead of `encoder_hidden_states` and the positional embedding dim was smaller than the encoder output dim whenever the adapter layer was skipped. So definitely an edge case :)" ]
1,669
1,670
1,669
CONTRIBUTOR
null
### System Info Hi there! During training, the hidden states of the base Wav2Vec2 model seem to be randomly skipping the adapter layer [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1322). Just repeatedly running a forward pass with the same inputs will, on occasion, produce different output sequence lengths. When this happens I've logged the shapes before the adapter layer is applied as well as after, and they are the same, indicating that the layer is being skipped completely. ### Who can help? Pinging @patrickvonplaten and @anton-l, please tell me I'm going crazy. ### Information - [X] The official example scripts - [ ] My own modified scripts I'm using the current Colab environment. ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import Wav2Vec2Model model = Wav2Vec2Model.from_pretrained("anton-l/wav2vec2-base-lang-id", add_adapter=True, adapter_stride=2, adapter_kernel_size=3, num_adapter_layers=2) model.train() # NB dummy_input = torch.randn((1, 16000)) expected_output_sequence_length = 13 for _ in range(200): output_shape = model(input_values=dummy_input)[0].shape[1] if output_shape != expected_output_sequence_length: print(output_shape) ### Expected behavior The above loop shouldn't print anything out.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20451/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20450
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20450/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20450/comments
https://api.github.com/repos/huggingface/transformers/issues/20450/events
https://github.com/huggingface/transformers/pull/20450
1,464,501,925
PR_kwDOCUB6oc5DtOLJ
20,450
fix `word_to_tokens` docstring format
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes a format issue in the `word_to_tokens` docstring which prevented the method outputs from being displayed on the documentation site. I also added an example for the None value returned. Fix https://github.com/huggingface/transformers/issues/20449 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20450/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20450/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20450", "html_url": "https://github.com/huggingface/transformers/pull/20450", "diff_url": "https://github.com/huggingface/transformers/pull/20450.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20450.patch", "merged_at": 1669404481000 }
https://api.github.com/repos/huggingface/transformers/issues/20449
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20449/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20449/comments
https://api.github.com/repos/huggingface/transformers/issues/20449/events
https://github.com/huggingface/transformers/issues/20449
1,464,474,725
I_kwDOCUB6oc5XShxl
20,449
Encoding.word_to_tokens() returns None within valid sequence
{ "login": "carschno", "id": 4696228, "node_id": "MDQ6VXNlcjQ2OTYyMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/4696228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/carschno", "html_url": "https://github.com/carschno", "followers_url": "https://api.github.com/users/carschno/followers", "following_url": "https://api.github.com/users/carschno/following{/other_user}", "gists_url": "https://api.github.com/users/carschno/gists{/gist_id}", "starred_url": "https://api.github.com/users/carschno/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/carschno/subscriptions", "organizations_url": "https://api.github.com/users/carschno/orgs", "repos_url": "https://api.github.com/users/carschno/repos", "events_url": "https://api.github.com/users/carschno/events{/privacy}", "received_events_url": "https://api.github.com/users/carschno/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @carschno , \r\nThank you very much for bringing the problem to our attention! It is indeed information filled in the docstring of the method but there is a problem when rendering this documentation on our site. I'm trying to fix it in this PR #20450 .", "I see, it is a documentation issue. Thanks for looking into it!\r\n\r\nI still think it is counter-intuitive that there can be words without corresponding tokens. I can imagine some special cases, but perhaps it would be a good occasion to elaborate or exemplify those cases in the documentation a bit?", "> I still think it is counter-intuitive that there can be words without corresponding tokens. I can imagine some special cases, but perhaps it would be a good occasion to elaborate or exemplify those cases in the documentation a bit?\r\n\r\nThanks for your feedback! We can indeed give an example in the documentation. It is a case that occurs in particular when we ask the tokenizer to add special tokens to match a template. For example, if it is asked to add a class token at the beginning of the sentence, this class token does not correspond to anything in the initial raw sentence.", "Thanks again, that makes sense! However, the example case you describe does not really fit the case I have encountered. No template has been involved there.\r\n\r\nOriginally, I came across the issue in a long, OCR'd text with many special characters and erroneous tokens due to OCR errors. But in the example I used in my investigations (and pasted here), this is not the case either.\r\n", "> But in the example I used in my investigations (and pasted here), this is not the case either.\r\n\r\nI could be wrong but it seems to me that your example uses a template. We can see it by running the following code:\r\n```python\r\nprint(encoding.word_ids())\r\nprint(tokenizer.convert_ids_to_tokens(encoding.input_ids))\r\n```\r\nwhich gives:\r\n```bash\r\n[None, 0, 1, 2, 3, 4, 5, None]\r\n['<s>', 'Dit', 'Ġis', 'Ġeen', 'Ġgoede', 'Ġtekst', '.', '</s>']\r\n```\r\nHere we can see that the `None` correspond to the \"template\" tokens `'<s>'` and `'</s>'`.\r\n", "I suppose you are right. I had some doubts because in my aforementioned original text (long erroneous), this occurred somewhere in the middle of the text. I will try to reproduce, but I guess there might have been special tokens as well due to longer sequences of whitespace and/or punctuation." ]
1,669
1,669
1,669
NONE
null
### System Info - `transformers` version: 4.23.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes(?) - Using distributed or parallel set-up in script?: no ### Who can help? @SaulLu @sgugger @stevhliu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Tokenize a sentence -> `BatchEncoding` 2. Iterate over `word_ids` 3. Call `word_to_chars(word_index)` 4. `TypeError` is raised at arbitrary word index (see output below) ``` MODEL_NAME = "DTAI-KULeuven/robbertje-1-gb-non-shuffled" MODEL_MAX_LENGTH = 512 from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( MODEL_NAME, model_max_length=MODEL_MAX_LENGTH, truncation=True ) text = "Dit is een goede tekst." encoding = tokenizer(text) for word_index in range(len(encoding.word_ids())): if word_index is not None: print(word_index) char_span = encoding.word_to_chars(word_index) 0 1 2 3 4 5 6 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) tokenization_test.ipynb Cell 3 in <cell line: 1>() [2](vscode-notebook-cell:/tokenization_test.ipynb#W2sZmlsZQ%3D%3D?line=1) if word_index is not None: [3](vscode-notebook-cell:/tokenization_test.ipynb#W2sZmlsZQ%3D%3D?line=2) print(word_index) ----> [4](vscode-notebook-cell:/tokenization_test.ipynb#W2sZmlsZQ%3D%3D?line=3) char_span = encoding.word_to_chars(word_index) File ~/opt/anaconda3/envs/SoS/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:615, in BatchEncoding.word_to_chars(self, batch_or_word_index, word_index, sequence_index) 613 batch_index = 0 614 word_index = batch_or_word_index --> 615 return CharSpan(*(self._encodings[batch_index].word_to_chars(word_index, sequence_index))) TypeError: transformers.tokenization_utils_base.CharSpan() argument after * must be an iterable, not NoneType ``` The word index is valid: ``` encoding.word_ids()[word_index:word_index+10] [164, 165, 166, 166, 166, 166, 167, 168, 168, 168] ``` On further investigation, I have noticed that there is a work-around by validating there is a word-to-token mapping for the word index: ``` if word_index is not None and encoding.word_to_tokens(word_index) is not None: [...] ``` So the underlying issue seems to be that `word_to_tokens()` sometimes returns None, although it seems counter-intuitive that there are words in a text that do not have corresponding tokens. ### Expected behavior `BatchEncoding.word_to_tokens()` should not output `None`; or it should be [documented](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_to_tokens) why/if this can happen.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20449/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20448
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20448/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20448/comments
https://api.github.com/repos/huggingface/transformers/issues/20448/events
https://github.com/huggingface/transformers/issues/20448
1,464,387,199
I_kwDOCUB6oc5XSMZ_
20,448
Could you do `pip show huggingface_hub`?
{ "login": "wccccp", "id": 55964850, "node_id": "MDQ6VXNlcjU1OTY0ODUw", "avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wccccp", "html_url": "https://github.com/wccccp", "followers_url": "https://api.github.com/users/wccccp/followers", "following_url": "https://api.github.com/users/wccccp/following{/other_user}", "gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wccccp/subscriptions", "organizations_url": "https://api.github.com/users/wccccp/orgs", "repos_url": "https://api.github.com/users/wccccp/repos", "events_url": "https://api.github.com/users/wccccp/events{/privacy}", "received_events_url": "https://api.github.com/users/wccccp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue appears to have been opened in error. If this is the case @wccccp you can close this Issue Request using the buttons below 👇 ", "> This issue appears to have been opened in error. If this is the case @wccccp you can close this Issue Request using the buttons below 👇\r\n\r\nthank you" ]
1,669
1,669
1,669
NONE
null
Could you do `pip show huggingface_hub`? _Originally posted by @NielsRogge in https://github.com/huggingface/transformers/issues/20447#issuecomment-1327215502_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20448/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20447
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20447/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20447/comments
https://api.github.com/repos/huggingface/transformers/issues/20447/events
https://github.com/huggingface/transformers/issues/20447
1,464,259,473
I_kwDOCUB6oc5XRtOR
20,447
ImportError: cannot import name 'CommitOperationAdd' from 'huggingface_hub' (C:\Users\46213\anaconda3\lib\site-packages\huggingface_hub\__init__.py)
{ "login": "wccccp", "id": 55964850, "node_id": "MDQ6VXNlcjU1OTY0ODUw", "avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wccccp", "html_url": "https://github.com/wccccp", "followers_url": "https://api.github.com/users/wccccp/followers", "following_url": "https://api.github.com/users/wccccp/following{/other_user}", "gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wccccp/subscriptions", "organizations_url": "https://api.github.com/users/wccccp/orgs", "repos_url": "https://api.github.com/users/wccccp/repos", "events_url": "https://api.github.com/users/wccccp/events{/privacy}", "received_events_url": "https://api.github.com/users/wccccp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you do `pip show huggingface_hub`?", "> \r\n\r\n(base) C:\\Users\\46213>pip show huggingface_hub\r\nName: huggingface-hub\r\nVersion: 0.10.1\r\nSummary: Client library to download and publish models, datasets and other repos on the huggingface.co hub\r\nHome-page: https://github.com/huggingface/huggingface_hub\r\nAuthor: Hugging Face, Inc.\r\nAuthor-email: julien@huggingface.co\r\nLicense: Apache\r\nLocation: c:\\users\\46213\\anaconda3\\lib\\site-packages\r\nRequires: typing-extensions, requests, filelock, tqdm, packaging, pyyaml\r\nRequired-by: transformers, ltp, evaluate, datasets", "For anyone finding this now, if you installed `transformers` and/or `huggingface_hub` with pip, try re-installing with conda. That solved it for me.", "i.e. `conda install -c conda-forge transformers huggingface_hub`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### System Info windows python=3.9 transformers 4.23.1 pytorch 1.13.0 py3.9_cuda11.6_cudnn8_0 pytorch ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction >>> import transformers Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\46213\anaconda3\lib\site-packages\transformers\__init__.py", line 30, in <module> from . import dependency_versions_check File "C:\Users\46213\anaconda3\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module> from .utils.versions import require_version, require_version_core File "C:\Users\46213\anaconda3\lib\site-packages\transformers\utils\__init__.py", line 48, in <module> from .hub import ( File "C:\Users\46213\anaconda3\lib\site-packages\transformers\utils\hub.py", line 32, in <module> from huggingface_hub import ( ImportError: cannot import name 'CommitOperationAdd' from 'huggingface_hub' (C:\Users\46213\anaconda3\lib\site-packages\huggingface_hub\__init__.py) >>> ### Expected behavior please help me to solve the problem!!!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20447/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/20447/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20446
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20446/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20446/comments
https://api.github.com/repos/huggingface/transformers/issues/20446/events
https://github.com/huggingface/transformers/pull/20446
1,464,170,025
PR_kwDOCUB6oc5DsGus
20,446
Add AltCLIP
{ "login": "jongjyh", "id": 37979232, "node_id": "MDQ6VXNlcjM3OTc5MjMy", "avatar_url": "https://avatars.githubusercontent.com/u/37979232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jongjyh", "html_url": "https://github.com/jongjyh", "followers_url": "https://api.github.com/users/jongjyh/followers", "following_url": "https://api.github.com/users/jongjyh/following{/other_user}", "gists_url": "https://api.github.com/users/jongjyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/jongjyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jongjyh/subscriptions", "organizations_url": "https://api.github.com/users/jongjyh/orgs", "repos_url": "https://api.github.com/users/jongjyh/repos", "events_url": "https://api.github.com/users/jongjyh/events{/privacy}", "received_events_url": "https://api.github.com/users/jongjyh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@jongjyh @shunxing1234\r\n\r\nIf there is anything unclear, or you need help regarding the commit conflict, don't hesitate!", "@sgugger @ydshieh Thanks! I'll fix it. :)\r\nBy the way, is there only processor need to be redefine? Do we need to check out all classes?", "model, config and processor should be redefined, but not tokenizer, image processor, or feature extractor.\r\n(@sgugger Is this correct?)\r\n\r\nYou can see the file structure in `models/x_clip` to get some idea", "Hi @jongjyh @shunxing1234 \r\n\r\nI had to fix two files that contain `<<<< HEAD` (from previous conflicts when you merged HF's `main` into your `main`).\r\n", "@sgugger \r\n\r\nCurrently, the PR authors don't implement `AltCLIPVisionModel`, as the vision component is just the same as `CLIPVisionModel`. They do implement the necessary modules like `AltCLIPVisionTransformer`, as this is required in `AltCLIPModel`.\r\n\r\nThis seems fair to me.\r\n\r\nHowever,\r\n\r\nIn the model tester file, they do\r\n```\r\nfrom ...models.clip.test_modeling_clip import CLIPVisionModelTester\r\n```\r\nand there is no `AltCLIPVisionModelTest` being implemented (same reason, it's just `CLIPVisionModelTest`).\r\n\r\n~Do you think this is OK - I don't like the dependency though.~\r\n\r\n@jongjyh @shunxing1234 \r\n\r\nIMO, despite the vision model is just `CLIPVisionModel`, I think for the completeness and max independency, it's good to have `AltCLIPVisionModel`. It should be very quick to add I believe. ", "No, the test files of models should be independent from each other, like the modeling files, or it will make our lives harder down the road for maintenance :-)", "Hi @jongjyh @shunxing1234\r\n\r\nPlease go ahead with adding `AltCLIPVisionModel`, removing the usage of `CLIPVisionModelTester`, adding `AltCLIPVisionModelTester`.\r\n\r\nThis could reduce a few more test failures :-)", "Hi, @ydshieh \r\n\r\nthank you for your checking! I added alt_clipvision model right now, please let me know if there are other requests. :)", "_The documentation is not available anymore as the PR was closed or merged._", "Finally the CIs are green! I will take a review in more details!", "Hey, @ydshieh @sgugger \r\n\r\nHere is my new pr. :) Please help to check whether it meets the requirements", "Hi, now we still need to convert the weights, I loaded the previous weight file under the new `modeling_altclip.py`:\r\n\r\n> Some weights of the model checkpoint at BAAI/AltCLIP were not used when initializing AltCLIPModel: ['text_model.roberta.pooler.dense.bias', 'text_model.roberta.pooler.dense.weight']\r\n\r\nIt seems only pooler in Roberta need to be removed. Could I just upload a new `pytorch.bin` with no pooler to replace original one after results of the down-steam task were reproduced using this new modeling file?\r\n\r\nI also notice processor file need to renew. \r\n\r\nThank you for your long-term follow-up and help!", "> Could I just upload a new pytorch.bin with no pooler to replace original one after results of the down-steam task were reproduced using this new modeling file?\r\n\r\nGood for me, as long as the results from the original model/checkpoint & from the added model calss/converted checkpoint matches.\r\n", "@jongjyh \r\n\r\nIt looks like the CI doesn't run anymore, but it indeed ran for some commits you pushed previously. I think you have followed this before, but could you try again 🙏 to trigger the CI. Thanks.\r\n\r\n> It seems there is an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?", "cc @sgugger It's ready for merge IMO (other than doctesting - which the author and me will take care). But before merge, would like to have you final look/approve 🙏 Thanks.", "Hi @ydshieh,\r\nI am worried that there may be many dirty commits in this PR at present. Is there any way to merge these more than 100 commits before it is officially merged into the main branch? :)", "Hi @jongjyh! No worry for the dirty commits, the GitHub page will `Squash and merge`, as shown in the button below :-)", "Hi. the doctest for the config/modeling files pass now. Thank you a lot!\r\nHowever, the doctest for `docs/source/en/model_doc/clip.mdx` has some problem, it won't be run.\r\n\r\nTo check\r\n```bash\r\npython3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules docs/source/en/model_doc/altclip.mdx -sv --doctest-continue-on-failure --doctest-glob=\"*.mdx\"\r\n```\r\n\r\nI checked the file permission \r\n```bash\r\nls -l docs/source/en/model_doc/\r\n```\r\nwhich shows that file `altclip.mdx` is executable file `-rwxr-xr-x`, but other files are not. See\r\n```bash\r\n-rw-r--r-- 1 root root 4795 Dec 21 01:32 albert.mdx\r\n-rwxr-xr-x 1 root root 4829 Dec 23 10:01 altclip.mdx\r\n-rw-r--r-- 1 root root 3491 Dec 21 01:32 audio-spectrogram-transformer.mdx\r\n-rw-r--r-- 1 root root 7523 Dec 23 10:01 auto.mdx\r\n-rw-r--r-- 1 root root 9454 Dec 23 10:01 bart.mdx\r\n```\r\n\r\nCould you fix this permission issue? Potentially replace that file with a newly created file with the content copied.\r\nThanks", "@ydshieh @sgugger Hi, we update the pr, but there are some unexcepted issues(tf error) occured", "Thanks a lot for all your work on this! There just needs to be a rebase on main and we can merge this.", "Thank you again @jongjyh @shunxing1234 🚀 !" ]
1,669
1,672
1,672
CONTRIBUTOR
null
# Adding AltCLIP We add AltCLIP model. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20446/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20446/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20446", "html_url": "https://github.com/huggingface/transformers/pull/20446", "diff_url": "https://github.com/huggingface/transformers/pull/20446.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20446.patch", "merged_at": 1672820338000 }
https://api.github.com/repos/huggingface/transformers/issues/20445
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20445/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20445/comments
https://api.github.com/repos/huggingface/transformers/issues/20445/events
https://github.com/huggingface/transformers/pull/20445
1,464,140,437
PR_kwDOCUB6oc5DsAgc
20,445
with pytorch cpu only version. without --no_cuda, using --bf16 will t…
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
…rigger error like "Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0" Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Library: - trainer: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20445/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20445", "html_url": "https://github.com/huggingface/transformers/pull/20445", "diff_url": "https://github.com/huggingface/transformers/pull/20445.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20445.patch", "merged_at": 1669643769000 }
https://api.github.com/repos/huggingface/transformers/issues/20444
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20444/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20444/comments
https://api.github.com/repos/huggingface/transformers/issues/20444/events
https://github.com/huggingface/transformers/pull/20444
1,464,054,504
PR_kwDOCUB6oc5DrutS
20,444
update cpu related doc
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@jianan-gu @sgugger please review the doc update" ]
1,669
1,669
1,669
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20444/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20444", "html_url": "https://github.com/huggingface/transformers/pull/20444", "diff_url": "https://github.com/huggingface/transformers/pull/20444.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20444.patch", "merged_at": 1669643676000 }
https://api.github.com/repos/huggingface/transformers/issues/20443
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20443/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20443/comments
https://api.github.com/repos/huggingface/transformers/issues/20443/events
https://github.com/huggingface/transformers/pull/20443
1,463,736,182
PR_kwDOCUB6oc5DqsKD
20,443
add timeout option for deepspeed engine
{ "login": "henghuiz", "id": 116645822, "node_id": "U_kgDOBvPfvg", "avatar_url": "https://avatars.githubusercontent.com/u/116645822?v=4", "gravatar_id": "", "url": "https://api.github.com/users/henghuiz", "html_url": "https://github.com/henghuiz", "followers_url": "https://api.github.com/users/henghuiz/followers", "following_url": "https://api.github.com/users/henghuiz/following{/other_user}", "gists_url": "https://api.github.com/users/henghuiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/henghuiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/henghuiz/subscriptions", "organizations_url": "https://api.github.com/users/henghuiz/orgs", "repos_url": "https://api.github.com/users/henghuiz/repos", "events_url": "https://api.github.com/users/henghuiz/events{/privacy}", "received_events_url": "https://api.github.com/users/henghuiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? This PR allows users to set socket timeout for deepspeed engine for multiple instance training. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20443/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20443", "html_url": "https://github.com/huggingface/transformers/pull/20443", "diff_url": "https://github.com/huggingface/transformers/pull/20443.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20443.patch", "merged_at": 1669659806000 }
https://api.github.com/repos/huggingface/transformers/issues/20442
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20442/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20442/comments
https://api.github.com/repos/huggingface/transformers/issues/20442/events
https://github.com/huggingface/transformers/issues/20442
1,463,710,458
I_kwDOCUB6oc5XPnL6
20,442
Error while importing pretrained model
{ "login": "orectique", "id": 49713741, "node_id": "MDQ6VXNlcjQ5NzEzNzQx", "avatar_url": "https://avatars.githubusercontent.com/u/49713741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orectique", "html_url": "https://github.com/orectique", "followers_url": "https://api.github.com/users/orectique/followers", "following_url": "https://api.github.com/users/orectique/following{/other_user}", "gists_url": "https://api.github.com/users/orectique/gists{/gist_id}", "starred_url": "https://api.github.com/users/orectique/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orectique/subscriptions", "organizations_url": "https://api.github.com/users/orectique/orgs", "repos_url": "https://api.github.com/users/orectique/repos", "events_url": "https://api.github.com/users/orectique/events{/privacy}", "received_events_url": "https://api.github.com/users/orectique/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @orectique. Have you tried to do something like the following:\r\nhttps://discuss.pytorch.org/t/nameerror-name-c-is-not-defined-while-importing-torch/124721/2 \r\nor\r\nhttps://github.com/pytorch/pytorch/issues/1633#issuecomment-323435572", "Thank you, @atturaioe. I explored both of those forums and tried the strategies. However, the error seems to have been something else. Anywho, in exasperation, I dumped the entire environment I was using and installed all the packages from scratch. That seems to have done it. " ]
1,669
1,669
1,669
NONE
null
### System Info - `transformers` version: 4.23.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @patil-suraj @patrickvonplaten @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2) ``` The above code, on execution, returns ``` RuntimeError: Failed to import transformers.models.gpt2.modeling_gpt2 because of the following error (look up to see its traceback): name '_C' is not defined ``` This error is not specific to that model or to two labels. I ran the example code snippet given at https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer. The same error is presented, albeit with a different model name. ### Expected behavior From the tutorial, I was expecting a pretrained model object to be initialized, so that I could proceed with fine-tuning it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20442/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20441
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20441/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20441/comments
https://api.github.com/repos/huggingface/transformers/issues/20441/events
https://github.com/huggingface/transformers/pull/20441
1,463,663,752
PR_kwDOCUB6oc5DqcuA
20,441
Add ViViT
{ "login": "jegork", "id": 43540177, "node_id": "MDQ6VXNlcjQzNTQwMTc3", "avatar_url": "https://avatars.githubusercontent.com/u/43540177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jegork", "html_url": "https://github.com/jegork", "followers_url": "https://api.github.com/users/jegork/followers", "following_url": "https://api.github.com/users/jegork/following{/other_user}", "gists_url": "https://api.github.com/users/jegork/gists{/gist_id}", "starred_url": "https://api.github.com/users/jegork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jegork/subscriptions", "organizations_url": "https://api.github.com/users/jegork/orgs", "repos_url": "https://api.github.com/users/jegork/repos", "events_url": "https://api.github.com/users/jegork/events{/privacy}", "received_events_url": "https://api.github.com/users/jegork/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, @jegork thanks for the PR! Having some experience in adding tests and fixing repo-consistency/style, I can help you with these aspects if you need any :+1: Feel free to tag me when needed.", "cc @alaradirik and @NielsRogge ", "Hi @jegork, thanks for working on this! \r\n\r\nI saw that it's not possible to import and test ViViT on your PR branch yet, this is because a lot of files need to be edited to properly import the new modules (ViViTConfig, ViViTModel, etc.). You can refer to this [PR](https://github.com/huggingface/transformers/pull/20459) to see what files need to be edited. \r\n\r\nYou can either make these changes manually or run the `transformers-cli add-new-model` command, which automatically takes care of a lot of these changes and initializes the model-specific files (modelling_vivit.py, etc.). You can learn more about this over [here.](https://github.com/huggingface/transformers/blob/main/docs/source/en/add_new_model.mdx)\r\n\r\nOnce you are done, you can run the `make fixup` command to make sure your code passes the [style, quality and repo consistency CI tests](https://huggingface.co/docs/transformers/contributing).\r\n\r\ncc @sgugger @NielsRogge ", "Hey @alaradirik, thanks for your reply and guidance! \r\n\r\nI will address what you suggested and add tests by the end of the week.\r\n", "@alaradirik I've fixed the structuring via the suggested `transformers-cli add-new-model` and have run `make fixup`. \r\n\r\nAll the relevant imports seem to work now (via `from transformers import ViViTModel, ViViTConfig, ViViTImageProcessor, ViViTFeatureExtractor, ViViTLayer, ViViTPreTrainedModel, ViViTForVideoClassification`)\r\n\r\nThanks again! Will add tests next.", "@jegork really cool work, as a next step could you try to make the CI as green as possible? Currently there are many failing checks (10 failing and 9 successful). You can click on \"details\" -> \"artifacts\" -> \"failures long\" to see why exactly a check has failed.\r\n\r\nYou will also need to rebase on the main branch due to some issues with TF versions which are fixed on the upstream branch:\r\n```\r\ngit remote add upstream https://github.com/huggingface/transformers.git\r\ngit fetch upstream\r\ngit rebase upstream/main\r\n```\r\n\r\n", "@NielsRogge thanks for your reply! I was indeed looking at the CI and not picking up the issue because most of them display `No module named 'keras.saving.hdf5_format'` as the error, but it seems that is exactly what you mentioned - the recent update to the TF version in the main branch. I will look into that today!", "Also please use `Vivit` to prefix all the classes (instead of `ViViT`), so `VivitConfig`, `VivitModel` etc (liek `BertConfig`, `BertModel` etc.). Users have complained a lot about the casing we use for our models as it makes them harder to find in the lib :-)", "@NielsRogge apparently I have run what you provided incorrectly, sorry for that! ", "Okay np, fixed 👍", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20441). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", " @NielsRogge would be great if you could take a look at the changes I made. The CI also seems to be all good. \r\nThanks!", "hey @NielsRogge, thank you for your comments!\r\n\r\nI have a question regarding the conversion script. I am using `restore_checkpoint(flax_model_path, None)` from `flax.training.checkpoints`, however, using the current jax version 0.3.6 (it was installed on my machine automatically when installing dev transformers version as per documentation). Upgrading the version of flax to 0.3.25 fixes the issue (haven't tested lower versions). Is there any other way I could load a flax checkpoint? \r\n", "@NielsRogge i think I've addressed all of your points, though I have doubt whether I implemented correctly the transformations check in the conversion script (I couldn't come up with any better approach as the preprocessing code in the original implementation is kinda scattered among multiple files, but I have tried to leave as much as possible of notes there). Would be great if you could check that and, if applicable, suggest any improvements.\r\n\r\nThanks again!", "seems like the test job fails with\r\n```\r\nFAILED tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTest::test_save_load_fast_init_to_base - AssertionError: 0.08835485577583313 not less than or equal to 0.001 : vision_model.visual.transformer.resblocks.11.attn.in_proj_weight not identical\r\n```\r\n\r\nafter i rebased from the main branch. Seems like has nothing to do with this PR. Was it already fixed and shall I rebase again?\r\n\r\nHi @jegork! You're right, the failing test is unrelated to the PR. Could you wait for a day and rebase again?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Sorry for the late reply here, I've assigned @amyeroberts to review the PR.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Fixes #15666 Add Video Vision Transformer to transformers. This PR implements a spacetime version of the Video Vision Transformer from the original paper. I have provided the model weights here https://huggingface.co/jegormeister/vivit-b-16x2-kinetics400 I will try to add Factorised Encoder version later on (these are the two versions that authors provide weight for). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/15666 - [x] Did you make sure to update the documentation with your changes? I have added the documentation, but I have troubles testing it as I couldn't run the preview command of the doc-builder, so if someone has the possibility to run and check it, I will be really grateful! - [x] Did you write any new necessary tests? WIP ## Who can review? @LysandreJik answered to the original issue so would be great if you could assist with the PR or suggesting who could. Thanks! <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20441/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20441", "html_url": "https://github.com/huggingface/transformers/pull/20441", "diff_url": "https://github.com/huggingface/transformers/pull/20441.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20441.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20440
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20440/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20440/comments
https://api.github.com/repos/huggingface/transformers/issues/20440/events
https://github.com/huggingface/transformers/issues/20440
1,463,656,580
I_kwDOCUB6oc5XPaCE
20,440
Adding a repr to pipelines
{ "login": "payoto", "id": 18074599, "node_id": "MDQ6VXNlcjE4MDc0NTk5", "avatar_url": "https://avatars.githubusercontent.com/u/18074599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/payoto", "html_url": "https://github.com/payoto", "followers_url": "https://api.github.com/users/payoto/followers", "following_url": "https://api.github.com/users/payoto/following{/other_user}", "gists_url": "https://api.github.com/users/payoto/gists{/gist_id}", "starred_url": "https://api.github.com/users/payoto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/payoto/subscriptions", "organizations_url": "https://api.github.com/users/payoto/orgs", "repos_url": "https://api.github.com/users/payoto/repos", "events_url": "https://api.github.com/users/payoto/events{/privacy}", "received_events_url": "https://api.github.com/users/payoto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "@payoto ,\r\n\r\nThis seems like a good idea ! \r\n\r\nI'm slightly worried about the size of the `repr` though. It's already really large with your example, and you are missing the extra `_{preprocess,forward,postprocess}_params`. (Which are important imo)\r\n\r\nAs a start, I would use exactly what you have done, but only without the config. \r\nFor the `model` I would actually put it, but maybe only the class names bcause `repr(model)` is also quite verbose.", "The reason I added the `model.config` entry was that it showed me information about the expected output of the pipeline, one of the things I wanted to find out was \"what are all the possible classes returned by this pipeline\"? Where would you recommend I get that information?\r\n\r\nI'll open a PR with what you've suggested, and we can iterate from there.", "> what are all the possible classes returned by this pipeline\"?\r\n\r\nYou mean `config.id2label` ? I sympathize with the goal, but `repr` is meant to be seen by devs easily during iteration, so making them short is important IMO. But we can start with your version if you prefer and we'll adapt based on community feedback.\r\nYour proposal is already much better than the current state !", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### Feature request Would you be interested in adding a `__repr__` to the `Pipeline` class? ### Motivation This could be used to display useful information after instantiation, particularly in interactive environments like Jupyter. A useful representation would be very useful to display what defaults were loaded by a library or model. ### Your contribution I've been testing in my local install and I can submit a PR with the following: ```python class Pipeline(_ScikitCompat): ... def __repr__(self) -> str: string_out = ( f"{type(self).__name__}(\n" f" task={self.task},\n" f" modelcard={self.modelcard},\n" f" feature_extractor={self.feature_extractor},\n" f" framework={self.framework},\n" f" device={self.device},\n" f" call_count={self.call_count},\n" f" tokenizer={self.tokenizer},\n" f" model.config={self.model.config},\n" ")" ) return string_out ``` Which has output: ```python TextClassificationPipeline( task=text-classification, modelcard=None, feature_extractor=None, framework=pt, device=cpu, call_count=3, tokenizer=PreTrainedTokenizerFast(name_or_path='nlptown/bert-base-multilingual-uncased-sentiment', vocab_size=105879, model_max_len=512, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}), model.config=BertConfig { "_name_or_path": "nlptown/bert-base-multilingual-uncased-sentiment", "_num_labels": 5, "architectures": [ "BertForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "directionality": "bidi", "finetuning_task": "sentiment-analysis", "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "1 star", "1": "2 stars", "2": "3 stars", "3": "4 stars", "4": "5 stars" }, "initializer_range": 0.02, "intermediate_size": 3072, "label2id": { "1 star": 0, "2 stars": 1, "3 stars": 2, "4 stars": 3, "5 stars": 4 }, "layer_norm_eps": 1e-12, 
"max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "output_past": true, "pad_token_id": 0, "pooler_fc_size": 768, "pooler_num_attention_heads": 12, "pooler_num_fc_layers": 3, "pooler_size_per_head": 128, "pooler_type": "first_token_transform", "position_embedding_type": "absolute", "transformers_version": "4.20.1", "type_vocab_size": 2, "use_cache": true, "vocab_size": 105879 } , ) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20440/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20439
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20439/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20439/comments
https://api.github.com/repos/huggingface/transformers/issues/20439/events
https://github.com/huggingface/transformers/pull/20439
1,463,650,404
PR_kwDOCUB6oc5DqZ14
20,439
Include image processor in add-new-model-like
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh The test requires all three frameworks (PyTorch, TensorFlow and Flax) so it is never run in the CI.", "OK, I completely misunderstand `Add model like runner / Add new model like template tests (pull_request)`.\r\nIgnore my previous comment 🙏 ." ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? * Adds logic to `add-new-model-like` CLI to include image processors * Updates `tests/utils/add_new_model_like.py` so tests run (was outdated) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20439/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20439", "html_url": "https://github.com/huggingface/transformers/pull/20439", "diff_url": "https://github.com/huggingface/transformers/pull/20439.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20439.patch", "merged_at": 1669653962000 }
https://api.github.com/repos/huggingface/transformers/issues/20438
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20438/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20438/comments
https://api.github.com/repos/huggingface/transformers/issues/20438/events
https://github.com/huggingface/transformers/issues/20438
1,463,491,307
I_kwDOCUB6oc5XOxrr
20,438
transformers + deepspeed hangs when training on multiple GPUs
{ "login": "mmarius", "id": 24918508, "node_id": "MDQ6VXNlcjI0OTE4NTA4", "avatar_url": "https://avatars.githubusercontent.com/u/24918508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmarius", "html_url": "https://github.com/mmarius", "followers_url": "https://api.github.com/users/mmarius/followers", "following_url": "https://api.github.com/users/mmarius/following{/other_user}", "gists_url": "https://api.github.com/users/mmarius/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmarius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmarius/subscriptions", "organizations_url": "https://api.github.com/users/mmarius/orgs", "repos_url": "https://api.github.com/users/mmarius/repos", "events_url": "https://api.github.com/users/mmarius/events{/privacy}", "received_events_url": "https://api.github.com/users/mmarius/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is the output produced by the minimal example. It keeps running forever and does not produce any new output.\r\n\r\n\r\n\r\n Detected CUDA_VISIBLE_DEVICES=GPU-460af155,GPU-457e4df4,GPU-08f1eba5,GPU-4793f3fd,GPU-cbc5b6ef,GPU-aa661638,GPU-a39d482a,GPU-dc0ceb93 but ignoring it because one or several of --include/--exclude/--num_gpus/--num_nodes cl args were used. If you want to use CUDA_VISIBLE_DEVICES don't pass any of these arguments to deepspeed.\r\n [2022-11-24 14:38:15,640] [INFO] [runner.py:508:main] cmd = /home/mmosbach/miniconda3/envs/llmft/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=60000 /home/mmosbach/projects/llmft/debug.py --output_dir /home/mmosbach/logs/llmft/logfiles --deepspeed /home/mmosbach/projects/llmft/deepspeed_configs/ds_config_zero3.json\r\n [2022-11-24 14:38:18,207] [INFO] [launch.py:135:main] 0 NCCL_VERSION=2.12.10+cuda11.6\r\n [2022-11-24 14:38:18,207] [INFO] [launch.py:135:main] 0 NCCL_DEBUG_SUBSYS=ALL\r\n [2022-11-24 14:38:18,207] [INFO] [launch.py:135:main] 0 NCCL_DEBUG=INFO\r\n [2022-11-24 14:38:18,207] [INFO] [launch.py:142:main] WORLD INFO DICT: {'localhost': [0, 1]}\r\n [2022-11-24 14:38:18,207] [INFO] [launch.py:148:main] nnodes=1, num_local_procs=2, node_rank=0\r\n [2022-11-24 14:38:18,207] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})\r\n [2022-11-24 14:38:18,208] [INFO] [launch.py:162:main] dist_world_size=2\r\n [2022-11-24 14:38:18,208] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0,1\r\n [2022-11-24 14:38:24,319] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl\r\n mmosbach-20307:535:535 [0] NCCL INFO Bootstrap : Using eth0:172.17.0.2<0>\r\n mmosbach-20307:535:535 [0] NCCL INFO NET/Plugin: Failed to find ncclNetPlugin_v6 symbol.\r\n mmosbach-20307:535:535 [0] NCCL INFO NET/Plugin: Loaded net plugin NCCL RDMA Plugin (v4)\r\n 
mmosbach-20307:535:535 [0] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v6 symbol.\r\n mmosbach-20307:535:535 [0] NCCL INFO NET/Plugin: Loaded coll plugin SHARP (v4)\r\n mmosbach-20307:535:535 [0] NCCL INFO cudaDriverVersion 11070\r\n NCCL version 2.14.3+cuda11.7\r\n mmosbach-20307:535:535 [0] NCCL INFO init.cc:1147 Cuda Host Alloc Size 4 pointer 0x7f18dc200000\r\n mmosbach-20307:536:536 [1] NCCL INFO cudaDriverVersion 11070\r\n mmosbach-20307:535:717 [0] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so\r\n mmosbach-20307:535:717 [0] NCCL INFO P2P plugin IBext\r\n mmosbach-20307:535:717 [0] NCCL INFO NET/IB : No device found.\r\n mmosbach-20307:535:717 [0] NCCL INFO NET/IB : No device found.\r\n mmosbach-20307:535:717 [0] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.2<0>\r\n mmosbach-20307:535:717 [0] NCCL INFO Using network Socket\r\n mmosbach-20307:536:536 [1] NCCL INFO Bootstrap : Using eth0:172.17.0.2<0>\r\n mmosbach-20307:536:536 [1] NCCL INFO NET/Plugin: Failed to find ncclNetPlugin_v6 symbol.\r\n mmosbach-20307:536:536 [1] NCCL INFO NET/Plugin: Loaded net plugin NCCL RDMA Plugin (v4)\r\n mmosbach-20307:536:536 [1] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v6 symbol.\r\n mmosbach-20307:536:536 [1] NCCL INFO NET/Plugin: Loaded coll plugin SHARP (v4)\r\n mmosbach-20307:536:536 [1] NCCL INFO init.cc:1147 Cuda Host Alloc Size 4 pointer 0x7feb60200000\r\n mmosbach-20307:536:718 [1] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so\r\n mmosbach-20307:536:718 [1] NCCL INFO P2P plugin IBext\r\n mmosbach-20307:536:718 [1] NCCL INFO NET/IB : No device found.\r\n mmosbach-20307:536:718 [1] NCCL INFO NET/IB : No device found.\r\n mmosbach-20307:536:718 [1] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.2<0>\r\n mmosbach-20307:536:718 [1] NCCL INFO Using network Socket\r\n mmosbach-20307:536:718 [1] NCCL INFO NET/Socket : GPU Direct RDMA Disabled for HCA 0 'eth0'\r\n mmosbach-20307:535:717 [0] 
NCCL INFO NET/Socket : GPU Direct RDMA Disabled for HCA 0 'eth0'\r\n mmosbach-20307:536:718 [1] NCCL INFO transport/p2p.cc:151 Cuda Alloc Size 2097152 pointer 0x7feb60c00000\r\n mmosbach-20307:536:718 [1] NCCL INFO === System : maxBw 24.0 totalBw 24.0 ===\r\n mmosbach-20307:535:717 [0] NCCL INFO transport/p2p.cc:151 Cuda Alloc Size 2097152 pointer 0x7f18dcc00000\r\n mmosbach-20307:536:718 [1] NCCL INFO CPU/0 (1/2/-1)\r\n mmosbach-20307:536:718 [1] NCCL INFO + PCI[5000.0] - NIC/0\r\n mmosbach-20307:536:718 [1] NCCL INFO + PCI[24.0] - GPU/1000 (0)\r\n mmosbach-20307:536:718 [1] NCCL INFO + PCI[24.0] - GPU/25000 (1)\r\n mmosbach-20307:536:718 [1] NCCL INFO ==========================================\r\n mmosbach-20307:536:718 [1] NCCL INFO GPU/1000 :GPU/1000 (0/5000.000000/LOC) GPU/25000 (2/24.000000/PHB) CPU/0 (1/24.000000/PHB) \r\n mmosbach-20307:536:718 [1] NCCL INFO GPU/25000 :GPU/1000 (2/24.000000/PHB) GPU/25000 (0/5000.000000/LOC) CPU/0 (1/24.000000/PHB) \r\n mmosbach-20307:536:718 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff,ffffffff,00000000,00000000,ffffffff,ffffffff\r\n mmosbach-20307:535:717 [0] NCCL INFO === System : maxBw 24.0 totalBw 24.0 ===\r\n mmosbach-20307:535:717 [0] NCCL INFO CPU/0 (1/2/-1)\r\n mmosbach-20307:535:717 [0] NCCL INFO + PCI[5000.0] - NIC/0\r\n mmosbach-20307:535:717 [0] NCCL INFO + PCI[24.0] - GPU/1000 (0)\r\n mmosbach-20307:535:717 [0] NCCL INFO + PCI[24.0] - GPU/25000 (1)\r\n mmosbach-20307:535:717 [0] NCCL INFO ==========================================\r\n mmosbach-20307:535:717 [0] NCCL INFO GPU/1000 :GPU/1000 (0/5000.000000/LOC) GPU/25000 (2/24.000000/PHB) CPU/0 (1/24.000000/PHB) \r\n mmosbach-20307:535:717 [0] NCCL INFO GPU/25000 :GPU/1000 (2/24.000000/PHB) GPU/25000 (0/5000.000000/LOC) CPU/0 (1/24.000000/PHB) \r\n mmosbach-20307:535:717 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff,ffffffff,00000000,00000000,ffffffff,ffffffff\r\n mmosbach-20307:536:718 [1] NCCL INFO Pattern 4, crossNic 0, nChannels 2, bw 
12.000000/12.000000, type PHB/PIX, sameChannels 1\r\n mmosbach-20307:536:718 [1] NCCL INFO 0 : GPU/0 GPU/1\r\n mmosbach-20307:536:718 [1] NCCL INFO 1 : GPU/0 GPU/1\r\n mmosbach-20307:536:718 [1] NCCL INFO Pattern 1, crossNic 0, nChannels 2, bw 22.000000/22.000000, type PHB/PIX, sameChannels 0\r\n mmosbach-20307:536:718 [1] NCCL INFO 0 : GPU/0 GPU/1\r\n mmosbach-20307:536:718 [1] NCCL INFO 1 : GPU/1 GPU/0\r\n mmosbach-20307:536:718 [1] NCCL INFO Pattern 3, crossNic 0, nChannels 2, bw 22.000000/22.000000, type PHB/PIX, sameChannels 0\r\n mmosbach-20307:536:718 [1] NCCL INFO 0 : GPU/0 GPU/1\r\n mmosbach-20307:536:718 [1] NCCL INFO 1 : GPU/1 GPU/0\r\n mmosbach-20307:535:717 [0] NCCL INFO Pattern 4, crossNic 0, nChannels 2, bw 12.000000/12.000000, type PHB/PIX, sameChannels 1\r\n mmosbach-20307:535:717 [0] NCCL INFO 0 : GPU/0 GPU/1\r\n mmosbach-20307:535:717 [0] NCCL INFO 1 : GPU/0 GPU/1\r\n mmosbach-20307:535:717 [0] NCCL INFO Pattern 1, crossNic 0, nChannels 2, bw 22.000000/22.000000, type PHB/PIX, sameChannels 0\r\n mmosbach-20307:535:717 [0] NCCL INFO 0 : GPU/0 GPU/1\r\n mmosbach-20307:535:717 [0] NCCL INFO 1 : GPU/1 GPU/0\r\n mmosbach-20307:535:717 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 2, bw 22.000000/22.000000, type PHB/PIX, sameChannels 0\r\n mmosbach-20307:535:717 [0] NCCL INFO 0 : GPU/0 GPU/1\r\n mmosbach-20307:535:717 [0] NCCL INFO 1 : GPU/1 GPU/0\r\n mmosbach-20307:536:718 [1] NCCL INFO Tree 0 : 0 -> 1 -> -1/-1/-1\r\n mmosbach-20307:536:718 [1] NCCL INFO Tree 2 : 0 -> 1 -> -1/-1/-1\r\n mmosbach-20307:536:718 [1] NCCL INFO Tree 1 : -1 -> 1 -> 0/-1/-1\r\n mmosbach-20307:536:718 [1] NCCL INFO Tree 3 : -1 -> 1 -> 0/-1/-1\r\n mmosbach-20307:535:717 [0] NCCL INFO Tree 0 : -1 -> 0 -> 1/-1/-1\r\n mmosbach-20307:535:717 [0] NCCL INFO Tree 2 : -1 -> 0 -> 1/-1/-1\r\n mmosbach-20307:536:718 [1] NCCL INFO Ring 00 : 0 -> 1 -> 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Tree 1 : 1 -> 0 -> -1/-1/-1\r\n mmosbach-20307:536:718 [1] NCCL INFO Ring 01 : 0 -> 1 -> 0\r\n 
mmosbach-20307:535:717 [0] NCCL INFO Tree 3 : 1 -> 0 -> -1/-1/-1\r\n mmosbach-20307:536:718 [1] NCCL INFO Ring 02 : 0 -> 1 -> 0\r\n mmosbach-20307:536:718 [1] NCCL INFO Ring 03 : 0 -> 1 -> 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Channel 00/04 : 0 1\r\n mmosbach-20307:536:718 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1\r\n mmosbach-20307:535:717 [0] NCCL INFO Channel 01/04 : 0 1\r\n mmosbach-20307:535:717 [0] NCCL INFO Channel 02/04 : 0 1\r\n mmosbach-20307:536:718 [1] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)\r\n mmosbach-20307:535:717 [0] NCCL INFO Channel 03/04 : 0 1\r\n mmosbach-20307:535:717 [0] NCCL INFO Ring 00 : 1 -> 0 -> 1\r\n mmosbach-20307:535:717 [0] NCCL INFO Ring 01 : 1 -> 0 -> 1\r\n mmosbach-20307:535:717 [0] NCCL INFO Ring 02 : 1 -> 0 -> 1\r\n mmosbach-20307:535:717 [0] NCCL INFO Ring 03 : 1 -> 0 -> 1\r\n mmosbach-20307:535:717 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1\r\n mmosbach-20307:535:717 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)\r\n mmosbach-20307:536:718 [1] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7feb60c00000\r\n mmosbach-20307:536:718 [1] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7feb60c00600\r\n mmosbach-20307:536:718 [1] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7feb60c00800\r\n mmosbach-20307:536:718 [1] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7feb60c00e00\r\n mmosbach-20307:535:717 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f18dcc00000\r\n mmosbach-20307:536:718 [1] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7feb60c01000\r\n mmosbach-20307:536:718 [1] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7feb60c01600\r\n mmosbach-20307:535:717 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f18dcc00600\r\n mmosbach-20307:536:718 [1] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7feb60c01800\r\n 
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f18dcc00800\r\n mmosbach-20307:536:718 [1] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7feb60c01e00\r\n mmosbach-20307:535:717 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f18dcc00e00\r\n mmosbach-20307:535:717 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f18dcc01000\r\n mmosbach-20307:535:717 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f18dcc01600\r\n mmosbach-20307:535:717 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f18dcc01800\r\n mmosbach-20307:535:717 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f18dcc01e00\r\n mmosbach-20307:536:719 [1] NCCL INFO Mem Realloc old size 0, new size 8 pointer 0x7feb48002c70\r\n mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002e10\r\n mmosbach-20307:535:720 [0] NCCL INFO Mem Realloc old size 0, new size 8 pointer 0x7f18d0000b60\r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy recv connection 0 from local rank 1, transport 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002ea0\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy recv connection 0 from local rank 0, transport 0\r\n mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7feb60e00000\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7f18dce00000\r\n mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002e50\r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy recv connection 1 from local rank 1, transport 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002ee0\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy recv connection 1 from local rank 0, transport 0\r\n mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 
0x7feb58000000\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7f18d4000000\r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy recv connection 2 from local rank 1, transport 0\r\n mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002e90\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy recv connection 2 from local rank 0, transport 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002f20\r\n mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7feb58a00000\r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy recv connection 3 from local rank 1, transport 0\r\n mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002ed0\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7f18d4a00000\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy recv connection 3 from local rank 0, transport 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002f60\r\n mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7feb59400000\r\n mmosbach-20307:536:718 [1] NCCL INFO Channel 00/0 : 1[25000] -> 0[1000] via P2P/IPC\r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 4 from local rank 1, transport 0\r\n mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002f10\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7f18d5400000\r\n mmosbach-20307:535:717 [0] NCCL INFO Channel 00/0 : 0[1000] -> 1[25000] via P2P/IPC\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 4 from local rank 0, transport 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002fa0\r\n mmosbach-20307:536:719 [1] NCCL INFO 
transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7feb59e00000\r\n mmosbach-20307:536:718 [1] NCCL INFO Channel 01/0 : 1[25000] -> 0[1000] via P2P/IPC\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7f18d5e00000\r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 5 from local rank 1, transport 0\r\n mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002f50\r\n mmosbach-20307:535:717 [0] NCCL INFO Channel 01/0 : 0[1000] -> 1[25000] via P2P/IPC\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 5 from local rank 0, transport 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002fe0\r\n mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7feb61800000\r\n mmosbach-20307:536:718 [1] NCCL INFO Channel 02/0 : 1[25000] -> 0[1000] via P2P/IPC\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7f18dd800000\r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 6 from local rank 1, transport 0\r\n mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002f90\r\n mmosbach-20307:535:717 [0] NCCL INFO Channel 02/0 : 0[1000] -> 1[25000] via P2P/IPC\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 6 from local rank 0, transport 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0003020\r\n mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7feb61a00000\r\n mmosbach-20307:536:718 [1] NCCL INFO Channel 03/0 : 1[25000] -> 0[1000] via P2P/IPC\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7f18dda00000\r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 7 from local rank 1, transport 0\r\n mmosbach-20307:536:718 [1] NCCL 
INFO Connection to proxy localRank 1 -> connection 0x7feb48002fd0\r\n mmosbach-20307:535:717 [0] NCCL INFO Channel 03/0 : 0[1000] -> 1[25000] via P2P/IPC\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 7 from local rank 0, transport 0\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0003060\r\n mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7feb61c00000\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7f18ddc00000\r\n mmosbach-20307:536:718 [1] NCCL INFO Connected all rings\r\n mmosbach-20307:536:718 [1] NCCL INFO Connected all trees\r\n mmosbach-20307:536:718 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512\r\n mmosbach-20307:536:718 [1] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer\r\n mmosbach-20307:535:717 [0] NCCL INFO Connected all rings\r\n mmosbach-20307:535:717 [0] NCCL INFO Connected all trees\r\n mmosbach-20307:536:719 [1] NCCL INFO Allocated 4194656 bytes of shared memory in /dev/shm/nccl-JKUXpI\r\n \r\n mmosbach-20307:535:717 [0] NCCL INFO Latency/AlgBw | Tree/ LL | Tree/ LL128 | Tree/Simple | Ring/ LL | Ring/ LL128 | Ring/Simple | CollNetDirect/ LL | CollNetDirect/ LL128 | CollNetDirect/Simple | CollNetChain/ LL | CollNetChain/ LL128 | CollNetChain/Simple |\r\n mmosbach-20307:535:717 [0] NCCL INFO Max NThreads | 512 | 640 | 512 | 512 | 640 | 512 | 0 | 0 | 512 | 0 | 0 | 512 |\r\n mmosbach-20307:535:717 [0] NCCL INFO Broadcast | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 4.6/ 8.0 | 12.5/ 0.0 | 14.1/ 24.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |\r\n mmosbach-20307:535:717 [0] NCCL INFO Reduce | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 4.6/ 6.0 | 12.5/ 0.0 | 14.1/ 24.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |\r\n mmosbach-20307:535:717 [0] NCCL INFO AllGather | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 4.6/ 16.0 | 12.5/ 0.0 | 14.1/ 48.0 | 0.0/ 
0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |\r\n mmosbach-20307:535:717 [0] NCCL INFO ReduceScatter | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 4.6/ 16.0 | 12.5/ 0.0 | 14.1/ 48.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |\r\n mmosbach-20307:535:717 [0] NCCL INFO AllReduce | 6.4/ 5.3 | 8.2/ 0.0 | 56.0/ 20.2 | 5.6/ 6.0 | 15.0/ 0.0 | 19.8/ 24.0 | 5.4/ 0.0 | 5.4/ 0.0 | 27.7/ 0.0 | 4.4/ 0.0 | 4.4/ 0.0 | 16.0/ 0.0 |\r\n mmosbach-20307:535:717 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512\r\n mmosbach-20307:535:717 [0] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer\r\n mmosbach-20307:535:720 [0] NCCL INFO Allocated 4194656 bytes of shared memory in /dev/shm/nccl-ihjCjE\r\n \r\n mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 8 from local rank 1, transport 2\r\n mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48003010\r\n mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 8 from local rank 0, transport 2\r\n mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d00030a0\r\n mmosbach-20307:536:719 [1] NCCL INFO transport/net.cc:376 Cuda Alloc Size 8388608 pointer 0x7feb47200000\r\n mmosbach-20307:536:718 [1] NCCL INFO init.cc:367 Cuda Alloc Size 5168 pointer 0x7feb60c02000\r\n mmosbach-20307:535:717 [0] NCCL INFO init.cc:367 Cuda Alloc Size 5168 pointer 0x7f18dcc02000\r\n mmosbach-20307:535:720 [0] NCCL INFO transport/net.cc:376 Cuda Alloc Size 8388608 pointer 0x7f18c3200000\r\n mmosbach-20307:535:717 [0] NCCL INFO init.cc:392 Cuda Host Alloc Size 33554432 pointer 0x7f18b6000000\r\n mmosbach-20307:535:717 [0] NCCL INFO init.cc:398 Cuda Host Alloc Size 128 pointer 0x7f18dc200200\r\n mmosbach-20307:535:717 [0] NCCL INFO comm 0x447a30b0 rank 0 nranks 2 cudaDev 0 busId 1000 - Init COMPLETE\r\n mmosbach-20307:535:535 [0] NCCL INFO Broadcast: opCount 0 sendbuff 0x7f190a000000 recvbuff 0x7f190a000000 count 
411828224 datatype 0 op 0 root 0 comm 0x447a30b0 [nranks=2] stream 0x447a2580\r\n mmosbach-20307:536:718 [1] NCCL INFO init.cc:392 Cuda Host Alloc Size 33554432 pointer 0x7feb3a000000\r\n mmosbach-20307:535:535 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)\r\n mmosbach-20307:536:718 [1] NCCL INFO init.cc:398 Cuda Host Alloc Size 128 pointer 0x7feb60200200\r\n mmosbach-20307:536:718 [1] NCCL INFO comm 0x43bb7070 rank 1 nranks 2 cudaDev 1 busId 25000 - Init COMPLETE\r\n mmosbach-20307:536:536 [1] NCCL INFO Broadcast: opCount 0 sendbuff 0x7feb8a000000 recvbuff 0x7feb8a000000 count 411828224 datatype 0 op 0 root 0 comm 0x43bb7070 [nranks=2] stream 0x43bb63e0\r\n mmosbach-20307:536:536 [1] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)\r\n", "Thank you for an excellent report, @mmarius \r\n\r\nThis is almost certain an issue that you'd need to report to the Deepspeed, since the hanging isn't related to HF integration. The only hanging that could happen in the integration is in `generate` if one doesn't set the gpu sync flag on. but I don't see you using it. The rest is core deepspeed.\r\n\r\nBut here are some suggestions based on my experience that might help:\r\n\r\n1. This could be a hardware issue. Can you try the same code on a different server of the same setup?\r\n\r\n2. Sometimes these help (try one at a time and see if the hanging goes away.\r\n```\r\n# do not remove or the training will hang and nodes will be lost w/o this workaround\r\nexport CUDA_LAUNCH_BLOCKING=1\r\n\r\n# force crashing on nccl issues like hanging broadcast\r\nexport NCCL_ASYNC_ERROR_HANDLING=1\r\n```\r\nI see you have already tried the first one, I suppose it didn't help. it solved one huge hanging for BLOOM training.\r\n\r\n3. 
if none of the above helps, time to get your hands dirty and run `py-spy` and see where it hangs.\r\n\r\nYou can of course run it on the process directly, as you only have 2.\r\n\r\nBut also you may want to read some multi-gpu `py-spy` recipes in:\r\n\r\n- https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles-prequel.md\r\n- https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md\r\n\r\nand in general you might find some helpful notes in there. We had several hanging issues before we managed to get BLOOM-176B training on 384 A100s, albeit it was using Megatron-Deepspeed, which wasn't using ZeRO3, but sort of ZeRO-1 but customized to bf16, but the code is relatively similar and there is a lot of overlap with zero3.\r\n\r\nwhen you report to Deepspeed they will definitely ask you for an output of `py-spy` \r\n\r\np.s. `pip install py-spy; py-spy dump --pid PID`", "Thanks for your detailed reply, @stas00 \r\n\r\nI tried using \r\n\r\n # force crashing on nccl issues like hanging broadcast\r\n export NCCL_ASYNC_ERROR_HANDLING=1\r\n\r\nbut it didn't help. \r\n\r\nBefore getting into `py-spy` , I ran the this script (https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#gpu-to-gpu-communication) to see if the GPU-to-GPU communication is working correctly on the server I am using and it seems that there are indeed some problems there. 
The latency is way too large.\r\n\r\n P2P=Enabled Latency (P2P Writes) Matrix (us)\r\n GPU 0 1 2 3 4 5 6 7 \r\n 0 4.95 49206.88 49206.64 49206.69 49206.75 49206.68 49206.72 49206.72 \r\n 1 49206.62 2.08 49206.51 49206.52 49206.42 49206.42 49206.39 49206.43 \r\n 2 49206.70 49206.45 2.21 49206.45 49206.47 49206.56 49206.43 49206.49 \r\n 3 49206.73 49206.53 49206.55 2.21 49206.59 49206.55 49206.55 49206.52 \r\n 4 49206.77 49206.59 49206.57 49206.61 2.11 49206.60 49206.66 49206.60 \r\n 5 49206.66 49206.47 49206.51 49206.49 49206.51 2.11 49206.46 49206.45 \r\n 6 49206.82 49206.57 49206.61 49206.58 49206.62 49206.59 2.08 49206.60 \r\n 7 49206.67 49206.51 49206.49 49206.46 49206.46 49206.47 49206.50 2.11 \r\n\r\nI will get back with more information once we resolved the problem. \r\n\r\nFeel free to close the issue as it's definitely not a transformers problem. ", "oh, so the hardware issue! Thank you for the update\r\n\r\nAlso you can try this diagnostics script:\r\nhttps://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-gpu-test.py", "I ran your diagnostic script and as with my minimal example above it simply runs forever ... ", "yeah, so it's almost certain a hardware issue then.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Did you solved it? I meet the same problem @mmarius ", "@hahchenchen, hanging is a symptom, the problem leading to it can be completely different. Please see https://github.com/huggingface/transformers/issues/22142 which will tell you the cause.", "@hahchenchen we fixed it by setting the `iommu` kernel parameter as follows: `iommu=soft`. In case your server has AMD CPUs this parameter has a different default value. 
\r\n\r\nYou can set this parameter in this file: `/etc/default/grub`. In our case it looks like this\r\n\r\n`GRUB_CMDLINE_LINUX_DEFAULT=\"iommu=soft\"`" ]
1,669
1,681
1,672
NONE
null
### System Info My code runs inside an NVIDIA docker container `nvcr.io/nvidia/pytorch:22.05-py3`. The installed dependencies are listed here: https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html#framework-matrix-2022 I'm using the following versions for transformers and deepspeed: - transformers==4.24.0 - deepspeed==0.7.5 ### Who can help? @stas00 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I want to train a model on multiple GPUs. The server I'm using has 8x A100 GPUs with 40GB each. I'm using deepspeed zero3 to partition the model across GPUs. Unfortunately, the code "hangs" mid execution and runs forever. I can run the same code successfully on a different server with V100 GPUs. So I am assuming the issue might be related to the communcation between the GPUs? Not sure. Below are the files I am using. I have also attached to output of the script below. Thanks for your help! 
Deepspeed config file: { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto", "warmup_type": "linear" } }, "zero_optimization": { "stage": 3, "stage3_gather_16bit_weights_on_model_save": true, "reduce_scatter": true, "overlap_comm": true, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 100, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } Minimal python example: import os from transformers import AutoConfig, AutoModelForSequenceClassification, TrainingArguments, HfArgumentParser, Trainer def main(): parser = HfArgumentParser(TrainingArguments) training_args = parser.parse_args_into_dataclasses()[0] config = AutoConfig.from_pretrained( "facebook/opt-1.3b", cache_dir=os.getenv("HF_MODELS_CACHE"), ) model = AutoModelForSequenceClassification.from_pretrained( "facebook/opt-1.3b", from_tf=False, config=config, cache_dir=os.getenv("HF_MODELS_CACHE"), ) trainer = Trainer( model=model, args=training_args, ) if __name__ == "__main__": main() bash script to start the python script: export NCCL_DEBUG=INFO export NCCL_DEBUG_SUBSYS=ALL export CUDA_LAUNCH_BLOCKING=1 export HF_MODELS_CACHE=/cache-dir OUTPUT_DIR=/output-dir deepspeed \ --num_gpus 2 \ --master_port 60000 \ ./debug.py \ --output_dir $OUTPUT_DIR \ --deepspeed ./deepspeed_configs/ds_config_zero3.json What happens: - The code will run forever. No error message is shown. ### Expected behavior The script terminates successfully.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20438/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20437
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20437/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20437/comments
https://api.github.com/repos/huggingface/transformers/issues/20437/events
https://github.com/huggingface/transformers/pull/20437
1,463,454,125
PR_kwDOCUB6oc5DpvvA
20,437
Rework the pipeline tutorial
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Should we also make a brief mention of [chunk batching](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-chunk-batching)?\r\n\r\nI feel like it's nice that it is transparent for most users, but adding a link might be nice. How would you present it ?", "I accepted most of the comments which are pure improvement save for two, where I think the result would be less than it is currently.\r\n\r\nAlso I feel the language tone is less human-like and more polished overall.\r\nMaking things polished and neutral is probably a lot better, especially for non natives. \r\nI'm just mentioning that because for tutorials, I like when there's a story unraveling, and not too monotone. \r\nHere I think the result with your suggestions are balanced between the two.\r\nThank you !!", "Thanks @stevhliu for the remarks ! \r\n" ]
1,669
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? - Switch to `asr` instead of another NLP task. - It also has simpler to understand results. - Added a section with interaction with `datasets`. - Added a section with writing a simple webserver. Should help users: https://github.com/huggingface/transformers/issues/20414 @stevhliu @sgugger @mishig25 If I could have some initial feedback on the general direction that'd be great. After the direction is validated, I will go and fix all the tests and links. And also check more the actual result formatting in the end documentation to add/remove stuff like `<tip>` to make it hopefully more readable. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. 
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20437/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20437/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20437", "html_url": "https://github.com/huggingface/transformers/pull/20437", "diff_url": "https://github.com/huggingface/transformers/pull/20437.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20437.patch", "merged_at": 1670320051000 }
https://api.github.com/repos/huggingface/transformers/issues/20436
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20436/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20436/comments
https://api.github.com/repos/huggingface/transformers/issues/20436/events
https://github.com/huggingface/transformers/pull/20436
1,463,365,533
PR_kwDOCUB6oc5Dpc3Q
20,436
Fix ESM checkpoints for tests
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
MEMBER
null
Some of the ESM tests were still using my checkpoints instead of `facebook/` ones, so this PR fixes that! Also, TF inference tests are now enabled.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20436/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20436", "html_url": "https://github.com/huggingface/transformers/pull/20436", "diff_url": "https://github.com/huggingface/transformers/pull/20436.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20436.patch", "merged_at": 1669641569000 }
https://api.github.com/repos/huggingface/transformers/issues/20435
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20435/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20435/comments
https://api.github.com/repos/huggingface/transformers/issues/20435/events
https://github.com/huggingface/transformers/issues/20435
1,463,314,027
I_kwDOCUB6oc5XOGZr
20,435
Add DPT-hybrid
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @NielsRogge Can I take this up if no one is working on it?", "This would be great @nandwalritik ! The Stable Diffusion Depth estimation model depends on it, so we'd definitely help you in whatever way we can :-) ", "@NielsRogge should we port the ResNetv2 exactly like ResNetv1: https://github.com/huggingface/transformers/blob/main/src/transformers/models/resnet/modeling_resnet.py or could we directly port it from `timm`?", "Could you provide some more links on how to add ViT-Hybrid?", "@patrickvonplaten yes we can add ResNetv2 in the same way as we added Resnet V1. Meaning, a separate standalone model in the library.\r\n\r\nOnce we have that, we have everything we need to define modeling_dpt_hybrid.py. This would be a copy of modeling_dpt.py, except that we leverage ResNetv2 (we can do that using the new AutoBackbone API which was just added) in the DPTViTEmbeddings class. \r\n\r\nFor stable diffusion there's no need to add hybrid ViT as a standalone model now (i.e. we can add modeling_hybrid_vit.py another time).", "@NielsRogge @patrickvonplaten Can you let me know the next steps, what should I add first?\r\nResNetv2 -> DPT-hybrid, leaving ViT-hybrid as Niels mentioned above, will this be the order?", "@nandwalritik yes, let's start by adding ResNetv2, based on timm's implementation found here: https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/resnetv2.py.\r\n\r\nNext, we can add DPT-hybrid, based on the modeling files here: https://github.com/isl-org/DPT/tree/main/dpt\r\n\r\nThis can be done in 2 separate PR's", "Ok I Will start with ResNetv2 and then I will try adding DPT-hybrid.", "Should I use `add-new-model` or `add-new-model-like` command as we already have ResNet implementation in huggingface ?", "You can use `add-new-model-like` and start from `resnet` (we might deprecate `add-new-model`)" ]
1,669
1,670
1,670
CONTRIBUTOR
null
### Model description DPT-hybrid is used in Stable Diffusion 2.0's depth model and is also a nice depth estimation model in general. We already support [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) - which is also the default model of our [depth estimation pipeline](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.DepthEstimationPipeline), but not DPT-hybrid. The latter uses ViT-hybrid as backbone. Hence, we first would need to add ViT-hybrid to the library. This model is very similar to a regular ViT, except that instead of patchifying an image and embedding each patch, this model uses a pre-trained ResNetv2 to embed the image before feeding the features to a Transformer encoder. This means that we first need to add ResNetv2 to the library. 😅 however that seems fairly doable, I'd recommend porting the one from timm: https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/resnetv2.py. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation DPT-hybrid is available here: https://github.com/isl-org/MiDaS
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20435/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20435/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20434
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20434/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20434/comments
https://api.github.com/repos/huggingface/transformers/issues/20434/events
https://github.com/huggingface/transformers/pull/20434
1,462,862,817
PR_kwDOCUB6oc5DnwK4
20,434
make tensors in function build_relative_position created on proper device
{ "login": "qq775294390", "id": 30080441, "node_id": "MDQ6VXNlcjMwMDgwNDQx", "avatar_url": "https://avatars.githubusercontent.com/u/30080441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qq775294390", "html_url": "https://github.com/qq775294390", "followers_url": "https://api.github.com/users/qq775294390/followers", "following_url": "https://api.github.com/users/qq775294390/following{/other_user}", "gists_url": "https://api.github.com/users/qq775294390/gists{/gist_id}", "starred_url": "https://api.github.com/users/qq775294390/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qq775294390/subscriptions", "organizations_url": "https://api.github.com/users/qq775294390/orgs", "repos_url": "https://api.github.com/users/qq775294390/repos", "events_url": "https://api.github.com/users/qq775294390/events{/privacy}", "received_events_url": "https://api.github.com/users/qq775294390/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
…vice instead of always on cpu # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #20413 This PR makes tensors in function build_relative_position created on proper device instead of always on cpu. More details in #20413 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. 
Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20434/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20434/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20434", "html_url": "https://github.com/huggingface/transformers/pull/20434", "diff_url": "https://github.com/huggingface/transformers/pull/20434.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20434.patch", "merged_at": 1669643101000 }
https://api.github.com/repos/huggingface/transformers/issues/20433
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20433/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20433/comments
https://api.github.com/repos/huggingface/transformers/issues/20433/events
https://github.com/huggingface/transformers/pull/20433
1,462,848,843
PR_kwDOCUB6oc5DntJN
20,433
Replace assertions with ValueErrors on distilbert model
{ "login": "JuheonChu", "id": 35699839, "node_id": "MDQ6VXNlcjM1Njk5ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JuheonChu", "html_url": "https://github.com/JuheonChu", "followers_url": "https://api.github.com/users/JuheonChu/followers", "following_url": "https://api.github.com/users/JuheonChu/following{/other_user}", "gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions", "organizations_url": "https://api.github.com/users/JuheonChu/orgs", "repos_url": "https://api.github.com/users/JuheonChu/repos", "events_url": "https://api.github.com/users/JuheonChu/events{/privacy}", "received_events_url": "https://api.github.com/users/JuheonChu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This is a new PR of my own based on your suggestion from [here](https://github.com/huggingface/transformers/pull/20375). files which improved to pass all the validity checks. \r\nCan I ask for your further suggestions for this PR to be merged? \r\nAny comments will be very appreciated! \r\n\r\nBelow mentions of PR #12789 is a related PR regarding the replacement of assertions with raising exceptions with conditions that are contrary to the pre-defined conditions.\r\n", "Hi @JuheonChu ! 👋\nThanks for the PR ;) will review it asap and let you know!", "Thank you @younesbelkada for providing us with valuable suggestions! We will make changes and make another PR! \r\nSincerely appreciate it :)", "No worries @JuheonChu ! \r\nAs I made you some suggestions, actually you can directly continue on this PR, let me guide you through this step by step: \r\nStep 1: Go to \"file changed\" (top right of the github UI of this PR): \r\n<img width=\"921\" alt=\"Screenshot 2022-11-25 at 22 17 47\" src=\"https://user-images.githubusercontent.com/49240599/204059724-181f2dbc-9e09-4031-bb8c-3499b3ab12ce.png\">\r\n\r\nStep 2: For each suggestion, click on \"Add suggestion to batch\", to add each suggestion\r\n<img width=\"921\" alt=\"Screenshot 2022-11-25 at 22 18 59\" src=\"https://user-images.githubusercontent.com/49240599/204059801-2e1f1a1a-cf9b-43a9-8e9b-bf8d43ea4b84.png\">\r\n\r\nStep 3: Once you have added all the suggestions (make sure to add all of them), a pop up bar will appear on the top-right corner, and you can just click to it, and the suggestions will be pushed ;) \r\n\r\n<img width=\"1467\" alt=\"Screenshot 2022-11-25 at 22 19 43\" src=\"https://user-images.githubusercontent.com/49240599/204059837-7f8bf1fc-4c63-4234-8dae-48b52b38eb20.png\">\r\n\r\nThis way no need to open a new PR each time we do a suggestion ;) ! Let me know if anything is unclear! 
", "Due to the reformatting of jupyter files, I created a new PR [here](https://github.com/huggingface/transformers/pull/20463) which passes all the validation checks. I apologize for any inconvenience, but do you mind if you can check [here](https://github.com/huggingface/transformers/pull/20463)?\r\n\r\nTo: @younesbelkada ", "Sure! I propose to close this PR as we moved everything on the other PR ;)" ]
1,669
1,670
1,669
CONTRIBUTOR
null
Co-author: @batese2001 This is an extended PR from [here](https://github.com/huggingface/transformers/pull/20432) for new valid-checks to improve _code quality._ This raises exceptions on Multi-head Attention models for following specified conditions on the _Distilbert_ model. This successfully replaces assertions with ValueErrors with the _Distilbert_ model related to #12789. Would you be open to me changing assertions if I encounter other ones? To: @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20433/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20433/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20433", "html_url": "https://github.com/huggingface/transformers/pull/20433", "diff_url": "https://github.com/huggingface/transformers/pull/20433.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20433.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20432
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20432/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20432/comments
https://api.github.com/repos/huggingface/transformers/issues/20432/events
https://github.com/huggingface/transformers/pull/20432
1,462,803,217
PR_kwDOCUB6oc5DnjS1
20,432
Raise Value Error on Distilbert Model
{ "login": "JuheonChu", "id": 35699839, "node_id": "MDQ6VXNlcjM1Njk5ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JuheonChu", "html_url": "https://github.com/JuheonChu", "followers_url": "https://api.github.com/users/JuheonChu/followers", "following_url": "https://api.github.com/users/JuheonChu/following{/other_user}", "gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions", "organizations_url": "https://api.github.com/users/JuheonChu/orgs", "repos_url": "https://api.github.com/users/JuheonChu/repos", "events_url": "https://api.github.com/users/JuheonChu/events{/privacy}", "received_events_url": "https://api.github.com/users/JuheonChu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Validity Checks will be improved. Closing PR." ]
1,669
1,669
1,669
CONTRIBUTOR
null
Co-author: @batese2001 This PR is a new PR extended from https://github.com/huggingface/transformers/pull/20375.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20432/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20432", "html_url": "https://github.com/huggingface/transformers/pull/20432", "diff_url": "https://github.com/huggingface/transformers/pull/20432.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20432.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20431
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20431/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20431/comments
https://api.github.com/repos/huggingface/transformers/issues/20431/events
https://github.com/huggingface/transformers/issues/20431
1,462,711,390
I_kwDOCUB6oc5XLzRe
20,431
Last hidden states different for same input even when evaluating ViT model
{ "login": "robbohua", "id": 97416182, "node_id": "U_kgDOBc5z9g", "avatar_url": "https://avatars.githubusercontent.com/u/97416182?v=4", "gravatar_id": "", "url": "https://api.github.com/users/robbohua", "html_url": "https://github.com/robbohua", "followers_url": "https://api.github.com/users/robbohua/followers", "following_url": "https://api.github.com/users/robbohua/following{/other_user}", "gists_url": "https://api.github.com/users/robbohua/gists{/gist_id}", "starred_url": "https://api.github.com/users/robbohua/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/robbohua/subscriptions", "organizations_url": "https://api.github.com/users/robbohua/orgs", "repos_url": "https://api.github.com/users/robbohua/repos", "events_url": "https://api.github.com/users/robbohua/events{/privacy}", "received_events_url": "https://api.github.com/users/robbohua/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nYes ViTMAE generates a random boolean mask to mask patches internally. This results in non-deterministic hidden states when you perform a forward pass twice on the same image.\r\n\r\nTo get deterministic behaviour, you can pass a noise tensor yourself as shown [here](https://github.com/huggingface/transformers/blob/afce73bd9d891b55dcb8d4d875d17718ffa01ff0/tests/models/vit_mae/test_modeling_vit_mae.py#L321). ", "Thanks!", "Btw another way (in case you don't want to mask patches) is to set `config.mask_ratio=0.0`" ]
1,669
1,669
1,669
NONE
null
### System Info Google Colab ### Who can help? @NielsRogge @sg ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I've created a public Colab notebook: https://colab.research.google.com/drive/1CUpyNInQg2kw7gL-mYwX8ov8fc9HuAV-#scrollTo=DAXcuQXVtTkz where a VitMAE model is created, and its weights saved using Trainer. Then it is loaded again, and placed in eval mode. However the last hidden state changes when it is passed the same input - why might that be happening? ### Expected behavior A model in eval mode should have a deterministic output for the same input
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20431/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20430
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20430/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20430/comments
https://api.github.com/repos/huggingface/transformers/issues/20430/events
https://github.com/huggingface/transformers/issues/20430
1,462,662,334
I_kwDOCUB6oc5XLnS-
20,430
Questions on ViT and ViT-MAE model image preprocessing
{ "login": "zhoutong-fu", "id": 64811959, "node_id": "MDQ6VXNlcjY0ODExOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/64811959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhoutong-fu", "html_url": "https://github.com/zhoutong-fu", "followers_url": "https://api.github.com/users/zhoutong-fu/followers", "following_url": "https://api.github.com/users/zhoutong-fu/following{/other_user}", "gists_url": "https://api.github.com/users/zhoutong-fu/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhoutong-fu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhoutong-fu/subscriptions", "organizations_url": "https://api.github.com/users/zhoutong-fu/orgs", "repos_url": "https://api.github.com/users/zhoutong-fu/repos", "events_url": "https://api.github.com/users/zhoutong-fu/events{/privacy}", "received_events_url": "https://api.github.com/users/zhoutong-fu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nThanks for your interest in ViT and ViT MAE!\r\n\r\nRegarding point 1; ViT was ported from the timm library, which also uses a scale factor of 1/255. This can be verified as follows:\r\n\r\n```\r\nfrom timm.data import resolve_data_config\r\nfrom timm.data.transforms_factory import create_transform\r\n\r\nmodel = timm.create_model('vit_base_patch16_224',pretrained=True)\r\nmodel.eval()\r\n\r\n# Create Transform\r\ntransform = create_transform(**resolve_data_config(model.pretrained_cfg, model=model))\r\nprint(transform)\r\n```\r\nwhich prints:\r\n```\r\nCompose(\r\n Resize(size=248, interpolation=bicubic, max_size=None, antialias=None)\r\n CenterCrop(size=(224, 224))\r\n ToTensor()\r\n Normalize(mean=tensor([0.5000, 0.5000, 0.5000]), std=tensor([0.5000, 0.5000, 0.5000]))\r\n)\r\n```\r\n=> here, [ToTensor](https://pytorch.org/vision/stable/generated/torchvision.transforms.ToTensor.html) is used, which as stated in the docs converts a tensor with values 0 - 255 to the range [0-1]. Maybe @rwightman can clarify this.\r\n\r\nRegarding point 2:\r\n\r\nThat's a valid point, not sure why I missed that. We could update this, although I'm not sure it has a big impact on downstream results. cc @sgugger ", "@NielsRogge @zhoutong-fu\r\n\r\nToTensor: uint8 [0, 255] -> float [0, 1.0]\r\nNormalize(mean=tensor([0.5000, 0.5000, 0.5000]), std=tensor([0.5000, 0.5000, 0.5000])): float [0, 1.0] -> float [-1, 1]\r\n\r\nInterpolation should be bicubic for both ViT and ViT MAE\r\n\r\n\r\n", "Thank you guys for the discussion.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### System Info Latest version ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi @NielsRogge, I'm reading the ViT image processing from HF and its original implementation and have a few questions on image processing. Really appreciate your helping me understand the differences. **ViT** The JAX implementation from Google rescales the pixel range to [-1, 1]: https://github.com/google-research/vision_transformer/blob/main/vit_jax/input_pipeline.py#L214 HF rescales it by a factor of 1/255: https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/image_processing_vit.py#L80 **ViT MAE** PyTorch implementation from Meta resizes the image with PIL.Image.BICUBIC interpolation: https://github.com/facebookresearch/mae/blob/efb2a8062c206524e35e47d04501ed4f544c0ae8/util/datasets.py#L59 HF uses BILINEAR https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/image_processing_vit.py#L78 ### Expected behavior I'd like to have clarifications of the questions above.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20430/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20429
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20429/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20429/comments
https://api.github.com/repos/huggingface/transformers/issues/20429/events
https://github.com/huggingface/transformers/pull/20429
1,462,607,319
PR_kwDOCUB6oc5Dm5d6
20,429
Pass output options to FLAVA multimodal transformer block
{ "login": "KhoomeiK", "id": 32777448, "node_id": "MDQ6VXNlcjMyNzc3NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/32777448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KhoomeiK", "html_url": "https://github.com/KhoomeiK", "followers_url": "https://api.github.com/users/KhoomeiK/followers", "following_url": "https://api.github.com/users/KhoomeiK/following{/other_user}", "gists_url": "https://api.github.com/users/KhoomeiK/gists{/gist_id}", "starred_url": "https://api.github.com/users/KhoomeiK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KhoomeiK/subscriptions", "organizations_url": "https://api.github.com/users/KhoomeiK/orgs", "repos_url": "https://api.github.com/users/KhoomeiK/repos", "events_url": "https://api.github.com/users/KhoomeiK/events{/privacy}", "received_events_url": "https://api.github.com/users/KhoomeiK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20429). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
# What does this PR do? Currently, the FLAVA model passes `output_attentions` and `output_hidden_states` only to its text & image blocks, preventing external access to cross-modal attentions in the multimodal block. This simple PR adds this ability by passing the `output_attentions` and `output_hidden_states` options to `FlavaMultimodalModel` during the `forward` pass. I personally need to access these cross-modal attentions for implementing an auxiliary loss ([IAIS](https://github.com/lancopku/IAIS/)) which uses them, and this PR doesn't significantly change the behavior of this model. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @TristanThrush @apsdehal <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20429/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20429/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20429", "html_url": "https://github.com/huggingface/transformers/pull/20429", "diff_url": "https://github.com/huggingface/transformers/pull/20429.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20429.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20428
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20428/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20428/comments
https://api.github.com/repos/huggingface/transformers/issues/20428/events
https://github.com/huggingface/transformers/issues/20428
1,462,433,497
I_kwDOCUB6oc5XKvbZ
20,428
Migrate old cache to transformers v4.22.0
{ "login": "yzGao22", "id": 104778707, "node_id": "U_kgDOBj7L0w", "avatar_url": "https://avatars.githubusercontent.com/u/104778707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yzGao22", "html_url": "https://github.com/yzGao22", "followers_url": "https://api.github.com/users/yzGao22/followers", "following_url": "https://api.github.com/users/yzGao22/following{/other_user}", "gists_url": "https://api.github.com/users/yzGao22/gists{/gist_id}", "starred_url": "https://api.github.com/users/yzGao22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yzGao22/subscriptions", "organizations_url": "https://api.github.com/users/yzGao22/orgs", "repos_url": "https://api.github.com/users/yzGao22/repos", "events_url": "https://api.github.com/users/yzGao22/events{/privacy}", "received_events_url": "https://api.github.com/users/yzGao22/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am also facing this issue with 4.24", "Looks like there was a connection problem when the util tried to migrate the cache. If you don't care about the cached models, you can just ignore this, everything will work fine.\r\nTo try again to move an old cache to the new format, you can execute\r\n```\r\nfrom transformers.utils.hub import move_cache\r\n\r\nmove_cache()\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Looks like there was a connection problem when the util tried to migrate the cache. If you don't care about the cached models, you can just ignore this, everything will work fine. To try again to move an old cache to the new format, you can execute\r\n> \r\n> ```\r\n> from transformers.util.hub import mvoe_cache\r\n> \r\n> move_cache()\r\n> ```\r\n\r\nIn case anyone else finds this, there are two typos in the supplied command. The correct import is:\r\n\r\n`from transformers.utils.hub import move_cache`" ]
1,669
1,693
1,672
NONE
null
### System Info - `transformers` version: 4.22.0 - Platform: Linux-5.4.0-1089-aws-x86_64-with-debian-buster-sid - Python version: 3.7.10 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When update transformers from v4.12.3 to v4.22.0, got following message: ``` The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`. Moving 55 files to the new cache system 0% 0/55 [00:00<?, ?it/s] There was a problem when trying to move your cache: File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/utils/hub.py", line 1127, in <module> move_cache() File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/utils/hub.py", line 1071, in move_cache hub_metadata[url] = get_hub_metadata(url, token=token) File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/utils/hub.py", line 996, in get_hub_metadata huggingface_hub.file_download._raise_for_status(r) Please file an issue at https://github.com/huggingface/transformers/issues/new/choose and copy paste this whole message and we will do our best to help. ``` ### Expected behavior The migrating process can be done successfully.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20428/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20427
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20427/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20427/comments
https://api.github.com/repos/huggingface/transformers/issues/20427/events
https://github.com/huggingface/transformers/pull/20427
1,462,417,504
PR_kwDOCUB6oc5DmPhi
20,427
Add deprecation warning when image FE instantiated
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,670
1,670
COLLABORATOR
null
# What does this PR do? Adds a deprecation warning if someone tries to create a vision feature extractor. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20427/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20427", "html_url": "https://github.com/huggingface/transformers/pull/20427", "diff_url": "https://github.com/huggingface/transformers/pull/20427.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20427.patch", "merged_at": 1670532455000 }
https://api.github.com/repos/huggingface/transformers/issues/20426
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20426/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20426/comments
https://api.github.com/repos/huggingface/transformers/issues/20426/events
https://github.com/huggingface/transformers/pull/20426
1,462,388,132
PR_kwDOCUB6oc5DmJLx
20,426
Pipeline testing - using tiny models on Hub
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "_The documentation is not available anymore as the PR was closed or merged._", "Gently pinging @LysandreJik for review at their convenience. We discussed last time offline the pipeline testing will eventually avoid using metaclass - I will work on it in a future PR. I think it's better to have progressive changes toward our ultimate goal 😊🙏", "Thank you for the ping, I'll have a look!", "Hi @Narsil\r\n\r\nIn this PR, commit [3d46ed81](https://github.com/huggingface/transformers/pull/20426/commits/3d46ed81a7b688b13a85114b9f4168242e24e902), I revert some changes in your (merged) PR #20851.\r\n\r\nIn short: this `def get_test_pipeline(self, model, tokenizer, feature_extractor, image_processor):` is changed to `get_test_pipeline(self, model, tokenizer, processor):`\r\n\r\nBefore your PR, it was `get_test_pipeline(self, model, tokenizer, feature_extractor):`\r\n\r\nMore context:\r\n- This PR leverages the uploaded checkpoints on the Hub for pipeline testing.\r\n- In a follow-up PR, we plan to remove the usage of `PipelineTestCaseMeta`\r\n - (therefore, this particular change will be just short-lived)\r\n\r\nLet me know if you have any question or comment 🙏\r\n", "This change was necessary to get some tests running.\n\nNamely testing that oneformer and the like are actually working. These models **do not** have a feature extractor, only an `ImageProcessor`. So how can you make it work?\n\nSince you're using tiny models maybe that function could be bypassed entirely?\n\nAlso for the network issue (too many from pretrained iiuc) isn't there a way to download all tiny models once and keep them on the runner so we could run the tests in offline mode? So no network calls? Maybe running network mode if there's a failure? (so downloading the new model) ", "@Narsil \r\n\r\n> These models do not have a feature extractor, only an ImageProcessor. 
So how can you make it work?\r\n\r\n- The creation and upload of tiny models (which is done in another script) should create the tokenizers and/or processors (feature extractors or image processor). During pipeline testing, we just load them. I don't see any problem here, but let me know if I miss any detail.\r\n- (however, the tiny model creation should be run in a regular basis (or triggered by some conditions) in order to make the tiny checkpoints for newly added models available on the hub)\r\n - this is not done yet, but I will work on it too\r\n - `oneformer` doesn't have a tiny model checkpoint yet, so not tested by this PR\r\n - but for other models, even they only have image processors, the tests could pass already\r\n\r\n> Also for the network issue (too many from pretrained iiuc) isn't there a way to download all tiny models once and keep them on the runner\r\n\r\nOn our hosted runners, it's fine (i.e. cached). But what I mentioned is for pull request CI - which runs on `CircleCI`. So far I haven't looked into how to do similar things on it." ]
1,669
1,675
1,675
COLLABORATOR
null
# What does this PR do? Pipeline testing - using tiny models on Hub. A few comments: - This PR moves the tiny model creation from `PipelineTestCaseMeta` (where done dynamically during testing) to `utils/create_dummy_models.py` (where the tiny models are created once and live on the Hub): - The logic is still large, but at least it is done once rather than being created dynamically - When a new model is added in `transformers`, it would **NOT** be used in pipeline testing **UNTIL** we create & upload tiny models for the new model type. - even if we upload the new tiny models (or re-create the existing one), we also have to **UPDATE** [this repo](https://huggingface.co/datasets/hf-internal-testing/tiny-random-model-summary), see comments below - While `pytest` collects the tests to run, the collection is done **in each process** (if we specify `-n N` with `N > 1`): - If we use `from_pretrained` during test collection, there will be too many calls, and the server refuses the requests at some point: the tests being collected will **vary each time and being incomplete** - So I upload a file [processor_classes.json](https://huggingface.co/datasets/hf-internal-testing/tiny-random-model-summary/blob/main/processor_classes.json) containing necessary information to call `gen_test`. The `from_pretrained` will only be called when the test is actually running. - Some tests are just not working (yet), and an important subset of those failed tests is not tested in the current `main` branch - for example, on `main`, all pipeline tests use fast tokenizers - we probably need to check (and possibly fix) some of them, but depends on `impact` and `usage`, we will leave some of them skipped for now
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20426/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20426", "html_url": "https://github.com/huggingface/transformers/pull/20426", "diff_url": "https://github.com/huggingface/transformers/pull/20426.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20426.patch", "merged_at": 1675071584000 }
https://api.github.com/repos/huggingface/transformers/issues/20425
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20425/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20425/comments
https://api.github.com/repos/huggingface/transformers/issues/20425/events
https://github.com/huggingface/transformers/pull/20425
1,462,384,207
PR_kwDOCUB6oc5DmIUP
20,425
Add Donut image processor
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm currently unable to create a PR myself but please note that `DonutImageProcessor.preprocess()` still calls `.pad()` instead of `.pad_image()`, generating a *lot* of logger noise...", "@pasky Thanks for raising! This should now be resolved with the merging of #20904 " ]
1,669
1,672
1,669
COLLABORATOR
null
# What does this PR do? Adds an image processor for Donut. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20425/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20425", "html_url": "https://github.com/huggingface/transformers/pull/20425", "diff_url": "https://github.com/huggingface/transformers/pull/20425.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20425.patch", "merged_at": 1669718281000 }
https://api.github.com/repos/huggingface/transformers/issues/20424
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20424/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20424/comments
https://api.github.com/repos/huggingface/transformers/issues/20424/events
https://github.com/huggingface/transformers/pull/20424
1,462,371,389
PR_kwDOCUB6oc5DmFc4
20,424
Make `add_special_tokens` more clear
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? Fix tokenization. The short term goal is to fix #20418 (well, make things more clear) The long term goal is to be able to have a generic fix for #20401, but the tokenizer thing is somehow complicated, and I can only fix step by step.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20424/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20424", "html_url": "https://github.com/huggingface/transformers/pull/20424", "diff_url": "https://github.com/huggingface/transformers/pull/20424.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20424.patch", "merged_at": 1669809392000 }
https://api.github.com/repos/huggingface/transformers/issues/20423
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20423/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20423/comments
https://api.github.com/repos/huggingface/transformers/issues/20423/events
https://github.com/huggingface/transformers/issues/20423
1,462,285,251
I_kwDOCUB6oc5XKLPD
20,423
ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.10.3. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main
{ "login": "Justinfungi", "id": 79019929, "node_id": "MDQ6VXNlcjc5MDE5OTI5", "avatar_url": "https://avatars.githubusercontent.com/u/79019929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Justinfungi", "html_url": "https://github.com/Justinfungi", "followers_url": "https://api.github.com/users/Justinfungi/followers", "following_url": "https://api.github.com/users/Justinfungi/following{/other_user}", "gists_url": "https://api.github.com/users/Justinfungi/gists{/gist_id}", "starred_url": "https://api.github.com/users/Justinfungi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Justinfungi/subscriptions", "organizations_url": "https://api.github.com/users/Justinfungi/orgs", "repos_url": "https://api.github.com/users/Justinfungi/repos", "events_url": "https://api.github.com/users/Justinfungi/events{/privacy}", "received_events_url": "https://api.github.com/users/Justinfungi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Justinfungi,\r\n\r\nCould you share the command you run that generates this error? Possibly the associated python script?\r\n\r\nThe error suggests that you have a version problem in your python environment. How did you install transformers? Did you try to run the recommended command \"Try: `pip install transformers -U` or `pip install -e '.[dev]'` if you're working with git main\" ? ", "```\r\nimport customtkinter as ctk\r\nimport os\r\nimport torch\r\n#import torchaudio\r\nfrom transformers import AutoModel, AutoTokenizer\r\nimport tortoise\r\n#from tortoise.utils.audio import load_voice\r\n#import vlc\r\n#from tkVideoPlayer import TkinterVideo\r\n\r\nimport tkinter as tk\r\nfrom tkinter import ttk\r\n\r\n# ✅ Works\r\napp = tk.Tk()\r\napp.geometry(\"700x700\")\r\napp.title(\"Justris\")\r\nctk.set_appearance_mode(\"dark\")\r\n\r\n```\r\n\r\nThis is my python script.\r\ni use conda forge to install the transformer\r\nI have run ``` pip install transformers -U ```. It don't generate any error", "Thanks for these details. :hugs: \r\n\r\nGiven your error, I think that what should solve your problem is to install a version of tokenizer compatible as indicated in the error message., e.g. `pip install tokenizers==0.13.2`. Let me know if this works", "[Voice - Jupyter Notebook.pdf](https://github.com/huggingface/transformers/files/10089433/Voice.-.Jupyter.Notebook.pdf)\r\n\r\ni dont know why this error still exist", "Couldn't the problem be that the package is not installed in your virtual environment since you are running the start from your jupyter notebook? \r\n\r\nCf [this thread](https://stackoverflow.com/questions/38368318/installing-a-pip-package-from-within-a-jupyter-notebook-not-working)", "> Couldn't the problem be that the package is not installed in your virtual environment since you are running the start from your jupyter notebook? 
\n> \n> Cf [this thread](https://stackoverflow.com/questions/38368318/installing-a-pip-package-from-within-a-jupyter-notebook-not-working)\n\ni think i have pip install both in jupyter and terminal. But it doesnt work. it is so werid", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### System Info I git the newest version OS ubuntu system python3 ``` ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.10.3. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main ``` ### Who can help? @SaulLu ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Traceback (most recent call last): File "AI_clone.py", line 11, in <module> from transformers import AutoModelForCausalLM, AutoTokenizer File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/__init__.py", line 30, in <module> from . import dependency_versions_check File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/dependency_versions_check.py", line 41, in <module> require_version_core(deps[pkg]) File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/utils/versions.py", line 123, in require_version_core return require_version(requirement, hint) File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/utils/versions.py", line 117, in require_version _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/utils/versions.py", line 51, in _compare_versions f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}" ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.10.3. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main ### Expected behavior How to use the Transformer lib without this error
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20423/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20422
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20422/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20422/comments
https://api.github.com/repos/huggingface/transformers/issues/20422/events
https://github.com/huggingface/transformers/issues/20422
1,462,244,472
I_kwDOCUB6oc5XKBR4
20,422
Linguistic words (not word pieces) enter Transformer
{ "login": "fivehills", "id": 40301946, "node_id": "MDQ6VXNlcjQwMzAxOTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/40301946?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fivehills", "html_url": "https://github.com/fivehills", "followers_url": "https://api.github.com/users/fivehills/followers", "following_url": "https://api.github.com/users/fivehills/following{/other_user}", "gists_url": "https://api.github.com/users/fivehills/gists{/gist_id}", "starred_url": "https://api.github.com/users/fivehills/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fivehills/subscriptions", "organizations_url": "https://api.github.com/users/fivehills/orgs", "repos_url": "https://api.github.com/users/fivehills/repos", "events_url": "https://api.github.com/users/fivehills/events{/privacy}", "received_events_url": "https://api.github.com/users/fivehills/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The model only \"knows\" the tokens present in the tokenizer vocabulary, so you won't be able to pass something else to it.", "Many thanks! @sgugger\r\n\r\nI am wondering whether a separate third-party tokenizer tool (from existing packages) is able to first split sentences into natural words, and compute alignments between full words and the sub-words split by Transformer tokenzier.\r\n\r\nIt will be greatly appreciated if you could kindly point out these tools.", "Hi @fivehills,\r\n\r\nThe fast versions of our tokenizers have methods that will surely be useful to you. I advise you to look at [the section \" Fast tokenizers' special powers\"](https://huggingface.co/course/chapter6/3?fw=pt ) of our course that will explain how you can map tokens to \"words\" and map the tokens back to the input sentence.", "@SaulLu Many thanks!\r\n[the section \" Fast tokenizers' special powers\"](https://huggingface.co/course/chapter6/3?fw=pt) can solve the problem of alignment between word pieces and natural words.", "I'm glad this helped you! I'm closing this issue as it seems to have been resolved :blush: " ]
1,669
1,669
1,669
NONE
null
### System Info Linux Debian Bert GPT ### Who can help? @ArthurZucker @SaulLu @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi, The Tokenizer from the pretrained model tokenizes natural words (delimited by whitespace) into word pieces automatically. For example, ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") tokenizer.tokenize("Laced with dreams-dripping in reality, the American Dream reignites after 9.11 with a true story about the Devil Ray's mid-life rookie , Jimmy Morris. ") ['[CLS]', 'laced', 'with', 'dreams', '-', 'dripping', 'in', 'reality', ',', 'the', 'american', 'dream', 'reign', '##ites', 'after', '9', '.', '11', 'with', 'a', 'true', 'story', 'about', 'the', 'devil', 'ray', "'", 's', 'mid', '-', 'life', 'rookie', ',', 'jimmy', 'morris', '.', '[SEP]'] # gpt2 ['<|endoftext|>', 'L', 'aced', 'Ġwith', 'Ġdreams', 'Ġ-', 'Ġdripping', 'Ġin', 'Ġreality', ',', 'Ġthe', 'ĠAmerican', 'ĠDream', 'Ġreign', 'ites', 'Ġafter', 'Ġ9', '.', '11', 'Ġwith', 'Ġa', 'Ġtrue', 'Ġstory', 'Ġabout', 'Ġthe', 'ĠDevil', 'ĠRay', "'s", 'Ġmid', '-', 'life', 'Ġrookie', ',', 'ĠJimmy', 'ĠMorris', '.', '<|endoftext|>'] # xlnet-base-cased ['<cls>', '▁Lac', 'ed', '▁with', '▁dreams', '▁', '-', '▁dripping', '▁in', '▁reality', ',', '▁the', '▁American', '▁Dream', '▁reign', 'ites', '▁after', '▁9', '.', '11', '▁with', '▁a', '▁true', '▁story', '▁about', '▁the', '▁Devil', '▁Ray', "'", 's', '▁mid', '-', 'life', '▁rookie', ',', '▁Jimmy', '▁Morris', '.', '</s>'] # xlm-mlm-enfr-1024 ['<s>', 'laced</w>', 'with</w>', 'dreams</w>', '-</w>', 'dri', 'pping</w>', 'in</w>', 'reality</w>', ',</w>', 'the</w>', 'americ', 'an</w>', 'dream</w>', 're', 'ign', 'ites</w>', 'after</w>', '9.', '11</w>', 'with</w>', 'a</w>', 'true</w>', 'story</w>', 
'about</w>', 'the</w>', 'devil</w>', 'ray</w>', "'s</w>", 'mid</w>', '-</w>', 'life</w>', 'rookie</w>', ',</w>', 'j', 'im', 'my</w>', 'mor', 'ris</w>', '.</w>', '</s>'] ``` However, I want to tokenize the sentence into linguistic words rather than word pieces when the Transformer pretrained model is introduced and its Tokenizer is employed. I want to use natural words to enter transformer. The result I want to get and the natural words enter the Transforer model to do some calculations. ``` ['Laced', 'with', 'dreams-dripping', 'in', 'reality', ',', 'the', 'American', 'Dream', 'reignites', 'after', '9.11', 'with', 'a', 'true', 'story', 'about', 'the', 'Devil', 'Ray', "'s", 'mid-life', 'rookie', ',', 'Jimmy', 'Morris', '.'] ``` How to make some setup in the Tokenizer to realize this? Many thanks! Best, Kevin ### Expected behavior The result I want to get is the natural words which enter the Transforer model to do some calculations. ``` ['Laced', 'with', 'dreams-dripping', 'in', 'reality', ',', 'the', 'American', 'Dream', 'reignites', 'after', '9.11', 'with', 'a', 'true', 'story', 'about', 'the', 'Devil', 'Ray', "'s", 'mid-life', 'rookie', ',', 'Jimmy', 'Morris', '.'] ``` How to make some setup in the Tokenizer to realize this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20422/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20421
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20421/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20421/comments
https://api.github.com/repos/huggingface/transformers/issues/20421/events
https://github.com/huggingface/transformers/pull/20421
1,462,216,162
PR_kwDOCUB6oc5Dli8D
20,421
add in layer gpt2 tokenizer
{ "login": "piEsposito", "id": 47679710, "node_id": "MDQ6VXNlcjQ3Njc5NzEw", "avatar_url": "https://avatars.githubusercontent.com/u/47679710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/piEsposito", "html_url": "https://github.com/piEsposito", "followers_url": "https://api.github.com/users/piEsposito/followers", "following_url": "https://api.github.com/users/piEsposito/following{/other_user}", "gists_url": "https://api.github.com/users/piEsposito/gists{/gist_id}", "starred_url": "https://api.github.com/users/piEsposito/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/piEsposito/subscriptions", "organizations_url": "https://api.github.com/users/piEsposito/orgs", "repos_url": "https://api.github.com/users/piEsposito/repos", "events_url": "https://api.github.com/users/piEsposito/events{/privacy}", "received_events_url": "https://api.github.com/users/piEsposito/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Just figured we have to solve some max lengths on the xla_tests. Changing the way we get the shapes broke some stuff. \r\n\r\nTomorrow I'll fix it either on the method or on all the broken tests. Feel free to do it if anyone here is in a rush. ", "Actually, if you want I can handle the generate stuff on another PR, but IMHO it makes sense to keep it here, as it ensures the model will actually work along with the tokenizer. ", "Yeah, I would definitely keep any generate fixes that you need in this PR. Overall this looks extremely good, though, and great job figuring out a mapping from the existing BPE tokenizers to TF BPE that works!", "Because this is touching the core library (by adding a `keras-nlp` dependency and an `is_keras_nlp_available()` check) and TF `generate()` code, I'm going to ping @sgugger and @gante to take a look too just to make sure it's all okay!", "Yeah @Rocketknight1 actually I'm think that the generate thing can get bigger than this PR. Let me cut it out and open another PR for that.", "Actually, just did it. Let's keep it simple and handle TF hell on another time. ", "Totally fine with that too - as long as it works for the core model we can add extra features over time. I'd love to see how many models in our codebase we could just copy-paste this new approach to as well!", "@Rocketknight1 it works for the core model as per a test on this PR. Do you have any idea on why the add model like runner test is not passing? It seems related to some strange code quality check on a file I didn't change.", "@piEsposito This happens sometimes - it's usually because you just happened to fork the repo at a bad time when that bug was present. 
The easiest way to fix these issues is just to rebase your PR and force push.\r\n\r\nAlso, since we're adding a new dependency here I want to wait to get a couple of reviews from the core team, but half the company is missing for Thanksgiving, so things might be a little slow with this PR until Monday! I definitely want to push to get it in for the next release, though.", "@Rocketknight1 happy thanksgiving! \r\n\r\nAll right, let me rebase it and we wait for the other folks to review it. Thank you for your support :D.", "Should finish to address your review early next week. Stable Diffusion v2 got me into the rabbit hole haha. ", "btw @piEsposito if rebasing isn't fixing those tests, don't worry - they're very clearly in files totally untouched by this PR, so I'm happy to merge with them still red! Let me know whenever you're happy with the rest of the PR, and also note that we're planning a branch cut on Tuesday for the release, so this can go into the release if we merge before then!", "@Rocketknight1 this should be finished today if I'm lucky. Thank you for understanding the red test thing.", "@Rocketknight1 by mistake I removed you from the reviews, but I was actually trying to ask you to do one. I'm sorry.\r\n\r\n@gante i've addressed your review.", "@Rocketknight1 you can merge it now to keep it on the next release.\n\nI will start trying to replicate this to other BPE tokenizers in the sequence.\n\nThank you, @gante and @sgugger for the kindness and support." ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? - Adds in layer `TFGPT2Tokenizer` to enable serialization and serving it with TF Serving - Small fixes on Tensorflow generation utils and GPT2 attentions to use `tf.shape` instead of `Tensor.shape` to solve max sequence length; and - Explicitly unstacking the past key values on TF GPT2 Attention layer to avoid `None` shape issues that come with `generate` on TF compiled models. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Addresses first step of #19992 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. -> https://github.com/huggingface/transformers/issues/19992 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). 
- [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20421/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20421/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20421", "html_url": "https://github.com/huggingface/transformers/pull/20421", "diff_url": "https://github.com/huggingface/transformers/pull/20421.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20421.patch", "merged_at": 1669734160000 }
https://api.github.com/repos/huggingface/transformers/issues/20420
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20420/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20420/comments
https://api.github.com/repos/huggingface/transformers/issues/20420/events
https://github.com/huggingface/transformers/pull/20420
1,462,206,497
PR_kwDOCUB6oc5DlgzR
20,420
Add BioGPT
{ "login": "kamalkraj", "id": 17096858, "node_id": "MDQ6VXNlcjE3MDk2ODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamalkraj", "html_url": "https://github.com/kamalkraj", "followers_url": "https://api.github.com/users/kamalkraj/followers", "following_url": "https://api.github.com/users/kamalkraj/following{/other_user}", "gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions", "organizations_url": "https://api.github.com/users/kamalkraj/orgs", "repos_url": "https://api.github.com/users/kamalkraj/repos", "events_url": "https://api.github.com/users/kamalkraj/events{/privacy}", "received_events_url": "https://api.github.com/users/kamalkraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@younesbelkada \r\nDone changes according to your suggestions.\r\nThanks for the review", "Thanks @kamalkraj \nLet me give it another round of review and I'll get back to you", "@sgugger \r\nDone changes according to your suggestions.\r\nThanks for the review", "Hi @kamalkraj \r\nThe repo has been moved to microsoft: https://huggingface.co/microsoft/biogpt \r\nCould you please update the PR accordingly? Also it seems that you need to rebase to main\r\nThanks!", "@younesbelkada Done the changes \r\nThanks", "thanks @kamalkraj !\r\nIt seems that styling tests are failing, could you please run `make fixup`?", "@younesbelkada fixed", "Thanks so much @kamalkraj !\nLet's leave it now to @sgugger to give his review ;) \nThanks!" ]
1,669
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? Adding BioGPT Original Implementation and weights - https://github.com/microsoft/BioGPT <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20420/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20420/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20420", "html_url": "https://github.com/huggingface/transformers/pull/20420", "diff_url": "https://github.com/huggingface/transformers/pull/20420.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20420.patch", "merged_at": 1670253124000 }
https://api.github.com/repos/huggingface/transformers/issues/20419
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20419/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20419/comments
https://api.github.com/repos/huggingface/transformers/issues/20419/events
https://github.com/huggingface/transformers/pull/20419
1,462,130,006
PR_kwDOCUB6oc5DlQEQ
20,419
Fix device in longformer onnx path
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "gently pinging @sgugger for final approval :)", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? Longformer has a custom path in its `_chunk()` method, in order to be tracable (to some extent) + exportable to ONNX. https://github.com/huggingface/transformers/pull/20292 fixed a bug where this special path was always registering a non-general case: https://github.com/huggingface/transformers/blob/9ef46659da45f6b605873ca59124d03976990b33/src/transformers/models/longformer/modeling_longformer.py#L785-L787 It seems that the `else` path that should be taken in the export was actually never tested, and notably never tested on GPU. This PR fixes a device assignment. The test `RUN_SLOW=1 python -m pytest -v tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_102_longformer_token_classification` is now running fine. ## Who can review? @lewtun @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20419/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20419", "html_url": "https://github.com/huggingface/transformers/pull/20419", "diff_url": "https://github.com/huggingface/transformers/pull/20419.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20419.patch", "merged_at": 1669234021000 }
https://api.github.com/repos/huggingface/transformers/issues/20418
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20418/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20418/comments
https://api.github.com/repos/huggingface/transformers/issues/20418/events
https://github.com/huggingface/transformers/issues/20418
1,462,108,080
I_kwDOCUB6oc5XJf-w
20,418
`additional_special_tokens` is replaced instead of being updated
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I also saw this issue, and found it very unintuitive. Should be addressed in V5 imo. Cc @LysandreJik @SaulLu ", "To share what has been discussed on slack. \r\n\r\nI think the naming is confusing but the current behavior of the method is useful because we need to be able to completely change the list of tokens associated with additional special tokens. Maybe calling this method `set_special_tokens` would be less confusing?\r\n\r\nBasically, the difficulty here is that I think it's not necessarily obvious what types of tokens a tokenizer can have.\r\n\r\nA tokenizer has a vocabulary (a dictionary mapping tokens to ids) that consists of:\r\n1. The vocabulary learned with a tokenization algorithm (BPE, wordpiece or Unigram) \r\n2. The vocabulary corresponding to the tokens added afterwards and which will be isolated upstream of the tokenization algorithm. They are called `added_tokens`.\r\n\r\nAfterwards, some tokens can be special in the sense that they will be associated with a specific role for the model (e.g. the begin of sentence token, the mask token). The additional special tokens list is here to be able to flag more tokens as special for the models that need them [_Even if for the model it is not safe to go and find a special token based on an index in a list but that's another subject_].\r\n", "The problem is that we need some consistency between `additional_special_tokens` and `added_tokens_encoder`. I don't mean we can achieve this 100%, but in this example, if the role of `additional_special_tokens` is to `completely change the list of tokens associated with additional special tokens`, we can't have `added_tokens_encoder` still keep the original added tokens. 
This doesn't make any sense and can cause to strange errors like the wrong length of the full tokenizer, which is defined as\r\n```python\r\n def __len__(self):\r\n \"\"\"\r\n Size of the full vocabulary with the added tokens.\r\n \"\"\"\r\n return self.vocab_size + len(self.added_tokens_encoder)\r\n```", "I understood what was bothering you :smile: , that's why in my previous message I tried to show that the notion of \"special token\" is different from the notion of \"added token\". So from my point of view, we _don't absolutely need_ some consistency between `additional_special_tokens` and `added_tokens_encoder`.\r\n\r\nIn the end, what you propose is maybe the desirable behavior of the tokenizer but before making this breaking change I would like to explain why I think that the length of the full tokenizer is not wrong now. \r\n\r\nCurrently, the approach is that it's not because we change a flag on a token (e.g. removing a token from the `additional_special_tokens` list) that we necessarily want to remove it from the vocabulary. Indeed, if we do so, it means that we can potentially reassign these ids to new tokens, which is not an action without consequences either. From my point of view, if we are willing to change the current behavior we need to answer the following questions: in which situations would a user use the `add_special_tokens` method? Can it be used once the `additional_special_tokens`, `bos_token`, etc are already set? If yes, does the user want to exclude those previous tokens from the vocabulary? Can the effect of the method be error-prone (on the result of the model)?\r\n\r\nFinally, what I want to say is that I was told to avoid breaking changes as much as possible in transformers. That's why I spent a lot of time trying to figure out the intention behind the effect of each method. In this case, I think there is a valid reason why these methods act the way they do. 
Maybe today the usage has changed a lot and this effect has finally more disadvantages than advantages. For this particular discussion, I'm not (yet) convinced that's the case.\r\n\r\n---------------\r\nTo illustrate why I think that the length of the full tokenizer is not currently wrong.\r\n\r\n```python\r\nfrom transformers import T5Tokenizer\r\nfrom transformers.tokenization_utils import AddedToken\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-small\", extra_ids=0, additional_special_tokens=[\"new_token_1\"])\r\ntext = \"this is a text with new_token_1, new_token_2 and new_token_3 \"\r\n\r\nprint(tokenizer.additional_special_tokens)\r\nprint(tokenizer.added_tokens_encoder)\r\nprint(tokenizer.convert_ids_to_tokens(tokenizer.encode(text)))\r\nprint(\"***\")\r\n\r\ntokenizer.add_special_tokens({\"additional_special_tokens\": [\"new_token_2\"]})\r\nprint(tokenizer.additional_special_tokens)\r\nprint(tokenizer.added_tokens_encoder)\r\nprint(tokenizer.convert_ids_to_tokens(tokenizer.encode(text)))\r\nprint(\"***\")\r\n\r\ntokenizer.add_special_tokens({\"additional_special_tokens\": [\"new_token_3\"]})\r\nprint(tokenizer.additional_special_tokens)\r\nprint(tokenizer.added_tokens_encoder)\r\nprint(tokenizer.convert_ids_to_tokens(tokenizer.encode(text)))\r\n```\r\n\r\n```\r\n['new_token_1']\r\n{'new_token_1': 32000}\r\n['▁this', '▁is', '▁', 'a', '▁text', '▁with', 'new_token_1', '▁', ',', '▁new', '_', 'to', 'ken', '_', '2', '▁and', '▁new', '_', 'to', 'ken', '_', '3', '</s>']\r\n***\r\n['new_token_2']\r\n{'new_token_1': 32000, 'new_token_2': 32001}\r\n['▁this', '▁is', '▁', 'a', '▁text', '▁with', 'new_token_1', '▁', ',', 'new_token_2', '▁and', '▁new', '_', 'to', 'ken', '_', '3', '</s>']\r\n\r\n***\r\n['new_token_3']\r\n{'new_token_1': 32000, 'new_token_2': 32001, 'new_token_3': 32002}\r\n['▁this', '▁is', '▁', 'a', '▁text', '▁with', 'new_token_1', '▁', ',', 'new_token_2', '▁and', 'new_token_3', '</s>']\r\n```\r\nThe last tokenization shows that `new_token_2` and 
`new_token_3` still are in the vocabulary even if they are not \"special tokens\" flagged.\r\n", "Oh! I get your point @SaulLu now ! Thank you for the patience to correct my desire to break everything.\r\n", "> Oh! I get your point @SaulLu now ! Thank you for the patience to correct my desire to break everything.\r\n\r\nNo worries, actually I took a long time to understand this subtlety of the code and I'm glad that it can be useful! In any case, this discussion shows that tokenizers are not easy to understand and that we can surely improve this aspect!\r\n\r\n" ]
1,669
1,669
1,669
COLLABORATOR
null
### Reproduction ``` from transformers import T5Tokenizer from transformers.tokenization_utils import AddedToken tokenizer = T5Tokenizer.from_pretrained("t5-small", extra_ids=0, additional_special_tokens=["new_token_1"]) print(tokenizer.additional_special_tokens) print(tokenizer.added_tokens_encoder) tokenizer.add_special_tokens({"additional_special_tokens": ["new_token_2"]}) print(tokenizer.additional_special_tokens) print(tokenizer.added_tokens_encoder) tokenizer.add_special_tokens({"additional_special_tokens": ["new_token_3"]}) print(tokenizer.additional_special_tokens) print(tokenizer.added_tokens_encoder) ``` gives ``` ['new_token_1'] {'new_token_1': 32000} ['new_token_2'] {'new_token_1': 32000, 'new_token_2': 32001} ['new_token_3'] {'new_token_1': 32000, 'new_token_2': 32001, 'new_token_3': 32002} ``` ### Expected behavior We should get ['new_token_1', 'new_token_2', 'new_token_3']
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20418/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20418/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20417
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20417/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20417/comments
https://api.github.com/repos/huggingface/transformers/issues/20417/events
https://github.com/huggingface/transformers/pull/20417
1,461,950,511
PR_kwDOCUB6oc5DkpGc
20,417
Add FAN Model
{ "login": "kiansierra", "id": 47116198, "node_id": "MDQ6VXNlcjQ3MTE2MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/47116198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiansierra", "html_url": "https://github.com/kiansierra", "followers_url": "https://api.github.com/users/kiansierra/followers", "following_url": "https://api.github.com/users/kiansierra/following{/other_user}", "gists_url": "https://api.github.com/users/kiansierra/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiansierra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiansierra/subscriptions", "organizations_url": "https://api.github.com/users/kiansierra/orgs", "repos_url": "https://api.github.com/users/kiansierra/repos", "events_url": "https://api.github.com/users/kiansierra/events{/privacy}", "received_events_url": "https://api.github.com/users/kiansierra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @sgugger thanks for you're feedback. I'll try to implement the changes soon", "Implemented suggestions by @sgugger.\r\n\r\nPending the change on the README.md path, since I'm uncertain if I need to change only the README.md path or the actual doc path.\r\n\r\nAlso pending rebase", "Thanks for working on this! You need to change the link to the doc in the READMEs as suggested, but not the path to the file. You will also need to rebase/resolve the conflicts.\r\n\r\n@NielsRogge could you have a review before I do a final pass?", "I've applied the README.md update and rebased the branch.\r\n", "Hi @NielsRogge, @sgugger.\r\n\r\nFirst of all happy new year, I hope 2023 is greater success than 2022 was for the huggingface team.\r\nI've resolved the merge conflicts, and was hoping to know if any additional steps were required for this PR?\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20417). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,675
1,675
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #17234 Implements the FAN Models described in this [paper](https://arxiv.org/pdf/2204.12451.pdf) and available in the following [github repo](https://github.com/NVlabs/FAN), Additionally this repo has some of the weights available as described in their README file. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? 
This is a cleanup from previous PR #20288 in order to mantain branch integrity, recommendations by @NielsRogge were implemented ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge, @sgugger, @patrickvonplaten ## Additional Request If this PR gets merged, would it be possible to migrate the model files from [my HF space](https://huggingface.co/ksmcg) to the nvidia space <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20417/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20417/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20417", "html_url": "https://github.com/huggingface/transformers/pull/20417", "diff_url": "https://github.com/huggingface/transformers/pull/20417.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20417.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20416
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20416/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20416/comments
https://api.github.com/repos/huggingface/transformers/issues/20416/events
https://github.com/huggingface/transformers/pull/20416
1,461,894,406
PR_kwDOCUB6oc5DkdFV
20,416
Fix ModelOutput instantiation when there is only one tuple
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? This PR fixes a bug discovered by @NielsRogge [here](https://github.com/huggingface/transformers/pull/20407#discussion_r1030465759). To behave like a dict, `ModelOutput` need to accept a single iterator of key/value pairs as its first argument (otherwise many properties of dictionaries instantiation are left) which caused an issue here since Niels is creating one with a single tuple (but not of key/value pairs). To fix this, added some stronger checks, and new tests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20416/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20416/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20416", "html_url": "https://github.com/huggingface/transformers/pull/20416", "diff_url": "https://github.com/huggingface/transformers/pull/20416.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20416.patch", "merged_at": 1669234161000 }
https://api.github.com/repos/huggingface/transformers/issues/20415
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20415/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20415/comments
https://api.github.com/repos/huggingface/transformers/issues/20415/events
https://github.com/huggingface/transformers/issues/20415
1,461,867,752
I_kwDOCUB6oc5XIlTo
20,415
GPT2LMHeadModel not working with mps device, RuntimeError: tensors must be 2-D
{ "login": "AlexVialaBellander", "id": 42417723, "node_id": "MDQ6VXNlcjQyNDE3NzIz", "avatar_url": "https://avatars.githubusercontent.com/u/42417723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexVialaBellander", "html_url": "https://github.com/AlexVialaBellander", "followers_url": "https://api.github.com/users/AlexVialaBellander/followers", "following_url": "https://api.github.com/users/AlexVialaBellander/following{/other_user}", "gists_url": "https://api.github.com/users/AlexVialaBellander/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexVialaBellander/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexVialaBellander/subscriptions", "organizations_url": "https://api.github.com/users/AlexVialaBellander/orgs", "repos_url": "https://api.github.com/users/AlexVialaBellander/repos", "events_url": "https://api.github.com/users/AlexVialaBellander/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexVialaBellander/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like this comes from an op not implemented in PyTorch. I'd try upgrading PyTorch (even to the nigthlies) as support for MPS is still in progress on their side.", "Thanks @sgugger, sorry for wasting your time. I had pytorch 1.12.1 and 1.13 worked fine." ]
1,669
1,669
1,669
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: macOS-13.0-arm64-arm-64bit - Python version: 3.9.13 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes, (mps) - Using distributed or parallel set-up in script?: no ### Who can help? @patil-suraj, @patrickvonplaten, @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Load the model and add to `mps` device ```python from transformers import GPT2LMHeadModel model = GPT2LMHeadModel.from_pretrained("gpt2-large").to("mps").eval() ``` Then run the model with some sample random input. ```python test = torch.randint(0, 100, (1, 10)).to("mps") predictions = model(input_ids=test) ``` Then an error is thrown ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In [27], line 1 ----> 1 predictions = model(input_ids=test) 3 predictions.logits File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs) 1126 # If we don't have any hooks, we want to skip the rest of the logic in 1127 # this function, and just call forward. 
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:1046, in GPT2LMHeadModel.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, use_cache, output_attentions, output_hidden_states, return_dict)
   1038 r"""
   1039 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
   1040     Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
   1041     `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`
   1042     are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`
   1043 """
   1044 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1046 transformer_outputs = self.transformer(
   1047     input_ids,
   1048     past_key_values=past_key_values,
   1049     attention_mask=attention_mask,
   1050     token_type_ids=token_type_ids,
   1051     position_ids=position_ids,
   1052     head_mask=head_mask,
   1053     inputs_embeds=inputs_embeds,
   1054     encoder_hidden_states=encoder_hidden_states,
   1055     encoder_attention_mask=encoder_attention_mask,
   1056     use_cache=use_cache,
   1057     output_attentions=output_attentions,
   1058     output_hidden_states=output_hidden_states,
   1059     return_dict=return_dict,
   1060 )
   1061 hidden_states = transformer_outputs[0]
   1063 # Set device for model parallelism

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:889, in GPT2Model.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions, output_hidden_states, return_dict)
    879 outputs = torch.utils.checkpoint.checkpoint(
    880     create_custom_forward(block),
    881     hidden_states,
   (...)
    886     encoder_attention_mask,
    887 )
    888 else:
--> 889 outputs = block(
    890     hidden_states,
    891     layer_past=layer_past,
    892     attention_mask=attention_mask,
    893     head_mask=head_mask[i],
    894     encoder_hidden_states=encoder_hidden_states,
    895     encoder_attention_mask=encoder_attention_mask,
    896     use_cache=use_cache,
    897     output_attentions=output_attentions,
    898 )
    900 hidden_states = outputs[0]
    901 if use_cache is True:

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:389, in GPT2Block.forward(self, hidden_states, layer_past, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions)
    387 residual = hidden_states
    388 hidden_states = self.ln_1(hidden_states)
--> 389 attn_outputs = self.attn(
    390     hidden_states,
    391     layer_past=layer_past,
    392     attention_mask=attention_mask,
    393     head_mask=head_mask,
    394     use_cache=use_cache,
    395     output_attentions=output_attentions,
    396 )
    397 attn_output = attn_outputs[0]  # output_attn: a, present, (attentions)
    398 outputs = attn_outputs[1:]

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:311, in GPT2Attention.forward(self, hidden_states, layer_past, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions)
    309     attention_mask = encoder_attention_mask
    310 else:
--> 311     query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2)
    313 query = self._split_heads(query, self.num_heads, self.head_dim)
    314 key = self._split_heads(key, self.num_heads, self.head_dim)

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/pytorch_utils.py:112, in Conv1D.forward(self, x)
    110 def forward(self, x):
    111     size_out = x.size()[:-1] + (self.nf,)
--> 112     x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
    113     x = x.view(size_out)
    114     return x

RuntimeError: tensors must be 2-D
```

### Expected behavior

If we change the model to device `cuda` or `cpu`:

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-large").to("cpu").eval()
test = torch.randint(0, 100, (1, 10)).to("cpu")
predictions = model(input_ids=test)
```

This works and produces a prediction vector.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20415/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20414
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20414/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20414/comments
https://api.github.com/repos/huggingface/transformers/issues/20414/events
https://github.com/huggingface/transformers/issues/20414
1,461,713,243
I_kwDOCUB6oc5XH_lb
20,414
ASR Pipeline is not super user-friendly
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "One additional point! We can't pass generation kwargs to the `generate` method:\r\nhttps://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/pipelines/automatic_speech_recognition.py#L369-L372\r\n\r\nThis means our stdout is bombarded with UserWarnings from the `generate` method:\r\n```\r\n/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py:1364: UserWarning: Neither `max_length` nor `max_new_tokens` has been set, `max_length` will default to 448 (`self.config.max_length`). Controlling `max_length` via the config is deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\r\n```\r\n\r\nWould be nice to be able to override generation kwargs to prevent these messages and have flexibility over max length, beams, temperature, length penalty, etc\r\n\r\ncc @Vaibhavs10 ", "Just went through the code in more-detail and found that \"array\" is pop'd from the input dict!\r\nhttps://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/pipelines/automatic_speech_recognition.py#L278-L280\r\n\r\nMaybe we can add this to the docstring to highlight!", "Multiple points:\r\n\r\n> However, pipeline expects the audio samples in the format\r\n\r\nas far as I remember we can also accept `array` for that reason. (`raw` came before `datasets` had normalized iirc so that's the reason for the discrepancy, but since we don't break, neither is going to go away in pipeline I'm afraid.\r\nThe problem is not `array` it's `audio`. 
See more in the docs about `KeyDataset` (or the iterator which I think is more elegant, but it lacks the number of items) : https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline \r\n\r\n> It would be nice if pipeline returned a [ModelOutput](https://github.com/huggingface/transformers/blob/1c6309bf79c76b45de2266c586caccbfbc8ef958/src/transformers/utils/generic.py#L190) class. That way, we could index the text column directly from the returned object:\r\n\r\nThis is not going to happen for reasons I'll explain in following points\r\n\r\n> We can't pass generation kwargs to the generate method:\r\n\r\nWe can add it as a `generate_kwargs` but I think we wanted to change the configs instead of the affected model (which were not defined for whisper I think) @ArthurZucker . If `max_length` is the actual maximal capacity of the model, everything should be fine, no warnings no nothing.\r\n\r\nWe could also make the warning appear only once. @sgugger since reducing noise seems something desirable.\r\n\r\nBeing able to send `generate_kwargs` would still be nice. (Careful I'm meaning `pipe(..., generate_kwargs={\"max_new_tokens\":20})` NOT `pipe(...., max_new_tokens=20)` the reason is because generate has clashed in the past with tokenizer kwargs for instance and it's impossible to distentangle after the fact. That's for passing generic kwargs (all of them through time and eternity), but we can definitely add some first class parameters (like `max_new_tokens` for instance).\r\n\r\n> Maybe we can add this to the docstring to highlight!\r\n\r\nTotally !\r\n\r\n> dataset = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation[:10]\")\r\n\r\n\r\nI highly recommend NOT loading the entire array of the datasets in memory when working on datasets. 
That means NOT passing around lists, and not being able to batch with `ModelOutputs`.\r\n\r\nThat because objects are meant to be consumed one by one in an iterable fashion.\r\nThis is true for datasets, but also for webservers, you can have pretty much the same code, do dynamic batching and such crazy stuff and still keep the code the same for instance.\r\nThis is not relevant for `dataset.map` since it does the slicing and batching on its own, but it is relevant when `pipe.preprocess` can leverage the streaming mode to compute multiple things at once. \r\n\r\nUsing generator and streams is much more efficient (and the pipeline will actually do the batching too, passing around lists to the pipeline will NOT batch things. ( More on batching in pipelines : https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching)\r\n\r\nHere is the recommendation from the docs: https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline (Still need to upgrade that part to make it to the tutorial).\r\n\r\nHere is a gist of few several examples: https://gist.github.com/Narsil/4f5b088f4dd23200d16dd2cc575fdc16\r\n\r\n\r\n\r\n```python\r\nMethod 1 (pipe) 0:00:00.294485\r\nMethod 2 (dataset) 0:00:00.308238\r\nMethod 3 (raw file) 0:00:00.635527\r\n```\r\n\r\nThe 5% speedup is pretty consistent on this smallish data. \r\n\r\nMethod 3 is slower, but because you don't need to decode the audio files within the dataset, this can save some disk space (at a compute cost). 
Keep in mind the `num_workers=1` means the actual decompression of audio files happens in a different thread (and even process since we're relying on ffmpeg for it).\r\n\r\nI tried actually batching inputs, but it seems it's detrimental in this case (just add `, batch_size=2` during pipeline initialization).\r\nMethod 1 is 20% faster than method 2 with actual batching, but 50% slower than without :\r\n\r\nhttps://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching for more info on why batching can hurt.\r\n\r\nI had to add a \"warmup\" to do fair comparisons, it seems `dataset` is decompressing the flies on first access (it's my best guess) and it seems to do it slower than the raw pipeline (it's because of the threading and because librosa is actually slower that raw ffmpeg, I think, at least I remember it was \"slow\" to decompress).\r\n\r\n\r\nHappy to discuss further how to make the integration easier. I should mention that `KeyDataset` is probably the nicest to use as it should keep the length, it's just one weird import away\r\n\r\n```python\r\nfrom transformers.pipelines.pt_utils import KeyDataset\r\n\r\n...\r\nfor out in pipe(KeyDataset(dataset, \"audio\")):\r\n pass\r\n```\r\nIt has the same performance as method1 but plays better with `tqdm`. It's just less flexible imo.\r\n", "Thanks for the super in-depth explanation, @Narsil! Incredibly helpful and much appreciated 🤗\r\n\r\nMaybe I'm missing the point a bit with why pipelines exist - are they geared more towards maximising performance for inference (or at least giving you the option to)? Rather than just being a nice wrapper around the feature extractor, model and tokenizer?\r\n\r\nSounds good regarding:\r\n1. Updating the doc string to reflect the fact that we can pass `array` as well as `raw` as the keys for audio input\r\n2. Passing the **gen kwargs** as a specified dict to the generate method\r\n\r\nThanks for explaining why `ModelOutputs` is not viable! 
It makes sense using a generator and streams, rather than throwing a list into pipe.\r\n\r\n> (Still need to upgrade that part to make it to the tutorial).\r\n\r\nIs there a tutorial that's been published or a WIP? That'll be super handy!\r\n\r\n> Here is a gist of few several examples: https://gist.github.com/Narsil/4f5b088f4dd23200d16dd2cc575fdc16\r\n\r\nSuper comprehensive, thanks for these benchmarks! Interesting to see how `.map` compares to the generator method! \r\n\r\n> I should mention that KeyDataset is probably the nicest to use as it should keep the length, it's just one weird import away\r\n\r\nThanks for flagging this! I had a follow-up question - are there docs / examples for using pipe when loading a dataset in streaming mode? Here, we can't use KeyDataset (as we can't index a streamed dataset):\r\n\r\nhttps://github.com/huggingface/transformers/blob/afce73bd9d891b55dcb8d4d875d17718ffa01ff0/src/transformers/pipelines/pt_utils.py#L305\r\n\r\nIs the best option just to go for a generator here?\r\n\r\n```python\r\ndef data():\r\n for i, sample in enumerate(dataset):\r\n yield sample[\"audio\"]\r\n\r\noutput = []\r\nfor out in pipe(data(), batch_size=2):\r\n output.append(out[\"text\"])\r\n```\r\n\r\nWith this generator method, we currently `yield` the audio samples which we pass to the pipe. Is there a way of iterating over the streaming dataset to get the target transcriptions (`sample[\"text\"]`) as well? Here, we would not need to pass the target text to the pipe, but simply return it in the generator. Ideally, want the target transcriptions `sample[\"text\"]` so that we can assess our predictions.\r\n\r\n(this is the actual example I'm working with: https://github.com/sanchit-gandhi/codesnippets/blob/main/benchmark_inference_whisper.ipynb)", "> Thanks for the super in-depth explanation, @Narsil! 
Incredibly helpful and much appreciated hugs\r\n\r\nWell you initial issue was also pretty comprehensive, so thanks for creating it.\r\n\r\n> Maybe I'm missing the point a bit with why pipelines exist - are they geared more towards maximising performance for inference (or at least giving you the option to)? Rather than just being a nice wrapper around the feature extractor, model and tokenizer?\r\n\r\nPipeline started without any real guidelines into what they should or should not do.\r\nCurrently the first and foremost goal is to **make ML accessible for users who have no clue what is a model or tensors**, it's the primary target.\r\nThat being said, being efficient for inference goes along since we don't want to provide a 10x slowdown experience for those users.\r\nIt's not the primary focus though, otherwise it would not be written in python, and it would not be that convenient :).\r\n\r\nLet's say there are 2 kinds of performance:\r\n- Don't do useless work (Remove crap code, or code which is not really useful, or work that's discarded, useless copies etc..)\r\n- Actual performance by making sure every inch of your hardware is properly used at the appropriate time. (Read understanding CPU instructions, looking a SIMD, optimizing threading layout, maximizing L1 cache hits, minimizing branching predictions, using custom GPU kernels, etc..)\r\n\r\nWe're only doing the first kind here. (Maybe a little of 2 for the GPU feeding that needs to be as fast as possible because CPU-GPU is a bottleneck really quick otherwise)\r\n\r\n\r\n> Is there a tutorial that's been published or a WIP? That'll be super handy!\r\n\r\nThere this tutorial https://huggingface.co/docs/transformers/pipeline_tutorial which I find less comprehensive than this https://huggingface.co/docs/transformers/main_classes/pipelines unfortunatly.\r\n\r\nI'm in the process of rewriting it, as it seems most people read only that. 
And you're not the first person to not be aware of those cool features, so I'd say it's a doc problem.\r\n\r\n> Super comprehensive, thanks for these benchmarks! Interesting to see how .map compares to the generator method!\r\n\r\nCan't tell you why there is a difference, but I can tell you I went to great length to optimize everything I could in the pipeline directly. (Only the first kind of optimization, and it's still written in Python so far from perfect but hey ... :) )\r\n\r\n> With this generator method, we currently yield the audio samples which we pass to the pipe. Is there a way of iterating over the streaming dataset to get the target transcriptions (sample[\"text\"]) as well? \r\n\r\nActually if you pass along other keys in your data, they should be passed along all the way to the result with the asr pipeline.\r\nI would like to be the case for all pipelines, but never got down to doing it.\r\nBut since it is streaming, yes you need to pass things around since otherwise it's tricky to start matching results with inputs at the end. \r\n\r\n```python\r\n\r\ndef data():\r\n for item in streaming_data:\r\n yield {**item[\"audio\"], \"expected\": item[\"text\"]}\r\n \r\nfor out in pipe(data()):\r\n generated = out[\"text\"]\r\n expected = out[\"expected\"]\r\n # Do you WER thing.\r\n```\r\n\r\nWould that work ? 
(I haven't tested this)\r\n\r\n\r\nIf it wasn't you could do\r\nSomething like that might be a useful hack though (Provided you're running in a single thread for the server looping).\r\n\r\n```python\r\nGLOBAL_INDEX = {}\r\n\r\ndef data():\r\n for i, item in enumerate(streaming_data):\r\n GLOBAL_INDEX[i] = item[\"text\"]\r\n yield item\r\n \r\nfor i, out in enumerate(pipe(data())):\r\n generated = out[\"text\"]\r\n expected = GLOBAL_INDEX.pop(i) # Pop will remove it enabling releasing memory\r\n # Do you WER thing.\r\n```\r\n", "Thank you again for the super comprehensive reply, really appreciate the time given to answering this thread!\r\n\r\n> make ML accessible for users who have no clue what is a model or tensors\r\n\r\nAwesome! Think it's fantastic in this regard. Having some easy examples that show you how to run pipeline in different scenarios / tasks like a little 'recipe' book would be great to further this.\r\n\r\n> otherwise it would not be written in python, and it would not be that convenient :)\r\n\r\nDid someone say Rust 👀\r\n\r\nThanks for linking the tutorials - I learnt quite a lot from this thread + docs after knowing where to look. I guess you have two camps of people that will be using pipeline:\r\n\r\n1. Those migrating from the transformers approach (feature extractor + model + processor)\r\n2. Those who don't use transformers\r\n\r\nFor me, it was making the link between my transformers approach and pipeline that made the penny drop. There's a bit of a different mindset which you have to adopt vs the usual datasets `.map` method. I think some more examples showing how to make actual transformers tasks work in pipeline would go a long way! In this regard, your updated tutorial looks amazing (doing exactly this)! Happy to do a pass of the PR when it's in a review ready state!\r\n\r\n> Would that work ? 
(I haven't tested this)\r\n\r\nIt did indeed work, thanks 🙌", "I think we should definitely try to avoid by default displaying warnings when running the ASRPipeline. \r\nAlso, since Whisper is a Encoder-Decoder model architecture the main use case for speech recognition might soon switch from Wav2Vec2CTC to Encoder-Decoder => thus we should also try to adapt the ASR pipeline into this direction. \r\n\r\n**Short term:**\r\nLet's try to not display any warnings by default & I agree with @sanchit-gandhi - it'd also be nice to pipelines to be directly used in combination with datasets. Could we maybe adapt the pipeline API from:\r\n```\r\n{\"sampling_rate\": int, \"raw\": np.array}\r\n```\r\nto\r\n```\r\n{\"sampling_rate\": int, \"raw\": Optional[np.array], \"array\": Optional[np.array]}\r\n```\r\nto just allow both use cases? What is the big drawback of this?\r\n\r\n**Mid/Long term**\r\nAs discussed with @sgugger and @sanchit-gandhi already a bit, I think we should really think about creating a new generate method just for audio. The current `generate` method is a) too bloated and b) just not adapted for speech recognition. Chunking, long-range audio recognition, streamed audio recognition are much more of a required use case for speech recognition then for NLP. Also we could design the new generate method to be future compatible with models like the Transducer. \r\n\r\nThis would then also render the ASR pipeline much easier IMO. ", "> What is the big drawback of this?\r\n\r\nThis is already done, it's a doc issue. And specifically for sanchit, datasets are using `{\"audio\" : {\"sampling_rate\": .., \"audio\": ..}}` instead of the inner dict.\r\n\r\n> The current generate method is a) too bloated and b) just not adapted for speech recognition.\r\n\r\nTrue, I have potential suggestions for it, which mainly are going full on Processor/StoppingCriteria route. This is what was necessary to enable complex batching within bloom inference. 
\r\nSplitting specifically for audio might be necessary but I am under the impression it's only a matter of defaults for those objects.", "Maybe a bigger discussion, but could it make sense to move some more complicated tasks such as real-time speech recognition to something like: https://github.com/huggingface/speechbox ? ", "For cases like realtime ASR more optimized methods, for example as rust modules, would be super cool.\r\nMaybe with functionality for community pipelines as in diffusers, just for speech ?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,682
1,682
CONTRIBUTOR
null
### Feature request

Firstly, thank you to @Narsil for developing the speech recognition pipeline - it's incredibly helpful for running the full speech-to-text mapping in one call, pre- and post-processing included. There are a couple of things that currently mean the pipeline is not super compatible with 🤗 Datasets. I'll motivate them below with an example.

### Motivation

Let's take the example of evaluating a (dummy) Wav2Vec2 checkpoint on the (dummy) LibriSpeech ASR dataset:

```python
from transformers import pipeline
from datasets import load_dataset

pipe = pipeline("automatic-speech-recognition", model="hf-internal-testing/tiny-random-wav2vec2")
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")
```

Printing the first audio sample of the dataset:

```python
print(dataset[0]["audio"])
```

**Print Output:**

```
{'path': '/home/sanchit_huggingface_co/.cache/huggingface/datasets/downloads/extracted/0393f71a8093c6541f95c89f60982213cf086569876e1195926741f097ad47fc/dev_clean/1272/128104/1272-128104-0000.flac', 'array': array([0.00238037, 0.0020752 , 0.00198364, ..., 0.00042725, 0.00057983, 0.0010376 ], dtype=float32), 'sampling_rate': 16000}
```

So the audios are in the format `{"path": str, "array": np.array, "sampling_rate": int}`. The np audio array values are stored under the key "array". This format is ubiquitous across audio datasets in 🤗 Datasets: all audio datasets take this format.

However, pipeline expects the audio samples in the format `{"sampling_rate": int, "raw": np.array}`:
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/pipelines/automatic_speech_recognition.py#L209-L211

This means we have to do some hacking around to get the audio samples into the right format for pipeline:

```python
def predict(batch):
    audios = batch["audio"]
    # hacky renaming
    audios = [{"raw": sample["array"], "sampling_rate": sample["sampling_rate"]} for sample in audios]
    predictions = pipe(audios)
    # unpack and index predictions (List[Dict])
    batch["predictions"] = [pred["text"] for pred in predictions]
    return batch
```

And then apply the function to our dataset using the `map` method:

```python
batch_size = 4

result_set = dataset.map(
    predict,
    batched=True,
    batch_size=batch_size,
    remove_columns=dataset.features.keys(),
)
```

If pipeline's `__call__` method was matched to Datasets' audio features, we'd be able to use any audio dataset **directly** with pipeline (no hacky feature renaming):

```python
def hypothetical_predict(batch):
    predictions = pipe(batch["audio"])
    batch["predictions"] = [pred["text"] for pred in predictions]
    return batch
```

This would be very nice for the user!

Furthermore, the outputs returned by pipeline are a list of dicts (`List[Dict]`):
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/pipelines/automatic_speech_recognition.py#L477

This means we have to unpack and index them before we can use them for any downstream use (such as WER calculations). It would be nice if pipeline returned a [`ModelOutput`](https://github.com/huggingface/transformers/blob/1c6309bf79c76b45de2266c586caccbfbc8ef958/src/transformers/utils/generic.py#L190) class. That way, we could index the text column directly from the returned object:

```python
def hypothetical_predict(batch):
    batch["predictions"] = pipe(batch["audio"]).text
    return batch
```

IMO this is more intuitive to the user than renaming their audio column and then iterating over the returned Dict object to get the predicted text.

### Your contribution

WDYT @Narsil @patrickvonplaten? Happy to add these changes to smooth out the user experience!
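The pass-through pattern suggested in the discussion above (yielding extra keys, such as the reference transcription, alongside the audio so they come back out with the pipeline's output) can be sketched in plain Python. The `fake_pipe` below is a hypothetical stand-in for the real ASR pipeline, used only to show how unknown keys can survive the generator round-trip; the sample data is likewise made up for illustration:

```python
def stream_with_targets(dataset):
    # Yield pipeline-style inputs, carrying the reference text along as an extra key.
    for item in dataset:
        yield {**item["audio"], "expected": item["text"]}

def fake_pipe(stream):
    # Hypothetical stand-in for the ASR pipeline: "transcribes" by upper-casing the
    # raw input, and passes any keys it does not consume through to the output.
    for sample in stream:
        passthrough = {k: v for k, v in sample.items() if k not in ("raw", "sampling_rate")}
        yield {"text": sample["raw"].upper(), **passthrough}

samples = [
    {"audio": {"raw": "hello world", "sampling_rate": 16000}, "text": "hello world"},
    {"audio": {"raw": "foo bar", "sampling_rate": 16000}, "text": "foo bar"},
]

for out in fake_pipe(stream_with_targets(samples)):
    prediction, reference = out["text"], out["expected"]  # e.g. accumulate WER here
```

Because everything is a generator, nothing is held in memory beyond the current sample, which is the property the streaming recommendation above relies on.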
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20414/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20413
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20413/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20413/comments
https://api.github.com/repos/huggingface/transformers/issues/20413/events
https://github.com/huggingface/transformers/issues/20413
1,461,682,667
I_kwDOCUB6oc5XH4Hr
20,413
DeBERTa-v2's build_relative_position method initializes tensor on cpu and costs much time
{ "login": "qq775294390", "id": 30080441, "node_id": "MDQ6VXNlcjMwMDgwNDQx", "avatar_url": "https://avatars.githubusercontent.com/u/30080441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qq775294390", "html_url": "https://github.com/qq775294390", "followers_url": "https://api.github.com/users/qq775294390/followers", "following_url": "https://api.github.com/users/qq775294390/following{/other_user}", "gists_url": "https://api.github.com/users/qq775294390/gists{/gist_id}", "starred_url": "https://api.github.com/users/qq775294390/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qq775294390/subscriptions", "organizations_url": "https://api.github.com/users/qq775294390/orgs", "repos_url": "https://api.github.com/users/qq775294390/repos", "events_url": "https://api.github.com/users/qq775294390/events{/privacy}", "received_events_url": "https://api.github.com/users/qq775294390/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We'd be happy to review a PR if you want to fix this :-)", "> We'd be happy to review a PR if you want to fix this :-)\r\n\r\nThanks for replying, I've created a PR for this issue :-)" ]
1,669
1,669
1,669
CONTRIBUTOR
null
### System Info

- `transformers` version: 4.24.0
- Platform: Linux-4.14.105-1-tlinux3-0013-x86_64-with-glibc2.10
- Python version: 3.8.3
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help?

@LysandreJik

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Hello, I am using DeBERTa-v2 in my code, and it runs slowly. With `torch.profile`, I find the `Self CPU time` (more than 1.7s) is much larger than the `Self CUDA time` (about 30ms). Most of the CPU time comes from the function `build_relative_position`, where two tensors are initialized without specifying their `device`. So the two tensors and the following computations run on CPU, which costs much time.

https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#592:~:text=q_ids%20%3D%20torch,%2C%20key_size)

![image](https://user-images.githubusercontent.com/30080441/203538636-a0698855-af21-45b0-b2fa-a344620d15d3.png)
![image](https://user-images.githubusercontent.com/30080441/203538516-eec6a4ce-9a95-49d2-9b41-debd3a993413.png)

My test script is a simple forward process with `torch.profile` to get the time cost.

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline, BertTokenizer, AutoConfig, AutoModel, DebertaV2ForMaskedLM

model_path =
data_path =
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path).to("cuda")
data = open(data_path).readlines()
max_length, batch_size = 256, 16

with torch.no_grad():
    for step in range(10):
        L = data[step * batch_size: step * batch_size + batch_size]
        batch_inputs = {i: torch.tensor(j).to('cuda') for (i, j) in tokenizer(L, max_length=max_length, padding='max_length').items()}
        with profile(
            activities=[ProfilerActivity.CUDA, ProfilerActivity.CPU],
            with_stack=True,
        ) as prof:
            outputs = model(**batch_inputs)
        print(prof.key_averages(group_by_stack_n=15, group_by_input_shape=True).table(sort_by="self_cpu_time_total", row_limit=2))
```

### Expected behavior

I hope the tensors in the function `build_relative_position` are created on the proper device (CUDA if accessible) when initialized, instead of moving the results with `.to(xxx.device)` after some computation. In my modified code, where the two tensors are initialized on CUDA, the CPU time cost is reduced from about 1s to about 77ms on my machine.

![image](https://user-images.githubusercontent.com/30080441/203542086-18226d81-5b44-4135-af61-907bf8ea148a.png)
![image](https://user-images.githubusercontent.com/30080441/203542455-f3279dd9-289f-4d61-9635-f8ffe00bc682.png)
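The computation whose placement the issue is about can be sketched without any dependencies: `build_relative_position` produces a grid of signed distances `rel[i][j] = i - j` between query and key positions. The list-based version below is only an illustration of the math; in the actual `modeling_deberta_v2.py` the ids are `torch.arange` tensors, and the proposed fix is to pass `device=` at the point those tensors are created, so the subtraction never runs on CPU:

```python
def build_relative_position(query_size, key_size):
    # rel_pos_ids[i][j] is the signed distance from query position i to key position j.
    # The torch version computes q_ids and k_ids with torch.arange and broadcasts the
    # subtraction; creating them with device=<gpu device> keeps everything on the GPU.
    q_ids = list(range(query_size))
    k_ids = list(range(key_size))
    return [[q - k for k in k_ids] for q in q_ids]
```

For example, `build_relative_position(2, 3)` yields `[[0, -1, -2], [1, 0, -1]]`.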
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20413/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20412
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20412/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20412/comments
https://api.github.com/repos/huggingface/transformers/issues/20412/events
https://github.com/huggingface/transformers/pull/20412
1,461,645,630
PR_kwDOCUB6oc5Djm_k
20,412
Add run_mim_no_trainer.py example script
{ "login": "Saad135", "id": 22683922, "node_id": "MDQ6VXNlcjIyNjgzOTIy", "avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saad135", "html_url": "https://github.com/Saad135", "followers_url": "https://api.github.com/users/Saad135/followers", "following_url": "https://api.github.com/users/Saad135/following{/other_user}", "gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saad135/subscriptions", "organizations_url": "https://api.github.com/users/Saad135/orgs", "repos_url": "https://api.github.com/users/Saad135/repos", "events_url": "https://api.github.com/users/Saad135/events{/privacy}", "received_events_url": "https://api.github.com/users/Saad135/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20412). All of your documentation changes will be reflected on that endpoint.", "@NielsRogge can I get some feedback on the code so far. Thanks :-). Could you also tell me if there are any specific tests that I need to run for this case.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge friendly ping on this PR.", "Hi @Saad135 the initial draft looks great already!\r\n\r\nLet me know if you need any help finishing this PR.", "Hello @NielsRogge, thank you for checking out the draft. I would appreciate if you could point me towards the next steps I should take. I mean, should I make the draft ready for review or should I run some specific tests or maybe something else? I am still quite new to OS contributions, so the next step might be a very simple one which is not apparent to me right now.", "I'd recommend running both the trainer.py and no_trainer.py scripts on a small dataset and see if they progress similarly.\r\n\r\nOnce that's done, you can mark the PR as ready for review", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Cc @amyeroberts, this PR should be ready for review.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@Saad135 - any updates on this PR? Once feature extractor references have been updated to image processor and the check Niels suggested have been done it should be ready to merge :) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@amyeroberts This PR seems to be have been stalled and closed due to inactivity. I have taken it over in PR #23156 to complete it. " ]
1,669
1,683
1,680
CONTRIBUTOR
null
# What does this PR do? Adds a no_trainer example script for image pretraining. Relates to https://github.com/huggingface/transformers/issues/20053 @NielsRogge, It is still incomplete but could you please have a look at the code so far to see if something needs to be changed :-) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20412/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20412", "html_url": "https://github.com/huggingface/transformers/pull/20412", "diff_url": "https://github.com/huggingface/transformers/pull/20412.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20412.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20411
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20411/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20411/comments
https://api.github.com/repos/huggingface/transformers/issues/20411/events
https://github.com/huggingface/transformers/pull/20411
1,461,637,262
PR_kwDOCUB6oc5DjlMA
20,411
`accelerate` support for `OwlViT`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks everyone! " ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do?

This PR adds `accelerate` support for `OwlViT` so that any model from this family can be loaded in `8-bit`! Here is a small snippet on how to load and run the model in `8-bit` (based on the snippet from the model card):

```
# pip install accelerate bitsandbytes
import requests
from PIL import Image
import torch

from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32", device_map="auto", load_in_8bit=True)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)

# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process(outputs=outputs, target_sizes=target_sizes)

i = 0  # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]

# Print detected objects and rescaled box coordinates
score_threshold = 0.1
for box, score, label in zip(boxes, scores, labels):
    box = [round(i, 2) for i in box.tolist()]
    if score >= score_threshold:
        print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")

>>> Detected a photo of a cat with confidence 0.705 at location [321.25, 19.8, 643.12, 376.88]
>>> Detected a photo of a cat with confidence 0.729 at location [0.94, 53.55, 319.69, 473.91]
```

Also added a slow test to make sure users can run inference in `fp16` (`8bit` converts the model in `fp16` under the hood).

All slow tests pass except the one mentioned in #20410, which should be fixed once that PR is merged.

cc @alaradirik @sgugger @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20411/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20411", "html_url": "https://github.com/huggingface/transformers/pull/20411", "diff_url": "https://github.com/huggingface/transformers/pull/20411.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20411.patch", "merged_at": 1669371645000 }
https://api.github.com/repos/huggingface/transformers/issues/20410
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20410/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20410/comments
https://api.github.com/repos/huggingface/transformers/issues/20410/events
https://github.com/huggingface/transformers/pull/20410
1,461,582,305
PR_kwDOCUB6oc5DjZBf
20,410
[OWL VIT] make daily CI happy
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @ydshieh , just updated the description! \r\nYes without this change, the test would fail on GPU (`accelerate` tests excluded)" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Fixes a slow test for OWLViT while trying to integrate `accelerate` support for this model! cc @ydshieh @alaradirik Slow test that was failing: `tests/models/owlvit/test_modeling_owlvit.py::OwlViTModelIntegrationTest::test_inference_one_shot_object_detection` all slow tests pass!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20410/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20410", "html_url": "https://github.com/huggingface/transformers/pull/20410", "diff_url": "https://github.com/huggingface/transformers/pull/20410.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20410.patch", "merged_at": 1669209897000 }
https://api.github.com/repos/huggingface/transformers/issues/20409
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20409/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20409/comments
https://api.github.com/repos/huggingface/transformers/issues/20409/events
https://github.com/huggingface/transformers/pull/20409
1,461,556,549
PR_kwDOCUB6oc5DjTK1
20,409
[BNB] Throw `ValueError` when trying to cast or assign
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "What happens if we don't have the changes in this PR and users try to do assign 8-bit loaded models into a new device and/or dtype?", "Users can face unexpected behaviors such as: https://github.com/huggingface/transformers/issues/20361#issuecomment-1324113579" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? I have seen several pieces of code where users try to assign `8-bit` loaded models to a new device and/or `dtype`; this PR throws an error to users who do this. Also added a set of slow tests. cc @sgugger @ydshieh Do not merge before #20408
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20409/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20409", "html_url": "https://github.com/huggingface/transformers/pull/20409", "diff_url": "https://github.com/huggingface/transformers/pull/20409.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20409.patch", "merged_at": 1669215110000 }
https://api.github.com/repos/huggingface/transformers/issues/20408
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20408/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20408/comments
https://api.github.com/repos/huggingface/transformers/issues/20408/events
https://github.com/huggingface/transformers/pull/20408
1,461,555,696
PR_kwDOCUB6oc5DjS-J
20,408
[BNB] fix nasty `bnb` bug
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do?

This PR fixes a very nasty bug that can be reproduced with the following script:

```
from transformers import AutoModel

model_16bit = AutoModel.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=False)
model_8bit = AutoModel.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)

print(model_16bit.is_loaded_in_8bit)
>>> True
```

In fact we should assign the attribute `is_loaded_in_8bit` to the variable `model` instead of `cls`. Otherwise `is_loaded_in_8bit` will be overridden by the next model that is loaded in 8-bit.

@sgugger @ydshieh
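The class-versus-instance attribute mechanism behind this bug can be shown without `transformers` at all. The sketch below is a framework-free illustration (the `PretrainedModelSketch` class and the two `from_pretrained_*` helpers are hypothetical names, not real library code): writing the flag through the class leaks it into every previously created instance, while writing it through the instance does not.

```python
class PretrainedModelSketch:
    # Class-level default, shared by all instances until shadowed.
    is_loaded_in_8bit = False

def from_pretrained_buggy(load_in_8bit):
    model = PretrainedModelSketch()
    # BUG: assigning via the class mutates the shared class attribute,
    # so the flag set for one model is visible on every other model.
    PretrainedModelSketch.is_loaded_in_8bit = load_in_8bit
    return model

def from_pretrained_fixed(load_in_8bit):
    model = PretrainedModelSketch()
    # FIX: assigning via the instance creates an instance attribute that
    # shadows the class default for this object only.
    model.is_loaded_in_8bit = load_in_8bit
    return model
```

Reading an attribute falls through to the class when the instance has none of its own, which is why the buggy version makes an earlier 16-bit model report `is_loaded_in_8bit == True` after a later 8-bit load.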
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20408/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20408", "html_url": "https://github.com/huggingface/transformers/pull/20408", "diff_url": "https://github.com/huggingface/transformers/pull/20408.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20408.patch", "merged_at": 1669210268000 }
https://api.github.com/repos/huggingface/transformers/issues/20407
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20407/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20407/comments
https://api.github.com/repos/huggingface/transformers/issues/20407/events
https://github.com/huggingface/transformers/pull/20407
1,461,545,989
PR_kwDOCUB6oc5DjRCT
20,407
[AutoBackbone] Improve API
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Pinging @ydshieh on this PR as I made 2 updates to test_modeling_common.py to support models which output a tuple of tensors as their first output. I've updated `test_determinism` and `test_save_load` to make them more general.", "@michaelbenayoun as seen on the CI, the torch fx tests fail because backbones aren't supported yet.\r\n\r\nCould you add support for them in a separate PR?", "> Pinging @ydshieh on this PR as I made 2 updates to test_modeling_common.py to support models which output a tuple of tensors as their first output. I've updated `test_determinism` and `test_save_load` to make them more general.\r\n\r\nLooks good for me regarding the tests.\r\n\r\nI would personally put the check `isinstance(first, tuple)` inside the `check_xxx` methods, and call them recursively.\r\nThis way we don't need to worry if each element in the list/tuple would contain list/tuple. But no obligation for now.", "@ydshieh for some reason the CI isn't run when I push new commits, do you know why?", "Not really, try again with a new empty commit?\r\n\r\ngit commit --allow-empty -m \"empty commit to trigger CI\"\r\n", "@ydshieh Thanks a lot! @sgugger feel free to approve :)" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? As backbones themselves have hidden states and optional attentions, this PR adds them to the `BackboneOutput`. This way, frameworks that leverage backbones can return hidden states/attentions of the backbone if the user specifies `output_hidden_states=True` or `output_attentions=True`. To do: - [x] perhaps we should test backbones with all common tests
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20407/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20407", "html_url": "https://github.com/huggingface/transformers/pull/20407", "diff_url": "https://github.com/huggingface/transformers/pull/20407.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20407.patch", "merged_at": 1669652425000 }
https://api.github.com/repos/huggingface/transformers/issues/20406
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20406/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20406/comments
https://api.github.com/repos/huggingface/transformers/issues/20406/events
https://github.com/huggingface/transformers/pull/20406
1,461,501,443
PR_kwDOCUB6oc5DjHHP
20,406
[Image Transformers] to_pil fix float edge cases
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes a quite nasty type checking bug: https://github.com/huggingface/transformers/issues/20394 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20406/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20406", "html_url": "https://github.com/huggingface/transformers/pull/20406", "diff_url": "https://github.com/huggingface/transformers/pull/20406.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20406.patch", "merged_at": 1669207679000 }
https://api.github.com/repos/huggingface/transformers/issues/20405
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20405/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20405/comments
https://api.github.com/repos/huggingface/transformers/issues/20405/events
https://github.com/huggingface/transformers/pull/20405
1,461,496,640
PR_kwDOCUB6oc5DjGCq
20,405
Correct rescale
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20405). All of your documentation changes will be reflected on that endpoint." ]
1,669
1,669
1,669
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20405/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20405", "html_url": "https://github.com/huggingface/transformers/pull/20405", "diff_url": "https://github.com/huggingface/transformers/pull/20405.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20405.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20404
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20404/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20404/comments
https://api.github.com/repos/huggingface/transformers/issues/20404/events
https://github.com/huggingface/transformers/issues/20404
1,461,468,361
I_kwDOCUB6oc5XHDzJ
20,404
Failure when using FeatureExtractionPipeline for inference of t5-small
{ "login": "tonifuc3m", "id": 46200970, "node_id": "MDQ6VXNlcjQ2MjAwOTcw", "avatar_url": "https://avatars.githubusercontent.com/u/46200970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tonifuc3m", "html_url": "https://github.com/tonifuc3m", "followers_url": "https://api.github.com/users/tonifuc3m/followers", "following_url": "https://api.github.com/users/tonifuc3m/following{/other_user}", "gists_url": "https://api.github.com/users/tonifuc3m/gists{/gist_id}", "starred_url": "https://api.github.com/users/tonifuc3m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tonifuc3m/subscriptions", "organizations_url": "https://api.github.com/users/tonifuc3m/orgs", "repos_url": "https://api.github.com/users/tonifuc3m/repos", "events_url": "https://api.github.com/users/tonifuc3m/events{/privacy}", "received_events_url": "https://api.github.com/users/tonifuc3m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Do you have `sentencepiece` installed ?\r\n\r\n`pip install sentencepiece` should fix it (as suggests the last line of the error).", "Thanks for the reply.\r\nInstalling `sentencepiece` could solve the error when loading `google/mt5-small`.\r\nHowever, the error when using the other T5 models (`t5-small` and `google/flan-t5-base`) is not related to `sentencepiece`. ", "```\r\n(1) a `tokenizers` library serialization file, \r\n(2) a slow tokenizer instance to convert or \r\n(3) an equivalent slow tokenizer class to instantiate and convert. \r\nYou need to have sentencepiece installed to convert a slow tokenizer to a fast one.\r\n```\r\n\r\nEither you don't have `tokenizers` installed, or `sentencepiece`. It works perfectly in my environement.\r\n\r\nCan you try this:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/mt5-small\") # This will use `tokenizers` but there is a warning about byte-fallback\r\n# OR \r\ntokenizer = AutoTokenizer.from_pretrained(\"google/mt5-small\", use_fast=False) # This will use sentencepiece\r\n```", "Thank you for the response.\r\n\r\nI have checked and both `tokenizers` and `sentencepiece` are now installed in my environment.\r\n```\r\n$ python -c \"import sentencepiece\"\r\n$ python -c \"import tokenizers\"\r\n```\r\n\r\n\r\nAdditionally, all these tests work:\r\n1. Initializing the tokenizers\r\n```\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/mt5-small\") # This will use `tokenizers` but there is a warning about byte-fallback\r\n# OR \r\ntokenizer = AutoTokenizer.from_pretrained(\"google/mt5-small\", use_fast=False) # This will use sentencepiece\r\n```\r\n\r\n2. Executing the fast and slow tokenizers :\r\n```\r\nsentence = \"This is John\"\r\ntokenizer(sentence)\r\n{'input_ids': [1494, 339, 4040, 1], 'attention_mask': [1, 1, 1, 1]}\r\n```\r\n\r\n3. 
Initializing the feature extraction pipeline with models `google/mt5-small`, `t5-small` and `google/flan-t5-base`. \r\n\r\nHowever, **the error arises when executing the feature extraction pipeline** with the models `google/mt5-small`, `t5-small` and `google/flan-t5-base`:\r\n```\r\nfrom transformers import pipeline\r\nfeature_extraction = pipeline('feature-extraction', model=\"google/mt5-small\")\r\nsentence = \"This is John.\"\r\nfeature_extraction(sentence) # This line breaks\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py\", line 92, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 1074, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 1081, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 990, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py\", line 70, in _forward\r\n model_outputs = self.model(**model_inputs)\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return 
forward_call(*input, **kwargs)\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py\", line 1435, in forward\r\n decoder_outputs = self.decoder(\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py\", line 937, in forward\r\n raise ValueError(f\"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds\")\r\nValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds\r\n```\r\n\r\nPS: In case it is relevant, my protobuf version is 3.19.1 (I checked it via `$ protoc --version` and via `$ python -m pip show protobuf`)", "Hello, yes `feature-extraction` doesn't really mean much for a encoder-decoder model.\r\n\r\nYou could use just the encoder, which is probably the closest you want. Keep in mind that models that were not intended for feature extraction will not necessarily be really good at it.\r\n\r\n```\r\npipe.model = pipe.model.encoder # This might depend on a model per model basis, but I think this is what you are looking for, you want just the encoder part of the model.\r\n```", "Thank you for the hack! Closing this issue :)" ]
1,669
1,669
1,669
NONE
null
### System Info - transformers version: '4.24.0' - platform: Ubuntu 20.04.5 LTS - Python version: 3.8.10 - Huggingface_hub version: '0.10.1' - torch version: '1.13.0+cu117' ### Who can help? @patrickvonplaten @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am comparing the embeddings of several very popular HF models (gpt2, bert-base-cased, etc). Right now, I want to use the FeatureExtractionPipeline to extract the embeddings of the T5 (small) model. ``` from transformers import pipeline feature_extraction = pipeline('feature-extraction', model="t5-small") sentence = "This is John" embeddings = feature_extraction(sentence) ``` Error message: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py", line 92, in __call__ return super().__call__(*args, **kwargs) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1074, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1081, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/base.py", line 990, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py", line 70, in _forward 
model_outputs = self.model(**model_inputs) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1435, in forward decoder_outputs = self.decoder( File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 937, in forward raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds") ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ``` This error occurs at well when using the model at `google/flan-t5-base`. However, when trying the model at `google/mt5-small`, an error arises when loading the pipeline (before trying to us it): ``` >>> feature_extraction = pipeline('feature-extraction', model="google/mt5-small") Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 553/553 [00:00<00:00, 393kB/s] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████| 1.20G/1.20G [01:09<00:00, 17.3MB/s] Some weights of the model checkpoint at google/mt5-small were not used when initializing MT5Model: ['lm_head.weight'] - This IS expected if you are initializing MT5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). 
- This IS NOT expected if you are initializing MT5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 82.0/82.0 [00:00<00:00, 44.2kB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████| 4.31M/4.31M [00:05<00:00, 721kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 99.0/99.0 [00:00<00:00, 4.37kB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 801, in pipeline tokenizer = AutoTokenizer.from_pretrained( File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 619, in from_pretrained return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1777, in from_pretrained return cls._from_pretrained( File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1932, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 134, in __init__ super().__init__( File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 120, in __init__ raise ValueError( 
ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one. ``` Is this pipeline not yet prepared for T5 models? ### Expected behavior I would expect the typical output of the pipeline (a list with a list of embeddings).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20404/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20404/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20403
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20403/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20403/comments
https://api.github.com/repos/huggingface/transformers/issues/20403/events
https://github.com/huggingface/transformers/issues/20403
1,461,410,143
I_kwDOCUB6oc5XG1lf
20,403
SwiGLU activation function
{ "login": "drAbreu", "id": 44996651, "node_id": "MDQ6VXNlcjQ0OTk2NjUx", "avatar_url": "https://avatars.githubusercontent.com/u/44996651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drAbreu", "html_url": "https://github.com/drAbreu", "followers_url": "https://api.github.com/users/drAbreu/followers", "following_url": "https://api.github.com/users/drAbreu/following{/other_user}", "gists_url": "https://api.github.com/users/drAbreu/gists{/gist_id}", "starred_url": "https://api.github.com/users/drAbreu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drAbreu/subscriptions", "organizations_url": "https://api.github.com/users/drAbreu/orgs", "repos_url": "https://api.github.com/users/drAbreu/repos", "events_url": "https://api.github.com/users/drAbreu/events{/privacy}", "received_events_url": "https://api.github.com/users/drAbreu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, I was able to solve the issue with the loading of the SwiGLU layers using the ugly fix of keeping the activation function definition in `EXcellRobertaConfig` as `gelu`, while adding a `swiglu` parameter that, it set to True, overrides activation function. \r\nI am not sure if this is a recommended procedure... Would expect that it is not. \r\n\r\nI would be happy to get any comment on this and contribute with the addition of SwiGLU as an activation function.\r\n", "Hi there! We would recommend you to modify the modeling file to suit your needs, you can then include it with your checkpoint using the [custom model on the Hub](https://huggingface.co/docs/transformers/custom_models#sending-the-code-to-the-hub) feature.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Awesome ! Thanks for your contributions @drAbreu " ]
1,669
1,672
1,672
NONE
null
### Feature request Since it has been recently used in [PaLM](https://arxiv.org/abs/2204.02311) and several papers report its better performance, it would be good to have access to a [SwiGLU](https://arxiv.org/pdf/2002.05202v1.pdf) implementation as an activation function. ### Motivation I am building a biomedical RoBERTa-based model with specific biomedical vocabulary. It could be seen as a PubMedBERT version with RoBERTa architecture and BPE vocab. Since RoBERTa is already a few years old, I want to also add recent improvements to architecture and training. I have tried to generate a RoBERTa model with two extra features. One is to remove bias from the FFN layers and the other to add the SwiGLU activation to these. My approach has been to copy the code of [roberta_modeling.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py) and modify its `RobertaIntermediate` class to an `EXcellRobertaIntermediate` class including the `swiglu` activation and a bias=`config.dense_layer_bias` attribute in the `nn.Linear` instantiation. This works well for a first training of the model. However, when loading the model I find problems. The first problem was that the model config has `activation=swiglu` and there is some ContextManager that does not allow for that option. I did a dirty workaround, keeping `activation=gelu` while keeping the swiglu in the code. This works and the model trains... but if I want to then further train it or use it for fine-tuning it will drop the extra layers generated by the swiglu. 
Here is an example output: ``` from smtag.excell_roberta.modeling_excell_roberta import EXcellRobertaForMaskedLM model = EXcellRobertaForMaskedLM.from_pretrained('/app/excell-roberta-training/checkpoint-50/') loading configuration file /app/excell-roberta-training/checkpoint-50/config.json Model config EXcellRobertaConfig { "architectures": [ "EXcellRobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bias_dense_layers": false, "bias_norm": false, "bos_token_id": 0, "classifier_dropout": null, "dense_layer_bias": false, "eos_token_id": 1, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 3, "position_embedding_type": "absolute", "sep_token_id": 1, "swiglu": true, "tokenizer_class": "RobertaTokenizerFast", "torch_dtype": "float32", "transformers_version": "4.20.0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 64000 } loading weights file /app/excell-roberta-training/checkpoint-50/pytorch_model.bin Some weights of the model checkpoint at /app/excell-roberta-training/checkpoint-50/ were not used when initializing EXcellRobertaForMaskedLM: ['roberta.encoder.layer.2.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.0.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.3.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.11.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.8.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.7.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.9.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.5.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.6.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.4.intermediate.intermediate_dense.weight', 
'roberta.encoder.layer.1.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.10.intermediate.intermediate_dense.weight'] - This IS expected if you are initializing EXcellRobertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing EXcellRobertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the weights of EXcellRobertaForMaskedLM were initialized from the model checkpoint at /app/excell-roberta-training/checkpoint-50/. If your task is similar to the task the model of the checkpoint was trained on, you can already use EXcellRobertaForMaskedLM for predictions without further training. model(**excell("acetyltransferase is something that should give extra subtokens to the tokenizer", truncation=True, padding="max_length", return_tensors='pt')) MaskedLMOutput(loss=None, logits=tensor([[[-0.1479, 0.3992, -0.3396, ..., -0.3373, -0.8730, -0.7037], [ 0.1812, 0.5421, -0.4052, ..., -0.0612, -0.6076, -1.0300], [-0.1578, 0.6487, -0.8400, ..., 0.0745, -0.6941, -0.7082], ..., [-0.2610, 0.6921, -0.6040, ..., -0.0400, -0.6101, -0.9326], [-0.2610, 0.6921, -0.6040, ..., -0.0400, -0.6101, -0.9326], [-0.2610, 0.6921, -0.6040, ..., -0.0400, -0.6101, -0.9326]]], grad_fn=<AddBackward0>), hidden_states=None, attentions=None) model = EXcellRobertaForMaskedLM.from_pretrained('/app/excell-roberta-training/checkpoint-50/') loading configuration file /app/excell-roberta-training/checkpoint-50/config.json Model config EXcellRobertaConfig { "architectures": [ "EXcellRobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bias_dense_layers": false, "bias_norm": false, "bos_token_id": 0, "classifier_dropout": null, "dense_layer_bias": false, "eos_token_id": 1, 
"hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 3, "position_embedding_type": "absolute", "sep_token_id": 1, "swiglu": true, "tokenizer_class": "RobertaTokenizerFast", "torch_dtype": "float32", "transformers_version": "4.20.0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 64000 } loading weights file /app/excell-roberta-training/checkpoint-50/pytorch_model.bin Some weights of the model checkpoint at /app/excell-roberta-training/checkpoint-50/ were not used when initializing EXcellRobertaForMaskedLM: ['roberta.encoder.layer.2.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.0.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.3.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.11.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.8.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.7.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.9.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.5.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.6.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.4.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.1.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.10.intermediate.intermediate_dense.weight'] - This IS expected if you are initializing EXcellRobertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). 
- This IS NOT expected if you are initializing EXcellRobertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the weights of EXcellRobertaForMaskedLM were initialized from the model checkpoint at /app/excell-roberta-training/checkpoint-50/. If your task is similar to the task the model of the checkpoint was trained on, you can already use EXcellRobertaForMaskedLM for predictions without further training. ``` I would like to check with you if there is any best way that this could be done, or whether it is possible at all without big modifications on transformers. We plan to eventually, once the model is published to submit a request to add it to the library. I would also be happy with a contribution of the SwiGLU activation if this would be possible. The main issue I see here is that instantiating a SwiGLU class requires instantiating an extra `nn.Linear` class. This therefore changes the behavior of the typical callables to other activation functions. I will be happy also to contribute on this topic. ### Your contribution I have added two main modifications to the original code of RoBERTa: First, I generated the class `SwiGLU`. I know that this is not the place to define this class, but this has been a test so far. 
```python class SwiGLU(nn.Module): def forward(self, x): x, gate = x.chunk(2, dim=-1) return F.silu(gate) * x ``` The other modification is: ```python class EXcellRobertaIntermediate(nn.Module): def __init__(self, config): super().__init__() self.dense = nn.Linear(config.hidden_size, config.intermediate_size, bias=config.dense_layer_bias) self.swiglu = config.swiglu if self.swiglu: self.swiglu = True self.intermediate_act_fn = SwiGLU() self.intermediate_dense = nn.Linear(config.intermediate_size//2, config.intermediate_size, bias=config.dense_layer_bias) elif isinstance(config.hidden_act, str): self.intermediate_act_fn = ACT2FN[config.hidden_act] else: self.intermediate_act_fn = config.hidden_act def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: if self.swiglu: hidden_states = self.dense(hidden_states) hidden_states = self.intermediate_act_fn(hidden_states) hidden_states = self.intermediate_dense(hidden_states) else: hidden_states = self.dense(hidden_states) hidden_states = self.intermediate_act_fn(hidden_states) return hidden_states ``` I would be happy to contribute the SwiGLU activation and eventually bring the entire model to transformers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20403/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20403/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20402
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20402/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20402/comments
https://api.github.com/repos/huggingface/transformers/issues/20402/events
https://github.com/huggingface/transformers/issues/20402
1,461,345,918
I_kwDOCUB6oc5XGl5-
20,402
Cached model files can't be referenced in a Docker container on AWS Lambda
{ "login": "chenye-814", "id": 30120334, "node_id": "MDQ6VXNlcjMwMTIwMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/30120334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chenye-814", "html_url": "https://github.com/chenye-814", "followers_url": "https://api.github.com/users/chenye-814/followers", "following_url": "https://api.github.com/users/chenye-814/following{/other_user}", "gists_url": "https://api.github.com/users/chenye-814/gists{/gist_id}", "starred_url": "https://api.github.com/users/chenye-814/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chenye-814/subscriptions", "organizations_url": "https://api.github.com/users/chenye-814/orgs", "repos_url": "https://api.github.com/users/chenye-814/repos", "events_url": "https://api.github.com/users/chenye-814/events{/privacy}", "received_events_url": "https://api.github.com/users/chenye-814/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,669
1,669
1,669
NONE
null
### System Info { "errorMessage": "Can't load tokenizer for 'model_name'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'model_name' is the correct path to a directory containing all relevant files for a BlenderbotTokenizer tokenizer.", "errorType": "OSError", "requestId": "", "stackTrace": [ " File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n", " File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n", " File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n", " File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n", " File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n", " File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n", " File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n", " File \"/var/task/app.py\", line 22, in <module>\n tokenizer = BlenderbotTokenizer.from_pretrained(MODEL_NM,local_files_only=True)\n", " File \"/var/task/transformers/tokenization_utils_base.py\", line 1761, in from_pretrained\n raise EnvironmentError(\n" ] } ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am pretty sure cached model files are included in my image and TRANSFORMERS_CACHE is also set properly when building the image. 
I can see that my model files exist in the container through log output: ``` models--xxxxxxx : ['blobs', 'refs', 'snapshots'] TRANSFORMERS_CACHE = ./parent_dir_for_model/ ``` Code is simply like this, and it works well on my local python 3.10 environment (it can automatically go for the path defined in `TRANSFORMERS_CACHE` without trying to download). ``` tokenizer = BlenderbotTokenizer.from_pretrained(path,local_files_only=True) model = BlenderbotForConditionalGeneration.from_pretrained(path,local_files_only=True) ``` My version of transformers is 4.24. I suspect something is wrong with the compatibility with the Lambda python3.9 runtime? ### Expected behavior Cached model files should be loaded properly based on the path defined by `TRANSFORMERS_CACHE`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20402/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20401
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20401/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20401/comments
https://api.github.com/repos/huggingface/transformers/issues/20401/events
https://github.com/huggingface/transformers/pull/20401
1,461,308,201
PR_kwDOCUB6oc5Dib9n
20,401
Use updated attributes when saving tokenizers
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "For the record: running slow tests with the generic fix:\r\n\r\n2 are (By-)T5 issues, other 3 from Bert/RocBert/NLLB\r\n\r\n```bash\r\n=FAILED tests/models/bert/test_tokenization_bert_tf.py::BertTokenizationTest::test_saved_model - ValueError: The two structures don't have the same nested structure.\r\nFAILED tests/models/byt5/test_tokenization_byt5.py::ByT5TokenizationTest::test_added_token_serializable - ValueError: Both extra_ids (125) and additional_special_tokens (['new_token']) are provided to ByT5Tokenizer. In this case the additional_special_tokens must include the extra_ids tokens\r\nFAILED tests/models/nllb/test_tokenization_nllb.py::NllbTokenizationTest::test_save_pretrained - ValueError: Non-consecutive added token 'ar_AR' found. Should have index 1229 but has index 1204 in saved vocabulary.\r\nFAILED tests/models/roc_bert/test_tokenization_roc_bert.py::BertTokenizationTest::test_sequence_builders - assert [1, 5, 6, 2] == [101, 5, 6, 102]\r\nFAILED tests/models/t5/test_tokenization_t5.py::T5TokenizationTest::test_added_token_serializable - ValueError: Both extra_ids (100) and additional_special_tokens (['new_token']) are provided to T5Tokenizer. In this case the additional_special_tokens must include the extra_ids tokens\r\n\r\n```", "Will try to make tokenization code better, but need to merge in order to unblock tiny model creation (so for pipeline testing and ONNX testing)" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? Use updated attributes when saving tokenizers. Fix #20395
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20401/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20401", "html_url": "https://github.com/huggingface/transformers/pull/20401", "diff_url": "https://github.com/huggingface/transformers/pull/20401.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20401.patch", "merged_at": 1669223787000 }
https://api.github.com/repos/huggingface/transformers/issues/20400
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20400/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20400/comments
https://api.github.com/repos/huggingface/transformers/issues/20400/events
https://github.com/huggingface/transformers/pull/20400
1,461,255,496
PR_kwDOCUB6oc5DiQf9
20,400
Fix doctest file path issue
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20400). All of your documentation changes will be reflected on that endpoint." ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? Fix doctest file path issue. (Currently, the whole suite fails from the beginning)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20400/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20400", "html_url": "https://github.com/huggingface/transformers/pull/20400", "diff_url": "https://github.com/huggingface/transformers/pull/20400.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20400.patch", "merged_at": 1669207235000 }
https://api.github.com/repos/huggingface/transformers/issues/20399
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20399/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20399/comments
https://api.github.com/repos/huggingface/transformers/issues/20399/events
https://github.com/huggingface/transformers/issues/20399
1,461,170,933
I_kwDOCUB6oc5XF7L1
20,399
Tokens truncated if exceeded 512 tensor shape
{ "login": "thoufeeq1218", "id": 86664855, "node_id": "MDQ6VXNlcjg2NjY0ODU1", "avatar_url": "https://avatars.githubusercontent.com/u/86664855?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thoufeeq1218", "html_url": "https://github.com/thoufeeq1218", "followers_url": "https://api.github.com/users/thoufeeq1218/followers", "following_url": "https://api.github.com/users/thoufeeq1218/following{/other_user}", "gists_url": "https://api.github.com/users/thoufeeq1218/gists{/gist_id}", "starred_url": "https://api.github.com/users/thoufeeq1218/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thoufeeq1218/subscriptions", "organizations_url": "https://api.github.com/users/thoufeeq1218/orgs", "repos_url": "https://api.github.com/users/thoufeeq1218/repos", "events_url": "https://api.github.com/users/thoufeeq1218/events{/privacy}", "received_events_url": "https://api.github.com/users/thoufeeq1218/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @thoufeeq1218! That's a lot of apple crumble! cc'ing in the vision specialist @NielsRogge 😎", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge ping on this issue.", "Hi,\r\n\r\nThe recommendation here is to apply a \"sliding window\" approach, which means that, if your sequence of tokens is > 512, you apply a sliding window with a certain stride (like a window that each time takes 512 tokens as input, and then you shift the window 128 tokens - this is called the stride) and then average predictions for tokens which are part of several windows.\r\n\r\nYou can specify `return_overflowing_tokens` and `stride` arguments in the processor/tokenizer's call method.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,675
1,675
NONE
null
### System Info Hi, Im using Layoutlmv2 pretrained transformer model, if I put "truncation =True" only half of the image were detected remain not ![res](https://user-images.githubusercontent.com/86664855/203488657-64f08e35-7c07-4580-a5f0-3b34675640a2.png) ### Who can help? @sanchit-gandhi @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction can't modify model weights shape to match the input tensor shape ### Expected behavior I want to get full predictions, Thanks in advance🙂
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20399/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20398
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20398/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20398/comments
https://api.github.com/repos/huggingface/transformers/issues/20398/events
https://github.com/huggingface/transformers/issues/20398
1,460,756,683
I_kwDOCUB6oc5XEWDL
20,398
Generating exactly 1 word (instead of token) using autoregressive LMs
{ "login": "joey234", "id": 12378617, "node_id": "MDQ6VXNlcjEyMzc4NjE3", "avatar_url": "https://avatars.githubusercontent.com/u/12378617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joey234", "html_url": "https://github.com/joey234", "followers_url": "https://api.github.com/users/joey234/followers", "following_url": "https://api.github.com/users/joey234/following{/other_user}", "gists_url": "https://api.github.com/users/joey234/gists{/gist_id}", "starred_url": "https://api.github.com/users/joey234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joey234/subscriptions", "organizations_url": "https://api.github.com/users/joey234/orgs", "repos_url": "https://api.github.com/users/joey234/repos", "events_url": "https://api.github.com/users/joey234/events{/privacy}", "received_events_url": "https://api.github.com/users/joey234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Hi @joey234 👋 \r\n\r\nBy default, we don't directly support that functionality. While generating, the model is unaware of where a word starts and ends -- it exclusively works at a token level. The solution you described is IMO the simplest way to do it, and the engineering effort of a working solution probably exceeds the extra computing time... unless you intend to use it for a large number of models!\r\n\r\nI also haven't seen other requests for this feature, so I won't devote resources to building it :) However, if you're interested in building it, I'd be happy to guide you.\r\n\r\nP.S.: you mentioned \"I would calculate the product\" related to scores. Be mindful that our scores are UNORMALIZED LOG-PROBABILITIES, so the probability of a word is `exp(sum of the log_softmax(scores))` or `prod(softmax(scores))` ⚠️ \r\n\r\n_________________________________________\r\n\r\nHere are my two cents on how to build a solution to this problem. You have two options, A and B. If it was me working on this problem, I'd go with B (A seems painful to build). Both option can be easily extended to force generate to stop after N words.\r\n\r\n### Option A (most compute efficient)\r\n\r\nIn essence, you must mix two pieces of knowledge: 1) knowing which tokens correspond to the end of a word; 2) building a custom stopping mechanism.\r\n\r\nPiece 1) depends from model to model, and you should refer to our tokenizer documentation to find the needed information. Here's an intro to how tokenizers work: https://huggingface.co/course/chapter2/4?fw=pt\r\n\r\nAs for 2), you can see related examples of stopping criteria [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/stopping_criteria.py). In essence, you want to return `True` when a token from 1) is generated. You'd have to build a new class and pass an instance to `generate` using the `stopping_criteria` argument.\r\n\r\n### Option B (easiest to build)\r\n\r\nHere you would only need to build a stopping criteria class. In essence, the class would decode the text with the tokenizer at each generation step and, if the word is complete (e.g. if a space is detected), return `True`. As described in option A, you would pass an instance to `generate` using the `stopping_criteria` argument. ", "Thank you so much for the guidance 🤗. I will try building based on the `stopping_criteria` argument.", "@joey234 Hi, any progress?" ]
1,669
1,672
1,669
NONE
null
### Feature request I am not sure if there exists any way to make the autoregressive type of LMs (like GPT) generate exactly 1 next word (may consists of multiple tokens) instead of just 1 token. e.g. ` Singing is a kind of ____ -> entertain (token), entertainment (word)` ### Motivation I came across this problem when trying to evaluate LMs performance on the next word prediction task, especially cloze style prompting (like in the LAMA dataset). AFAIK, most existing solutions just generate 1 token, which could lead to incorrect evaluation. ### Your contribution My current workaround is to overgenerate then split on whitespace to get the first word. Then to get the scores for the generated word, I would calculate the product of all the constituted tokens. This is in no way optimal, especially when we need to get the top k word predictions as we need to increase the number of beams and return sequences. Moreover, the first few generated tokens would likely to be the same across different beams.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20398/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20397
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20397/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20397/comments
https://api.github.com/repos/huggingface/transformers/issues/20397/events
https://github.com/huggingface/transformers/issues/20397
1,460,683,673
I_kwDOCUB6oc5XEEOZ
20,397
[CodeGen] RuntimeError: where expected condition to be a boolean tensor, but got a tensor with dtype Half
{ "login": "jrdzha", "id": 12738689, "node_id": "MDQ6VXNlcjEyNzM4Njg5", "avatar_url": "https://avatars.githubusercontent.com/u/12738689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jrdzha", "html_url": "https://github.com/jrdzha", "followers_url": "https://api.github.com/users/jrdzha/followers", "following_url": "https://api.github.com/users/jrdzha/following{/other_user}", "gists_url": "https://api.github.com/users/jrdzha/gists{/gist_id}", "starred_url": "https://api.github.com/users/jrdzha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jrdzha/subscriptions", "organizations_url": "https://api.github.com/users/jrdzha/orgs", "repos_url": "https://api.github.com/users/jrdzha/repos", "events_url": "https://api.github.com/users/jrdzha/events{/privacy}", "received_events_url": "https://api.github.com/users/jrdzha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Another way to reproduce is to use https://github.com/huggingface/transformers-bloom-inference\r\n\r\nand run:\r\n\r\n```\r\nmake codegen-mono\r\n```", "Hi @jrdzha \r\nThanks so much for your issue! Unfortunately I did not managed to reproduce your issue. How are you getting this error? Could you provide me a full example script please?\r\nHere is the snippet I used: \r\n```\r\nimport torch\r\nfrom transformers import AutoTokenizer, CodeGenForCausalLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"Salesforce/codegen-16B-mono\")\r\nmodel = CodeGenForCausalLM.from_pretrained(\"Salesforce/codegen-16B-mono\", device_map=\"auto\")\r\n\r\ntext = \"def main():\"\r\ninputs = tokenizer(text, return_tensors=\"pt\")\r\n\r\nmodel.generate(**inputs)\r\n```\r\nAlso can you make sure to use the latest version of `accelerate` ? `pip install --upgrade accelerate` + is there anything blocking on your side not to use the latest version of `transformers`? `pip install --upgrade transformers` ", "Seems also to be a duplicate of https://github.com/arrmansa/Basic-UI-for-GPT-J-6B-with-low-vram/issues/4 I also ran the experiment with `torch==1.10` and not getting any error ! ", "It seems this issue is inconsistent. I did have it work once, but without changing any code... I'll spend some more time trying to reproduce this more reliably...", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### System Info transformers==4.21.2 torch==1.10.2+cu113 HW: 12 cpu, 90gb ram, 1xA100-80g ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction **The following works with CodeGen-350M-mono / CodeGen-6B-mono** ``` import torch from transformers import AutoTokenizer, CodeGenForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono") model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono", device_map="auto") ``` **But throws the RuntimeError for CodeGen-16B-mono** ``` import torch from transformers import AutoTokenizer, CodeGenForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono") model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-16B-mono", device_map="auto") ``` ### Expected behavior Expect the 16B model to work as well.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20397/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20397/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20396
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20396/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20396/comments
https://api.github.com/repos/huggingface/transformers/issues/20396/events
https://github.com/huggingface/transformers/pull/20396
1,460,632,850
PR_kwDOCUB6oc5DgEGl
20,396
Add type hints for Whisper models
{ "login": "donelianc", "id": 7807897, "node_id": "MDQ6VXNlcjc4MDc4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donelianc", "html_url": "https://github.com/donelianc", "followers_url": "https://api.github.com/users/donelianc/followers", "following_url": "https://api.github.com/users/donelianc/following{/other_user}", "gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donelianc/subscriptions", "organizations_url": "https://api.github.com/users/donelianc/orgs", "repos_url": "https://api.github.com/users/donelianc/repos", "events_url": "https://api.github.com/users/donelianc/events{/privacy}", "received_events_url": "https://api.github.com/users/donelianc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @Rocketknight1, can you help me understand failed test? The following [message](https://app.circleci.com/pipelines/github/huggingface/transformers/52274/workflows/ccc607ce-79a5-4b31-a567-22e5e638338b/jobs/626467?invite=true#step-111-353) comes from the CI test logs:\r\n\r\n```\r\nYou have to specify either decoder_input_ids or decoder_inputs_embeds\r\n```\r\n\r\nAccording to Whisper [documentation](https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/whisper#transformers.TFWhisperForConditionalGeneration.call.past_key_values),\r\n\r\n> If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.\r\n\r\n\r\nBut, when I checked the argument `inputs_embeds` set for testing, I noticed that `inputs_embeds=None`.\r\nI'm confused because the model had `=None` for all its parameters before I added the type hints.", "Hey! I think you are right here, the documentation is misleading. This only happens for the `...ForConditionalGeneration` and not for the model. Opening a PR right now to fix this", "@Rocketknight1 this PR is ready for review. \r\n\r\nBesides including the type hints for main Whisper models, I changed the `output_type` to `TFSeq2SeqModelOutput` in the `TFWhisperModel`'s _docstrings_ since it has a wrong output (probably a confusion with the `TFWhisperForConditionalGeneration` model)." ]
1,669
1,670
1,670
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adding type hints for ` Whisper` model (TensorFlow). Related to the issue #16059. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? _Task requested [here](https://github.com/huggingface/transformers/issues/16059#issuecomment-1324302192)._ - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? _Ran `make fixup` before last commit._ ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20396/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20396/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20396", "html_url": "https://github.com/huggingface/transformers/pull/20396", "diff_url": "https://github.com/huggingface/transformers/pull/20396.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20396.patch", "merged_at": 1670855961000 }
https://api.github.com/repos/huggingface/transformers/issues/20395
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20395/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20395/comments
https://api.github.com/repos/huggingface/transformers/issues/20395/events
https://github.com/huggingface/transformers/issues/20395
1,460,569,727
I_kwDOCUB6oc5XDoZ_
20,395
some tokenizer(s) don't save the updated attributes
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }, { "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false } ]
[ "It turns out that we use\r\nhttps://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/tokenization_utils_fast.py#L735\r\nwhich is defined\r\nhttps://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/tokenization_utils_base.py#L1475\r\n\r\nIs this the expected behavior, i.e. we don't want to save the modified attributes like `model_max_length`?", "Hi @ydshieh, \r\n\r\nThis is a very good remark! I've also often wondered... what I'm afraid of is that for some attributes the tokenizer's behavior is different between:\r\n1. A tokenizer initialized with some parameters and then with a parameter that is modified on the fly\r\n2. A tokenizer that would be initialized with the final parameters of the previous tokenizer\r\n\r\nWhat is complicated with tokenizers is that all tokenizers share the lines of code you mention but many of them have specificities implemented on top of them and it's quite hard to be sure that we won't break more things than we fix (and this if we consider that we don't add a breaking change...). I think there must be a reason why historically it was chosen to only save the `init_kwargs` for the `tokenizer_config`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/tokenization_utils_base.py#L2084" ]
1,669
1,669
1,669
COLLABORATOR
null
### System Info transformers version: 4.25.0.dev0 Torch version: 1.13.0+cpu Cuda available: False Cuda version: None CuDNN version: None Number of GPUs available: 0 ### Description For `GPT2Tokenizer(Fast)`, Set `tokenizer.model_max_length` to `128` (originally `1024`), save it then reload, will give `tokenizer.model_max_length` being `1024`. ### Reproduction ```python from transformers import GPT2Tokenizer, GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained("gpt2") print(tokenizer.model_max_length) tokenizer.model_max_length = 128 print(tokenizer.model_max_length) tokenizer.save_pretrained("my-gpt2") tokenizer_loaded = GPT2TokenizerFast.from_pretrained("my-gpt2") print(tokenizer_loaded.model_max_length) ``` The output is ```bash 1024 128 1024 ``` ### Expected behavior `tokenizer_loaded.model_max_length` should be `128` in the above example. In general, the updated attribute(s) should be saved.
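The round-trip loss described in this report can be reproduced without `transformers` at all. Below is a minimal pure-Python sketch (the class and method names are illustrative, not the actual tokenizer internals) of what happens when only the `__init__`-time kwargs are persisted on save, as the comments above point out for `tokenizer_config`:

```python
class TinyTokenizer:
    """Toy stand-in for a tokenizer that persists only its init kwargs."""

    def __init__(self, model_max_length=1024):
        # Only what was passed at construction time is remembered for saving.
        self.init_kwargs = {"model_max_length": model_max_length}
        self.model_max_length = model_max_length

    def save_config(self):
        # Attributes mutated after __init__ never make it into init_kwargs,
        # so the mutation is silently dropped on save.
        return dict(self.init_kwargs)


tok = TinyTokenizer()
tok.model_max_length = 128            # runtime mutation, as in the report
reloaded = TinyTokenizer(**tok.save_config())
print(tok.model_max_length)           # 128
print(reloaded.model_max_length)      # 1024 -- the mutation was lost
```

A workaround consistent with this behavior is to pass the desired value at load time (e.g. `from_pretrained(..., model_max_length=128)`) so it lands in the init kwargs and survives the save/load round trip.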
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20395/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20394
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20394/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20394/comments
https://api.github.com/repos/huggingface/transformers/issues/20394/events
https://github.com/huggingface/transformers/issues/20394
1,460,450,555
I_kwDOCUB6oc5XDLT7
20,394
Regression in CLIPProcessor from 4.24.0 -> 4.25.0.dev0
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false } ]
[]
1,669
1,669
1,669
MEMBER
null
### System Info - `transformers` version: 4.24.0 / 4.25.0.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - Huggingface_hub version: 0.11.0.dev0 - PyTorch version (GPU?): 1.11.0+cpu (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu) - Jax version: 0.3.16 - JaxLib version: 0.3.15 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @amyeroberts @sg ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction There seems to be a regression of `CLIPProcessor` between current `main` and `4.24`. You can easily reproduce it by running the following script with current main `4.25.0.dev0` and `4.24` to see a difference: ```python #!/usr/bin/env python3 from transformers import CLIPProcessor import transformers from PIL import Image import PIL.Image import numpy as np import torchvision.transforms as tvtrans import requests from io import BytesIO print(transformers.__version__) url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" response = requests.get(url) image = Image.open(BytesIO(response.content)).convert("RGB") BICUBIC = PIL.Image.Resampling.BICUBIC image = image.resize([512, 512], resample=BICUBIC) image = tvtrans.ToTensor()(image) np_image = np.asarray(image) processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") pixel_values = processor(images=2 * [np_image], return_tensors="pt").pixel_values print(pixel_values.abs().sum()) print(pixel_values.abs().mean()) ``` The outputs for the different versions are as follows: ``` 4.24.0 tensor(287002.5000) tensor(0.9533) ``` ``` 4.25.0.dev0 tensor(503418.8125) tensor(1.6722) ``` The code snippet above comes from reproducing a problem that happens when updating `transformers` to main for https://github.com/SHI-Labs/Versatile-Diffusion . https://github.com/SHI-Labs/Versatile-Diffusion only works with `transformers==4.24.0` - the pipeline gives random results when using `transformers==4.25.0.dev0` ### Expected behavior It seems like a bug was introduced after the 4.24.0 release. The code snippet above might seem a bit edge-casy but I believe people have started to build any kind of image processing pipelines with CLIP already.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20394/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20393
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20393/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20393/comments
https://api.github.com/repos/huggingface/transformers/issues/20393/events
https://github.com/huggingface/transformers/issues/20393
1,460,439,119
I_kwDOCUB6oc5XDIhP
20,393
can't from transformers import TFBertModel
{ "login": "minig0lem", "id": 92259521, "node_id": "U_kgDOBX_EwQ", "avatar_url": "https://avatars.githubusercontent.com/u/92259521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/minig0lem", "html_url": "https://github.com/minig0lem", "followers_url": "https://api.github.com/users/minig0lem/followers", "following_url": "https://api.github.com/users/minig0lem/following{/other_user}", "gists_url": "https://api.github.com/users/minig0lem/gists{/gist_id}", "starred_url": "https://api.github.com/users/minig0lem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/minig0lem/subscriptions", "organizations_url": "https://api.github.com/users/minig0lem/orgs", "repos_url": "https://api.github.com/users/minig0lem/repos", "events_url": "https://api.github.com/users/minig0lem/events{/privacy}", "received_events_url": "https://api.github.com/users/minig0lem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is due to the latest release of TensorFlow which broke many things. You need to either install `transformers` from source (this is fixed on the main branch) or downgrade `Tensorflow` to 2.10. :-)", "This also broke the `evaluate` CI :) Will fix to `<=2.10` for now.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.8 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0+cpu (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @LysandreJik @Rocket ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ![image](https://user-images.githubusercontent.com/92259521/203403557-481c7ac6-fbd7-4a75-8cbb-088f25a8f43e.png) ### Expected behavior from transformers import BertTokenizer works well. but from transformers import TFBertModel doesn't work . It runs like picture above. How do I resolve this error(ModuleNotFoundError: No module named 'keras.saving.hdf5_format')?
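As the maintainers note in the comments above, the `ModuleNotFoundError: No module named 'keras.saving.hdf5_format'` comes from TensorFlow 2.11, which moved that module. A sketch of the two workarounds suggested in the thread (version pins follow the maintainers' suggestion of TensorFlow 2.10):

```shell
# Option 1: pin TensorFlow below 2.11, which still ships keras.saving.hdf5_format
pip install "tensorflow<=2.10"

# Option 2: install transformers from source, where the main branch has the fix
pip install git+https://github.com/huggingface/transformers
```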
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20393/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20392
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20392/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20392/comments
https://api.github.com/repos/huggingface/transformers/issues/20392/events
https://github.com/huggingface/transformers/issues/20392
1,460,311,563
I_kwDOCUB6oc5XCpYL
20,392
Add BART-LS
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Any update on this ? @KMFODA I can help please let me know if you want to collaborate on this ?", "Hey @thakursc1 I'm still waiting on someone from the HF team to confirm if this can be integrated into their codebase if we work on this as this only becomes beneficial for my use case if I can use it in the transformers master branch.\r\n\r\nHappy to collaborate on this as soon as we hear back.", "Hey @KMFODA, wondering if there are any updates on this? Thanks! ", "Hey @jmzeng. I haven't heard back from anyone from the HF team yet and unfortunately a few things have changed in my workloads and I don't think I'll be able to work on this. Maybe someone else can work on this if they have the bandwidth and ping the HF team when they have a draft PR for them to review?", "@KMFODA BART-LS looks like it would be a great addition to the library :) \r\n\r\nIf you or another community member would still like to add the model, please feel free to open a PR and let us know in the meantime if there's any difficulties integrating it. " ]
1,669
1,678
null
CONTRIBUTOR
null
### Model description BART-LS (Long Bart), presented in this [paper](https://arxiv.org/pdf/2209.10052.pdf), establishes a new SOTA on a number of NLP tasks and long form datasets. It uses pooling-augmented block-wise attention and a novel pre-training strategy to achieve this. Given my interest in long text summarisation I'm very keen to get this into the wonderful transformers library and to start benchmarking it against other models. Therefore, happy to take this on and ping any members of the team if I face any blockers. If this fits with the library's plans let me know and I'll start working on a PR for this. ### Open source status - [X] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation Original Model Repo (which includes the model weights): https://github.com/facebookresearch/bart_ls
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20392/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20392/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/20391
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20391/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20391/comments
https://api.github.com/repos/huggingface/transformers/issues/20391/events
https://github.com/huggingface/transformers/pull/20391
1,460,263,962
PR_kwDOCUB6oc5DezVM
20,391
Generate: fix plbart generation tests
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
MEMBER
null
# What does this PR do? As the title says :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20391/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20391", "html_url": "https://github.com/huggingface/transformers/pull/20391", "diff_url": "https://github.com/huggingface/transformers/pull/20391.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20391.patch", "merged_at": 1669139765000 }
https://api.github.com/repos/huggingface/transformers/issues/20390
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20390/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20390/comments
https://api.github.com/repos/huggingface/transformers/issues/20390/events
https://github.com/huggingface/transformers/pull/20390
1,460,242,970
PR_kwDOCUB6oc5DeuvS
20,390
[OPT/Galactica] Load large `galactica` models
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I am not 100% sure if this is the approach we want to have, despite I can understand the intention. Would like to hear from @sgugger.\r\n\r\nFor reference, `class OPTDecoderLayer` from `galai` does pass `bias` to `OPTAttention`\r\n\r\nhttps://github.com/paperswithcode/galai/blob/c1e16979c1748e7e823fe96da941d6df60f1006b/galai/architecture.py#L280", "Yes, I think it was a mistake from our side. We should either port a new model (with controlable bias and layer norm) and remove the `bias` boolean from `OPTAttention` as it is always set to `True` or go with this fix", "Thanks! \r\nSorry for the last minute clarification as I just realized that the description and title are not clear, but the main goal of this PR is to support loading and using large `galatica` models that uses `OPT` architecture, initially reported in: https://huggingface.co/facebook/galactica-30b/discussions/4 / therefore the title + description is slightly misleading\r\nThe snippet to reproduce: \r\n```\r\nimport torch\r\nfrom transformers import AutoTokenizer, OPTForCausalLM, AutoModel\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/galactica-30b\")\r\nmodel = OPTForCausalLM.from_pretrained(\"facebook/galactica-30b\", device_map=\"auto\", torch_dtype=torch.float16)\r\n\r\ninput_text = \"The Transformer architecture [START_REF]\"\r\ninput_ids = tokenizer(input_text, return_tensors=\"pt\").input_ids.to(\"cuda\")\r\n\r\noutputs = model.generate(input_ids)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\nIn case we don't merge this PR we may be want to add `galatica` as a separate new architecture as some `galactica` models (such as `30b`) does not use `bias` on linear layers and don't have any learnable weights on their `LayerNorm`", "I understood, and yes, that will be the alternative if this PR is declined :-)", "Hi @sgugger and @younesbelkada, it's one of the Galactica authors here. We think that there might be something wrong with the 30bn model specifically on HuggingFace. We're currently migrating our galai library to use the huggingface model without our custom OPT config. There seems to have been a conversion process applied to our models to give null weights to the biases (or something else similar to the OPT models), but specifically not on the 30bn file. Hopefully, this can be resolved without a PR by fixing the model file. See the great investigations done by @Jackmin801 on this ticket https://github.com/paperswithcode/galai/issues/37#issuecomment-1323929437", "> For reference, `class OPTDecoderLayer` from `galai` does pass `bias` to `OPTAttention`\r\n> \r\n\r\nHi @ydshieh, the `bias` flag is passed only so that `Galactica` extension of `OPT` architecture is backward compatible. We set all the additional config parameters to the values used by `OPT` (see https://github.com/paperswithcode/galai/blob/main/galai/config.py#L92-L95) so that `OPT` checkpoints work as before, but we set them accordingly in `Galactica` configs (see f.e., https://huggingface.co/mrm8488/galactica-125m/blob/main/config.json#L18). Whether these changes should be ported back to `modeling_opt` or the `Galactica` should be forked-out from it depends on how much it deviates from the general philosophy of Transformers as @sgugger noted.", "Hi @AnthonyHartshorn \r\nThanks a lot for your message. Indeed, big kudos to @Jackmin801 for the investigation, his investigation in https://huggingface.co/facebook/galactica-30b/discussions/4#637e90571dbae0919104b582 helped me define the rootcause of the bug. \r\nI guess it can be also fixed by saving zero bias and ones for the layer norms, updating the weights on the hub with the new ones can do the trick too yes.", "As @sgugger said above, this goes very clearly against the foundation of `transformers` to add configurable parameters to a previous model architecture to support a new model architecture.\r\n\r\nHowever, fixing this any other way would result in some setups breaking in the wild; it would require us to update the architecture name to `galactica` instead of `opt` which would break every existing setup that currently uses these models unless the upgrade to the latest version. \r\n\r\nGiven that, I'm also inclined to accept this change even if it goes against our design decisions. If we could do it all over again however, I would heavily push for a new model architecture.", "Thanks @LysandreJik for approving this PR. I have another related question. As pointed by Jackmin801 in the comment linked above by Anthony (https://github.com/paperswithcode/galai/issues/37#issuecomment-1323929437), almost all of the checkpoints were converted from our float16 checkpoints and uploaded to the hub in full float32 precision (except for 30B which is an exact copy). That's not the best for user experience: download time, disk usage and loading time doubles for no benefit. I wonder if we can fix it, there are couple options I see:\r\n\r\n* upload our float16 checkpoints once this PR is merged. This would not be backward compatible as this PR is required,\r\n* do the same conversion that @mrm8488 did, but `.half()` the models before exporting. This would be almost backward compatible except for the case when a user doesn't specify `pytorch_dtype` when loading a model, as after that the models would load as float16 by default,\r\n* keep the existing checkpoints, potentially fix the 30B to be float32 as well for consistency (it wasn't working before this PR anyway). Not the best user experience,\r\n* add new checkpoints galactica-125m-fp16, ..., galactica-120b-fp16. Might be too confusing for users.\r\n\r\nWhat do you think? I'm in favor of the second option as it's the best for backward compatibility and user experience.", "PyTorch automatically converts checkpoint weights to the dtype of the model when you load the state_dict, so option 2 is actually 100% backward compatible.", "Thanks @sgugger, I missed the fact that `torch_dtype` is part of `config.json`.", "This PR is ok for me - galactica is build on top of OPT so one could fine-tune OPT using these two configs => so this PR is def ok for me", "Thanks everyone! \r\n@mkardas @mrm8488 : https://huggingface.co/facebook/galactica-30b/discussions/5 since now this PR has been merged, can you merge this PR to fix the initial issue for `30b` ? ", "@younesbelkada I'm not a member of the org yet. I've verified my work email address, but wasn't auto-added. How can I learn who are the admins?", "@patrickvonplaten can you add me to https://huggingface.co/facebook (same username)?", "I can merge the PR if this is the only thing needed! 🤗 \r\n", "Thanks @ArthurZucker. I was working on providing float16 weights in the backward compatible way as discussed above. I think it's best to just fix all the checkpoints to make them float16 and keep zero biases for backward compatibility with HF 4.21.0-4.24.0. I'm in the middle of preparing a new HF hub PR for this, I'll let you know in case I won't be able to merge it.\r\n\r\n@sgugger\r\nFrom my tests on backward compatibility, it seems that calling `OPTForCausalLM.from_pretrained` with `torch_dtype=None, device_map=None` results in `float32` weights regardless of what's in the checkpoint bin files and `config.json`. However, `torch_dtype=None, device_map=\"auto\"` results in the same weights type as in the checkpoint bin files, regardless of `config.json`. Is it to be expected?", "I think this is expected as if you want to load a model natively without using `accelerate` (i.e. without adding `device_map=\"auto\"`), `transformers` will automatically load the weights in fp32, in this case whenever you want to load a model with its native dtype of the weights you need to use `torch_dtype=\"auto\"`. ", "Mmm, no. If it is indeed the case then it's a bug. Do you have a small reproducer/repo ID I could look at?", "This is what I used:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import OPTForCausalLM\r\n\r\nfor device_map in [None, \"auto\"]:\r\n for dtype in [None, torch.float16, torch.float32]:\r\n model = OPTForCausalLM.from_pretrained(\r\n \"facebook/galactica-125m\",\r\n revision=\"refs/pr/6\",\r\n torch_dtype=dtype,\r\n device_map=device_map\r\n )\r\n print(f\"[device_map={device_map}]: {dtype} -> {model.lm_head.weight.dtype}\")\r\n print()\r\n```\r\n\r\nWhat I get for `refs/pr/6` (which has `torch_dtype=float32` in config.json and `float16` bin files):\r\n\r\n```\r\n[device_map=None]: None -> torch.float32\r\n[device_map=None]: torch.float16 -> torch.float16\r\n[device_map=None]: torch.float32 -> torch.float32\r\n\r\n[device_map=auto]: None -> torch.float16\r\n[device_map=auto]: torch.float16 -> torch.float16\r\n[device_map=auto]: torch.float32 -> torch.float32\r\n```\r\n\r\nFor `facebook/opt-125m` the output is the same, even though `opt-125m` has `float16` both in config.json and bin files.", "PRs replacing the existing `float32` checkpoints with `float16` checkpoints:\r\n\r\nhttps://huggingface.co/facebook/galactica-125m/discussions/6\r\nhttps://huggingface.co/facebook/galactica-1.3b/discussions/6\r\nhttps://huggingface.co/facebook/galactica-6.7b/discussions/8\r\nhttps://huggingface.co/facebook/galactica-30b/discussions/6\r\nhttps://huggingface.co/facebook/galactica-120b/discussions/6", "Found the issue. The PR mentioned above should make the result consistent between `device_map=None` and `device_map=\"auto\"`." ]
1,669
1,670
1,669
CONTRIBUTOR
null
# What does this PR do? This PR fixes a small bug on `OPT`. Before, the `bias` term [was always set to `True`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/opt/modeling_opt.py#L277), leading some external implementations to hardcode it if they wanted to train an OPT model without bias terms. See for example [here](https://github.com/paperswithcode/galai/blob/c1e16979c1748e7e823fe96da941d6df60f1006b/galai/architecture.py#L280). This PR aims to give more control over whether `bias` terms are used on the `Linear` layers of OPT. The PR also fixes the same issue with `nn.LayerNorm`. Some derivatives of OPT do not use learnable parameters for the layer norm's weights and biases (i.e., they set `elementwise_affine` to `False`), so this avoids hardcoded hacks in the future. This PR should not be a breaking change, as the default values of these booleans are set to `True` (matching the previous hardcoded behavior). This PR should also fix: https://huggingface.co/facebook/galactica-30b/discussions/4 (ofc, after updating the relevant config files) cc @sgugger @ydshieh @mrm8488 All slow tests pass
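The backward-compatibility argument in the description can be illustrated with a small pure-Python sketch (the class is illustrative, not the actual `transformers` code; `enable_bias` and `layer_norm_elementwise_affine` are the kind of config knobs the PR describes, defaulting to the previously hardcoded behavior):

```python
class SketchOPTConfig:
    """Toy config showing why new knobs with old-behavior defaults are safe."""

    def __init__(self, **kwargs):
        # Checkpoints that predate the change never set these keys, so they
        # fall back to the values that were previously hardcoded (True/True),
        # which keeps existing OPT checkpoints loading exactly as before.
        self.enable_bias = kwargs.pop("enable_bias", True)
        self.layer_norm_elementwise_affine = kwargs.pop(
            "layer_norm_elementwise_affine", True
        )


opt_cfg = SketchOPTConfig()  # e.g. a vanilla OPT config with no new keys
galactica_cfg = SketchOPTConfig(
    enable_bias=False, layer_norm_elementwise_affine=False
)
print(opt_cfg.enable_bias)        # True  -- old checkpoints are unaffected
print(galactica_cfg.enable_bias)  # False -- galactica-30b-style configs opt out
```

Because both booleans default to the old behavior, only configs that explicitly set them (such as the bias-free galactica variants) see any change.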
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20390/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20390", "html_url": "https://github.com/huggingface/transformers/pull/20390", "diff_url": "https://github.com/huggingface/transformers/pull/20390.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20390.patch", "merged_at": 1669812915000 }
https://api.github.com/repos/huggingface/transformers/issues/20389
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20389/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20389/comments
https://api.github.com/repos/huggingface/transformers/issues/20389/events
https://github.com/huggingface/transformers/pull/20389
1,460,159,191
PR_kwDOCUB6oc5DecUQ
20,389
[WIP] Add a `get_token_embeddings_size` to `PreTrainedModel`
{ "login": "damian0815", "id": 144366, "node_id": "MDQ6VXNlcjE0NDM2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/144366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/damian0815", "html_url": "https://github.com/damian0815", "followers_url": "https://api.github.com/users/damian0815/followers", "following_url": "https://api.github.com/users/damian0815/following{/other_user}", "gists_url": "https://api.github.com/users/damian0815/gists{/gist_id}", "starred_url": "https://api.github.com/users/damian0815/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/damian0815/subscriptions", "organizations_url": "https://api.github.com/users/damian0815/orgs", "repos_url": "https://api.github.com/users/damian0815/repos", "events_url": "https://api.github.com/users/damian0815/events{/privacy}", "received_events_url": "https://api.github.com/users/damian0815/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "WIP because missing tests, i guess. I don't have a local environment set up atm, this was a quick edit on github. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20389). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "i still am interested in finishing this, just gotta find some time.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,674
1,674
NONE
null
Adds an intuitive and obvious method, `get_token_embeddings_size()`, to get the size of the current token embeddings on `PreTrainedModel`. This can be used to compute the new size when calling `resize_token_embeddings()`. ### Motivation Current API design around the `resize_token_embeddings()` method requires doing the following to increase the size of the token embeddings by 1: ```python # add 1 new embedding current_embeddings = my_xformer.resize_token_embeddings(None) new_embeddings_size = current_embeddings.num_embeddings + 1 my_xformer.resize_token_embeddings(new_embeddings_size) ``` This is counterintuitive and bleeds implementation details to the call site. It requires me to know 1. that calling a "resize" method with the argument `None` returns the object to be resized (which is not intuitive), and 2. that "size" means the property `num_embeddings` on the returned object (admittedly it's not difficult to guess, but it is still a *guess*, and is in fact an implementation detail that I shouldn't need to know). # What does this PR do? This PR enables instead the following: ```python # add 1 new embedding current_embeddings_size = my_xformer.get_token_embeddings_size() new_embeddings_size = current_embeddings_size + 1 my_xformer.resize_token_embeddings(new_embeddings_size) ``` This provides an intuitively-named method to determine the current "size", and appropriately hides the implementation detail that "size" means the `num_embeddings` property on the object to be resized. Fixes #20377 . ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [*] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [*] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - #20377 . 
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20389/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20389", "html_url": "https://github.com/huggingface/transformers/pull/20389", "diff_url": "https://github.com/huggingface/transformers/pull/20389.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20389.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20388
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20388/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20388/comments
https://api.github.com/repos/huggingface/transformers/issues/20388/events
https://github.com/huggingface/transformers/pull/20388
1,460,135,671
PR_kwDOCUB6oc5DeXK8
20,388
Generate: use `GenerationConfig` as the basis for `.generate()` parametrization
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Fully agree with @sgugger here. \r\n\r\nTotally ok to just link to the `GenerateConfig` doc page -> think this make the docs online also cleaner actually. \r\nAlso I'd maybe rename `generate_config` to just `config` in generate or do you think this will cause confusion with the model's config?", "Overall, this is a great improvement !", "@sgugger @patrickvonplaten It is ready for review.\r\n\r\nMajor changes since the last review request:\r\n1. `ModelClass.from_pretrained()` pre-loads a `generation_config` attribute to the model if a `generation_config.json` exists, as suggested above\r\n2. Handle the case where the model config has nested dictionaries (e.g. a `decoder` component)\r\n3. Keep full retrocompatibility, including ad hoc `model.config` changes before calling `GenerationMixin` functions (that's why you'll see `GenerationConfig.from_model_config` in so many places, all those functions may be called independently 😓 )\r\n4. Add documentation and enhance examples\r\n\r\nAlso FYI, I'm off until the 8th of Dec 🌴 ", "Agreed with you @patrickvonplaten , that's a very good idea!", "@sgugger @patrickvonplaten \r\n\r\nHere is a summary of the key changes since your last review:\r\n- (thanks for the suggestion!) In `model.from_pretrained`, `model.generation_config` is set from the model config if the generation config doesn’t exist, effectively making all future generation-capable models hold a default generation config parameter. NOTE: This required minor legacy handling logic, for the case where the user makes ad hoc model config changes to control generation (which the previous solution intentionally accounted for)\r\n- added a default `prepare_inputs_for_generation`, which raises `NotImplementedError`, and updated the new `can_generate` check accordingly. 
Contrarily to @patrickvonplaten's suggestion, I've kept the `_validate_model()` check -- it returns an informative exception to the user if they try to generate with an incorrect class of a model with generation capabilities, like `AutoModel.from_pretrained(“gpt2”)`. Not using the right class was a common source of issues in the past.\r\n- Improved the example to use named generation config files with an actual T5 example. I think two named generation configs would make the example too long 🤔 (cc @patrickvonplaten) \r\n\r\nI was thinking of doing the following in a follow-up PR (to avoid adding more features to this already long PR that is blocking Arthur on Whisper work):\r\n- Add the needed modifications such that `model.save_pretrained` can push to the hub a default generation config if the file doesn’t yet exist, from the `model.generation_config` parameter (as @sgugger suggested)", "@patrickvonplaten -- I'm merging to unblock @ArthurZucker's work on Whisper. \r\n\r\nComments to the points above are still helpful, and I can include them in a subsequent PR! :D ", "The addition of `can_generate()` is breaking in Optimum, where we use `generate()` on models which do not inherit from `PreTrainedModel`. Why isn't `can_generate()` in `GenerationMixin`? Can a model inherit from `GenerationMixin` but not use `generate()`? cc @gante ", "@fxmarty `can_generate()` is called in `PreTrainedModel` at initialization time, to initialize the (new) generation config if it's a generation-compatible model. All models in `transformers` inherit `GenerationMixin`, regardless of whether they can generate, but in fact `can_generate()` is tangling the two classes at the moment, which is undesirable.\r\n\r\nI may be able to rework this part, but I need to know -- what breaks on your end exactly?", "> All models in transformers inherit GenerationMixin\r\n\r\nYes thanks, I forgot this part!\r\n\r\nThe PR I linked fix the issue on our end. 
I think what is breaking is that `generate()` is no more usable on models that are not inheriting from `PreTrainedModel` or that don't redefine `can_generate()`, because of https://github.com/huggingface/transformers/blob/8637316e5e94ba0a2493e5df7846f2f23f46eaef/src/transformers/generation/utils.py#L934\r\n\r\nBut it's a very minor issue, and the fix is easy, so it's probably not too important.", "@fxmarty 👍 \r\n\r\nIn the long run, I'd like to see if it's possible to separate the two (`PreTrainedModel` and `GenerationMixin`, where a model only inherits `GenerationMixin` if it can generate). It should help libraries downstream like `optimum`!\r\n\r\nLet me know if I can be of further assistance." ]
1,669
1,672
1,671
MEMBER
null
# What does this PR do? This PR introduces `generation_config` as the main controller of `.generate()` calls. In particular: 1. It adds a `from_model_config` class method to `GenerateConfig`, to load a generation config from a (legacy) model config; 2. Adds a `generation_config` argument to `.generate()`. If it is not passed, it will be loaded from a pre-determined sequence (check for `generation_config.json` -> if it fails, load from the model config); 3. Because we always have a `generation_config` in `.generate()`, which holds all parametrization, gets rid of all local variables; 4. ⚠️ Changes the arguments to `generate()` (and corresponding docstring) so as to exclude `generate_config` parameters (i.e. they were moved to `**kwargs`). This is mostly to avoid a massive docstring and list of arguments that make `.generate()` very messy at the moment -- `GenerationConfig`'s docstring explains all the ways `.generate()` can be controlled, organized by type of manipulation, while `.generate()`'s docstring focuses on the API. Notes: I've successfully run SLOW tests of GPT2 (which has a `generate_config.json`) and BART (which does not) against this PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20388/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20388", "html_url": "https://github.com/huggingface/transformers/pull/20388", "diff_url": "https://github.com/huggingface/transformers/pull/20388.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20388.patch", "merged_at": 1671128841000 }
https://api.github.com/repos/huggingface/transformers/issues/20387
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20387/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20387/comments
https://api.github.com/repos/huggingface/transformers/issues/20387/events
https://github.com/huggingface/transformers/pull/20387
1,460,101,400
PR_kwDOCUB6oc5DePvE
20,387
[ESM] fix `accelerate` tests for esmfold
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Fixes slow tests that were not passing for `ESM`. In fact I was running `RUN_SLOW=1 pytest tests/models/esm/test_modeling_esm.py` and forgot to run `RUN_SLOW=1 pytest tests/models/esm/test_modeling_esmfold.py` cc @sgugger @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20387/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20387", "html_url": "https://github.com/huggingface/transformers/pull/20387", "diff_url": "https://github.com/huggingface/transformers/pull/20387.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20387.patch", "merged_at": 1669138016000 }
https://api.github.com/repos/huggingface/transformers/issues/20386
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20386/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20386/comments
https://api.github.com/repos/huggingface/transformers/issues/20386/events
https://github.com/huggingface/transformers/pull/20386
1,459,967,537
PR_kwDOCUB6oc5DdyaQ
20,386
chore: add link to the video cls notebook.
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge I don't have the rights to merge the PR. If you have, could you perform the duties? " ]
1,669
1,669
1,669
MEMBER
null
We recently added a [notebook](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) that shows how to fine-tune the VideoMAE model on a custom dataset. This PR adds the notebook link to the model doc. Cc: @osanseviero
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20386/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20386/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20386", "html_url": "https://github.com/huggingface/transformers/pull/20386", "diff_url": "https://github.com/huggingface/transformers/pull/20386.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20386.patch", "merged_at": 1669655424000 }
https://api.github.com/repos/huggingface/transformers/issues/20385
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20385/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20385/comments
https://api.github.com/repos/huggingface/transformers/issues/20385/events
https://github.com/huggingface/transformers/pull/20385
1,459,951,237
PR_kwDOCUB6oc5Ddu07
20,385
Indicate better minimal version of PyTorch in big model inference
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? As pointed out in https://github.com/huggingface/accelerate/issues/880, the minimum version when using a `device_map` is not PyTorch 1.9 but PyTorch 1.11. This PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20385/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20385", "html_url": "https://github.com/huggingface/transformers/pull/20385", "diff_url": "https://github.com/huggingface/transformers/pull/20385.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20385.patch", "merged_at": 1669131710000 }