| column | dtype | values |
|---|---|---|
| url | stringlengths | 62 to 66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76 to 80 |
| comments_url | stringlengths | 71 to 75 |
| events_url | stringlengths | 69 to 73 |
| html_url | stringlengths | 50 to 56 |
| id | int64 | 377M to 2.15B |
| node_id | stringlengths | 18 to 32 |
| number | int64 | 1 to 29.2k |
| title | stringlengths | 1 to 487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k to 1.71k |
| updated_at | int64 | 1.54k to 1.71k |
| closed_at | int64 | 1.54k to 1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0 to 234k |
| reactions | dict | |
| timeline_url | stringlengths | 71 to 75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/22394
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22394/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22394/comments
https://api.github.com/repos/huggingface/transformers/issues/22394/events
https://github.com/huggingface/transformers/pull/22394
1,641,673,041
PR_kwDOCUB6oc5M8xOM
22,394
[Pix2Struct] Add support to resize embeddings
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge could you share a notebook on finetuning in this dataset?" ]
1,679
1,680
1,679
CONTRIBUTOR
null
# What does this PR do? This PR adds `resize_token_embeddings` support for Pix2Struct. This was required when I fine-tuned Pix2Struct on a key-value pair dataset (the one from [this Donut notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Donut/CORD/Fine_tune_Donut_on_a_custom_dataset_(CORD)_with_PyTorch_Lightning.ipynb)). It often helps to add additional special tokens to the language decoder. However, I noticed `tie_word_embeddings` is set to `True` in both the general config of Pix2Struct (`Pix2StructConfig`) and its text config (`Pix2StructTextConfig`). Printing out the weights of the decoder's embedding layer and its language modeling head seems to reveal that the weights aren't tied: ``` from transformers import Pix2StructForConditionalGeneration model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-base") print(model.decoder.embed_tokens.weight) print(model.decoder.lm_head.weight) ``` So before merging this PR, we probably need to update the `tie_word_embeddings` attribute in the configs of these models, because loading the model with this branch would otherwise break. Currently you have to do: ``` from transformers import Pix2StructConfig, Pix2StructForConditionalGeneration config = Pix2StructConfig(text_config={"tie_word_embeddings": False}, tie_word_embeddings=False) model = Pix2StructForConditionalGeneration.from_pretrained("google/pix2struct-textcaps-base", config=config) ``` to make it work. The PR also fixes some typos in configuration_pix2struct.py. cc @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22394/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22394/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22394", "html_url": "https://github.com/huggingface/transformers/pull/22394", "diff_url": "https://github.com/huggingface/transformers/pull/22394.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22394.patch", "merged_at": 1679931488000 }
https://api.github.com/repos/huggingface/transformers/issues/22393
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22393/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22393/comments
https://api.github.com/repos/huggingface/transformers/issues/22393/events
https://github.com/huggingface/transformers/pull/22393
1,641,550,782
PR_kwDOCUB6oc5M8XX6
22,393
(Re-)Enable Nightly + Past CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The DS part looks good, @ydshieh\r\n\r\nI wonder if you want to continue testing torchdynamo at all. Users wanting to use it should be encouraged to move to torch>=2.0 instead, where it's built in. But a subject for a different PR I guess.", "> The DS part looks good, @ydshieh\r\n> \r\n> I wonder if you want to continue testing torchdynamo at all. Users wanting to use it should be encouraged to move to torch>=2.0 instead, where it's built in. But a subject for a different PR I guess.\r\n\r\nFrom my side, it would be great if I don't have to deal with all the potential (installation/runtime) issues for such 3rd party libraries across with different torch versions (at least, not with previous torch versions). It's best to focus on the torch and torch+DeepSpeed testing results.", "oh, I meant not testing torchdynamo in general transformers-wide. For sure you don't need any unrelated packages installed to test deepspeed, other its own deps.\r\n", "Without TensorFlow Past CI - it takes 2.5 days to run the Nightly CI + PyTorch Past CI.\r\nI put the schedule to trigger the workflow on Sunday and Thursday at 2 AM.\r\n\r\nThe TensorFlow past CI will only run under push events." ]
1,679
1,680
1,680
COLLABORATOR
null
# What does this PR do? (Re-)Enable Nightly + Past CI cc @stas00 : I don't think there is something (related to `DeepSpeed`) that really needs your review in this PR. But if you prefer, you can take a look the 2 `Dockerfile` files under `docker` (and more files if you want). Thank you. p.s. I launched a full run (without TensorFlow past version CIs) [here](https://github.com/huggingface/transformers/actions/runs/4532718828)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22393/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22393", "html_url": "https://github.com/huggingface/transformers/pull/22393", "diff_url": "https://github.com/huggingface/transformers/pull/22393.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22393.patch", "merged_at": 1680203196000 }
https://api.github.com/repos/huggingface/transformers/issues/22392
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22392/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22392/comments
https://api.github.com/repos/huggingface/transformers/issues/22392/events
https://github.com/huggingface/transformers/issues/22392
1,641,546,472
I_kwDOCUB6oc5h2ALo
22,392
Inconsistent Normalization for ViTImageProcessor when `do_resize` is False
{ "login": "Interpause", "id": 42513874, "node_id": "MDQ6VXNlcjQyNTEzODc0", "avatar_url": "https://avatars.githubusercontent.com/u/42513874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Interpause", "html_url": "https://github.com/Interpause", "followers_url": "https://api.github.com/users/Interpause/followers", "following_url": "https://api.github.com/users/Interpause/following{/other_user}", "gists_url": "https://api.github.com/users/Interpause/gists{/gist_id}", "starred_url": "https://api.github.com/users/Interpause/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Interpause/subscriptions", "organizations_url": "https://api.github.com/users/Interpause/orgs", "repos_url": "https://api.github.com/users/Interpause/repos", "events_url": "https://api.github.com/users/Interpause/events{/privacy}", "received_events_url": "https://api.github.com/users/Interpause/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "Hi @Interpause, thanks for raising this issue! \r\n\r\nIndeed, this is a funny behaviour. This is happening because of the use of the PIL library to resize images and the rescaling behaviour that happens in `ToTensor`. \r\n\r\nTo explain in more detail, I'll refer to the input `im` and `im_pil` and `to_tens(im)` as `im_arr` below. Where `im_pil` is a `PIL.Image.Image` with integer pixel values between 0-255, and `im_arr` an array with pixel values between 0-1. \r\n\r\nIn the first case, when`do_resize` is `True`:\r\n* `im_pil` and `im_arr` are converted to numpy arrays, preserving their pixel values\r\n* When passed to `resize` the images are converted to a `PIL.Image.Image` object. `im_pil` can be converted directly. However for `im_arr`, the values have to be multiplied by 255, as PIL can only store integer pixel values between 0-255.\r\n* Images are resized then converted back to numpy arrays. `im_arr` now is a numpy array with values between 0-255, rather than the original 0-1. This shouldn't be happening - I'll try to think about the best way to handle this and open a PR. \r\n\r\nFor the other cases, no conversion to `PIL` is happening and this behaviour is expected. Without rescaling by 255, the input arrays are different and different outputs are expected. Rescaling `to_tens(im)` by 255 makes them equivalent and so the same output is expected. \r\n" ]
1,679
1,680
1,680
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```py from transformers import AutoImageProcessor from PIL import Image import torchvision.transforms as T im = Image.open("t.png").convert("RGB") to_tens = T.ToTensor() extractor = AutoImageProcessor.from_pretrained("./pretrained/facebook/vit-msn-small") print(extractor) # Instance of ViTImageProcessor. # When `do_resize` is True: x1 = extractor(im, return_tensors="pt").pixel_values x2 = extractor(to_tens(im), return_tensors="pt").pixel_values print(abs(x2 - x1).mean()) # Close to 0; Correct. # When `do_resize` is False: x1 = extractor(im, return_tensors="pt", do_resize=False).pixel_values x2 = extractor(to_tens(im), return_tensors="pt", do_resize=False).pixel_values print(abs(x2 - x1).mean()) # Not close to 0; Differing behaviour. # Additional multiplication of 255 to torch.Tensor input: x1 = extractor(im, return_tensors="pt", do_resize=False).pixel_values x2 = extractor(to_tens(im) * 255, return_tensors="pt", do_resize=False).pixel_values print(abs(x2 - x1).mean()) # Close to 0; Correct again. ``` ### Expected behavior Currently, when `do_resize` is False, the tensor has to be multiplied by 255 first, while when `do_resize` is True, it is not needed. The behaviour should be consistent.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22392/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22391
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22391/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22391/comments
https://api.github.com/repos/huggingface/transformers/issues/22391/events
https://github.com/huggingface/transformers/issues/22391
1,641,500,669
I_kwDOCUB6oc5h10_9
22,391
Docs: Clarify stride for upcoming token classification pipeline
{ "login": "adrianeboyd", "id": 5794899, "node_id": "MDQ6VXNlcjU3OTQ4OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5794899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adrianeboyd", "html_url": "https://github.com/adrianeboyd", "followers_url": "https://api.github.com/users/adrianeboyd/followers", "following_url": "https://api.github.com/users/adrianeboyd/following{/other_user}", "gists_url": "https://api.github.com/users/adrianeboyd/gists{/gist_id}", "starred_url": "https://api.github.com/users/adrianeboyd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adrianeboyd/subscriptions", "organizations_url": "https://api.github.com/users/adrianeboyd/orgs", "repos_url": "https://api.github.com/users/adrianeboyd/repos", "events_url": "https://api.github.com/users/adrianeboyd/events{/privacy}", "received_events_url": "https://api.github.com/users/adrianeboyd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sounds like something missing indeed. Would you like to open a PR with such documentation?", "I'm not confident that I could hit the style that you're looking for in your docs, especially given the history behind the naming.\r\n\r\nIt might be a lot simpler to document if `stride` were renamed, though, would you potentially consider renaming it for `TokenClassificationPipeline`?", "cc @Narsil what do you think?", "Indeed the name `stride` is not particularly well chosen, my oversight on this.\r\n\r\nSeems we have the same thing in question answering: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L361\r\nAnd here: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/automatic_speech_recognition.py#L169\r\n\r\nI think controlling the overlap is much better in general since when you sending text (or audio) you have no idea of the max_length of the truncated texts, so controlling real stride would mean requiring arithmetic with that maximum size. (stride = tokenizer,model_max_length - overlap)\r\n\r\nGiven the history of that parameter I'm not sure what we should do. Documenting it better would be a start.\r\n\r\nRenaming would warrant a rename if those 2 other pipelines. My current off the bat feeling is that we simply shouldn't. It's ok if it just means something different than for the convolution operator.", "For the name `stride`, I choose the same as mentioned for tokenizers: \r\nstride (int, optional) — The length of the previous first sequence to be included in the overflowing sequence\r\n\r\nThis parameter is directly passed through the tokenizer in the `preprocess()` method. We can change the name of course, but to keep consistency throughout the documentation, it's better to change all names related to `stride` which in fact refer to the number of overlapping tokens from the previous chunk/sequence.", "@luccailliau @amyeroberts \r\nIt seems like an underkill TBH; why not add an alias and deprecate this param name, i.e., renaming it to 'window_overlap'?\r\nThis does create very unexpected bugs as people assume it means 'stride.'" ]
1,679
1,704
1,681
NONE
null
I just tried out the upcoming `stride` option for token classification pipelines (#21771, very useful!) without being familiar with the non-standard use of `stride` in the underlying tokenizer settings. I think it would be helpful to also explain in the pipelines API documentation that the `stride` parameter sets the overlap and not the stride. I thought it was the stride and spent a while trying to figure out why the performance was so abysmal.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22391/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22391/timeline
completed
null
null
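The overlap-versus-stride confusion discussed in the issue above can be seen directly at the tokenizer level, since the pipeline forwards `stride` to the tokenizer. The sketch below is only illustrative; the `bert-base-cased` checkpoint, the window size, and the toy sentence are assumptions and not part of the original report.

```python
from transformers import AutoTokenizer

# "stride" here is the number of tokens shared between consecutive chunks
# (the overlap), not the step size of a sliding window.
tok = AutoTokenizer.from_pretrained("bert-base-cased")

text = "one two three four five six seven eight nine ten eleven twelve"
enc = tok(
    text,
    max_length=8,                    # small window so the text overflows into chunks
    truncation=True,
    stride=2,                        # two overlapping tokens between chunks
    return_overflowing_tokens=True,
)

for ids in enc["input_ids"]:
    print(tok.convert_ids_to_tokens(ids))
# The last two content tokens of each chunk reappear at the start of the next chunk.
```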
https://api.github.com/repos/huggingface/transformers/issues/22390
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22390/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22390/comments
https://api.github.com/repos/huggingface/transformers/issues/22390/events
https://github.com/huggingface/transformers/issues/22390
1,641,475,777
I_kwDOCUB6oc5h1u7B
22,390
ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils'
{ "login": "Andy824", "id": 37765645, "node_id": "MDQ6VXNlcjM3NzY1NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/37765645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Andy824", "html_url": "https://github.com/Andy824", "followers_url": "https://api.github.com/users/Andy824/followers", "following_url": "https://api.github.com/users/Andy824/following{/other_user}", "gists_url": "https://api.github.com/users/Andy824/gists{/gist_id}", "starred_url": "https://api.github.com/users/Andy824/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Andy824/subscriptions", "organizations_url": "https://api.github.com/users/Andy824/orgs", "repos_url": "https://api.github.com/users/Andy824/repos", "events_url": "https://api.github.com/users/Andy824/events{/privacy}", "received_events_url": "https://api.github.com/users/Andy824/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, this function was removed several versions ago. It was only relevant for downloading files before the model Hub was properly setup. You should now use the `huggingface_hub` library to manage downloads of models from the Hub.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### System Info `from transformers.file_utils import default_cache_path, hf_bucket_url` I want to import hf_bucket_url on Colab, but I got the error "ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils' (/usr/local/lib/python3.9/dist-packages/transformers/file_utils.py)" ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils' (/usr/local/lib/python3.9/dist-packages/transformers/file_utils.py) ### Expected behavior Please tell me am I doing something wrong?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22390/timeline
completed
null
null
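As a hedged sketch of the replacement suggested in the comments above: the `huggingface_hub` library exposes `hf_hub_url` for building file URLs and `hf_hub_download` for fetching files into the local cache. The repo id and filename below are placeholders chosen purely for illustration.

```python
from huggingface_hub import hf_hub_download, hf_hub_url

# Build the download URL for a file in a Hub repo (closest analogue to the
# removed hf_bucket_url helper).
url = hf_hub_url(repo_id="bert-base-uncased", filename="config.json")
print(url)

# Download the file and get its path in the local cache.
local_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(local_path)
```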
https://api.github.com/repos/huggingface/transformers/issues/22389
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22389/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22389/comments
https://api.github.com/repos/huggingface/transformers/issues/22389/events
https://github.com/huggingface/transformers/issues/22389
1,641,400,172
I_kwDOCUB6oc5h1cds
22,389
Exception: expected value at line 1 column 1
{ "login": "wccccp", "id": 55964850, "node_id": "MDQ6VXNlcjU1OTY0ODUw", "avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wccccp", "html_url": "https://github.com/wccccp", "followers_url": "https://api.github.com/users/wccccp/followers", "following_url": "https://api.github.com/users/wccccp/following{/other_user}", "gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}", "starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wccccp/subscriptions", "organizations_url": "https://api.github.com/users/wccccp/orgs", "repos_url": "https://api.github.com/users/wccccp/repos", "events_url": "https://api.github.com/users/wccccp/events{/privacy}", "received_events_url": "https://api.github.com/users/wccccp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @wccccp 👋 \r\n\r\nThat exception is not due to `transformers`, but rather due to a `.json` file (or similar). There is probably something fishy with your tokenizer checkpoint.\r\n\r\nSee [this](https://stackoverflow.com/questions/16573332/jsondecodeerror-expecting-value-line-1-column-1-char-0) stack overflow issue.", "> 嘿@wccccp 👋\r\n> \r\n> 该异常不是由于`transformers`,而是由于`.json`文件(或类似文件)。您的分词器检查点可能有问题。\r\n> \r\n> 请参阅[此](https://stackoverflow.com/questions/16573332/jsondecodeerror-expecting-value-line-1-column-1-char-0)堆栈溢出问题。\r\nyou are right,the question is solute\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This is Exactly What is Happening For Me:\r\n\r\nI'm Working On My Personal Project, This Error Happens While Using The Official Tokenizer For RWKV Model using Langchain which uses rwkv pip package and tokenizer module\r\n\r\nFile \"/content/Intellique/main.py\", line 442, in <module>\r\n main()\r\n File \"/content/Intellique/main.py\", line 408, in main\r\n result = execution_agent(OBJECTIVE, task[\"task_name\"])\r\n File \"/content/Intellique/main.py\", line 363, in execution_agent\r\n return call_execution_llm(prompt)\r\n File \"/content/Intellique/main.py\", line 290, in call_execution_llm\r\n excu_llm = rwkv_llm()\r\n File \"/content/Intellique/main.py\", line 42, in rwkv_llm\r\n model = RWKV(model=model_path, tokens_path=\"/content/Intellique/20B_tokenizer.json\", strategy='cuda fp16i8 *20 -> cuda fp16')\r\n File \"pydantic/main.py\", line 339, in pydantic.main.BaseModel.__init__\r\n task_name = task_parts[1].strip()\r\n File \"pydantic/main.py\", line 1102, in pydantic.main.validate_model\r\n File \"/usr/local/lib/python3.9/dist-packages/langchain/llms/rwkv.py\", line 113, in validate_environment\r\n values[\"tokenizer\"] = tokenizers.Tokenizer.from_file(values[\"tokens_path\"])\r\nException: expected value at line 1 column 1", "Does Anyone Got Solution For This. @wccccp ....", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @sgugger @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction File "/mnt1/wcp/BEELE/BELLE-main/generate_instruction.py", line 28, in tokenizer = AutoTokenizer.from_pretrained(checkpoint) File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 679, in from_pretrained return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1804, in from_pretrained return cls._from_pretrained( File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1958, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/models/bloom/tokenization_bloom_fast.py", line 118, in init super().init( File "/home/appuser/miniconda3/envs/wcppy39/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 111, in init fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file) Exception: expected value at line 1 column 1 ### Expected behavior i hope the file is run
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22389/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22388
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22388/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22388/comments
https://api.github.com/repos/huggingface/transformers/issues/22388/events
https://github.com/huggingface/transformers/pull/22388
1,641,395,266
PR_kwDOCUB6oc5M711V
22,388
Translated documentation in italian
{ "login": "nickprock", "id": 11136646, "node_id": "MDQ6VXNlcjExMTM2NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11136646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickprock", "html_url": "https://github.com/nickprock", "followers_url": "https://api.github.com/users/nickprock/followers", "following_url": "https://api.github.com/users/nickprock/following{/other_user}", "gists_url": "https://api.github.com/users/nickprock/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickprock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickprock/subscriptions", "organizations_url": "https://api.github.com/users/nickprock/orgs", "repos_url": "https://api.github.com/users/nickprock/repos", "events_url": "https://api.github.com/users/nickprock/events{/privacy}", "received_events_url": "https://api.github.com/users/nickprock/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
## What does this PR do? Italian translation of doc related to the preprocessing of :hugs: Transformers. * updated _toctree.yml * added perf_infer_tpu.mdx * added perf_infer_special.mdx * added perf_train_tpu.mdx * added perf_train_special.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [[#17459](https://www.linkedin.com/feed/hashtag/?keywords=%2317459)](https://github.com/huggingface/transformers/issues/17459) @sgugger, @stevhliu, @MKhalusova and @omarespejel
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22388/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22388", "html_url": "https://github.com/huggingface/transformers/pull/22388", "diff_url": "https://github.com/huggingface/transformers/pull/22388.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22388.patch", "merged_at": 1679924930000 }
https://api.github.com/repos/huggingface/transformers/issues/22387
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22387/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22387/comments
https://api.github.com/repos/huggingface/transformers/issues/22387/events
https://github.com/huggingface/transformers/issues/22387
1,641,388,958
I_kwDOCUB6oc5h1Zue
22,387
Pipeline for inference "You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset"
{ "login": "MLBurnham", "id": 41241150, "node_id": "MDQ6VXNlcjQxMjQxMTUw", "avatar_url": "https://avatars.githubusercontent.com/u/41241150?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MLBurnham", "html_url": "https://github.com/MLBurnham", "followers_url": "https://api.github.com/users/MLBurnham/followers", "following_url": "https://api.github.com/users/MLBurnham/following{/other_user}", "gists_url": "https://api.github.com/users/MLBurnham/gists{/gist_id}", "starred_url": "https://api.github.com/users/MLBurnham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MLBurnham/subscriptions", "organizations_url": "https://api.github.com/users/MLBurnham/orgs", "repos_url": "https://api.github.com/users/MLBurnham/repos", "events_url": "https://api.github.com/users/MLBurnham/events{/privacy}", "received_events_url": "https://api.github.com/users/MLBurnham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, there are a few things:\r\n\r\nFirst:\r\n- I cannot really reproduce your example since your data is missing, meaning I'm not able to see exactly what's going on for your particular case.\r\n\r\nSecond:\r\n\r\nThere are 2 things at play, `streaming` vs `n-calls` and `batching` vs `no-batching`.\r\nStreaming is always better that doing n-calls for a GPU because in the streaming fashion, we can make use of torch `DataLoader` meaning using separate thread for data preparation, which should keep the GPU busier.\r\nHowever, this has the most significant impact when the actual GPU runtime is small (making the CPU overhead more visible).\r\n\r\nThe second is batching, which is not automatically a win:\r\nhttps://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching\r\n\r\n\r\nIn your particular case, using a GTX 970 this is what I get:\r\n\r\n```bash\r\nNo batching, streaming\r\n100%|████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:15<00:00, 6.50it/s]\r\nBatching, streaming\r\n100%|████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:03<00:00, 32.92it/s]\r\nNo batching, no streaming\r\n 8%|███████▏ | 8/100 [00:01<00:14, 6.55it/s]/home/nicolas/src/transformers/src/transformers/pipelines/base.py:1070: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset\r\n warnings.warn(\r\n100%|████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:15<00:00, 6.55it/s]\r\n```\r\n\r\nSo it seems batching is helping (understandable here, I have extremely aligned data so no waste of padding and model seems simple enough). \r\n\r\nScript:\r\n\r\n```python\r\nfrom transformers import pipeline\r\nimport tqdm\r\n\r\n# initialize pipeline\r\nclassifier = pipeline(\r\n \"zero-shot-classification\",\r\n model=\"MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli\",\r\n device=0,\r\n)\r\n\r\n\r\ncandidate_labels = [\"politics\", \"science\", \"fashion\"]\r\n\r\n\r\nTOTAL = 100\r\nSENTENCE = \"This is a test\"\r\n\r\n\r\ndef data():\r\n for i in range(TOTAL):\r\n yield SENTENCE\r\n\r\n\r\nprint(\"No batching, streaming\")\r\nfor result in tqdm.tqdm(classifier(data(), candidate_labels=candidate_labels), total=TOTAL):\r\n pass\r\n # print(result)\r\nprint(\"Batching, streaming\")\r\nfor result in tqdm.tqdm(classifier(data(), candidate_labels=candidate_labels, batch_size=24), total=TOTAL):\r\n pass\r\n # print(result)\r\nprint(\"No batching, no streaming\")\r\nfor i in tqdm.tqdm(range(TOTAL)):\r\n result = classifier(SENTENCE, candidate_labels=candidate_labels)\r\n pass\r\n # print(result)\r\n\r\n```", "Note:\r\n\r\n> for result in classifier(KeyDataset(samples, 'text'), labels, hypothesis_template = template, multi_label = False, batch_size = 32):\r\n\r\nThis is the line of code I'm concerned about. It's perfectly ok if there's a relatively low amount of different labels (meaning low amount of datasets being created). However, if you're creating datasets with very low amount of data, then the overhead of creating the dataset + dataloader + spawning the threads might actually kill performance here.", "Thank you for your assistance, this is all very insightful. My dataset is a set of tweets with three categories, I had assumed it was overhead slowing it down but wasn't sure. 
\r\n\r\nThat said I'm still not really clear on what is triggering this warning, and it seems to be inconsistent. Passing it via KeyDataset(), a list, or a generator like in your example all seem to trigger the warning but never consistently. In this image I used a generator and the warning wasn't triggered on the first two iterations of the loop, but then was triggered on the third every iteration thereafter.\r\n![image](https://user-images.githubusercontent.com/41241150/228046456-1372fb97-1e46-4b5f-a0ce-60ebd1beda1c.png)\r\n\r\nI once passed the data as a list and the warning wasn't triggered on any iteration of the loop, but when I refreshed the data and re-ran the loop with no changes it was triggered on the second and all subsequent iterations.\r\n\r\nBelow I've shared the complete code and a sample of the data if that's helpful. This version uses the generator function for batching rather than the KeyDataset() function. The warning is almost always triggered. I tried removing the classification loop from the function as well and the warning still triggered, weirdly on the 7th and 8th iteration of the loop.\r\n\r\n```python\r\nimport pandas as pd\r\nfrom transformers import pipeline\r\nfrom datasets import Dataset\r\nfrom tqdm import tqdm\r\n\r\n# initialize classifier\r\nclassifier = pipeline(\"zero-shot-classification\", model='MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli', device = 1, batch_size = 16)\r\n\r\n# define data streamer\r\ndef data_stream(samples):\r\n for i in range(samples.num_rows):\r\n yield samples['text'][i]\r\n\r\n# classifier function with batching option\r\ndef classify_tweets(targets, labels, label_columns, classifier, data, batching=False):\r\n \"\"\"\r\n Classify tweets based on given targets and labels using a HuggingFace pipeline.\r\n\r\n Args:\r\n - targets: list of targets in the data frame that will be classified\r\n - labels: list of labels that will be passed to the template\r\n - label_columns: name of the label columns\r\n - classifier: HuggingFace pipeline object\r\n - data: pandas DataFrame that contains the tweets to classify\r\n - batching: whether to use batching or not\r\n\r\n Returns:\r\n - pandas DataFrame with modified columns\r\n\r\n \"\"\"\r\n\r\n # Create label column names\r\n label_col_names = [target + '_lab' for target in targets]\r\n data = data.copy() # suppress setting with copy warning\r\n\r\n # convert to huggingface dataset for batching\r\n dataset = Dataset.from_pandas(data) if batching else None\r\n\r\n # Classify tweets for each target\r\n for i in tqdm(range(len(targets)), desc=\"Classifying tweets\"):\r\n target = targets[i]\r\n # define template\r\n template = 'The author of this tweet {} ' + target +'.'\r\n\r\n if batching:\r\n samples = dataset.filter(lambda text: text[targets[i]] == 1)\r\n # Use classifier to get predictions for each sample\r\n res = []\r\n for result in classifier(data_stream(samples), labels, hypothesis_template = template, multi_label = False, batch_size = 32):\r\n res.append(result)\r\n else:\r\n # Use classifier to get predictions from list of text samples with the target\r\n res = classifier(list(data.loc[data[target] == 1, 'text']), labels, hypothesis_template=template, multi_label=False)\r\n\r\n # Add results to dataframe\r\n data.loc[data[target] == 1, label_col_names[i]] = [label['labels'][0] for label in res]\r\n\r\n # recode results to integers\r\n for column in tqdm(label_col_names, desc=\"Re-coding results\"):\r\n data.loc[:,column] = data[column].replace(to_replace = 
{'supports':-1, 'opposes':1, 'does not express an opinion about': 0})\r\n \r\n # Fill NaN values with zero\r\n data[label_col_names] = data[label_col_names].fillna(0)\r\n # Create columns for liberal and conservative classifications\r\n data[label_columns + '_lib'] = [1 if label <= -1 else 0 for label in data[label_col_names].sum(axis = 1)]\r\n data[label_columns + '_con'] = [1 if label >= 1 else 0 for label in data[label_col_names].sum(axis = 1)]\r\n\r\n return data\r\n\r\n# define targets to be classified and labels to use\r\ntargets = ['Stewart', 'Oliver', 'Maddow', 'Hayes', 'O\\'Donnell', 'Klein', 'Krugman', 'Thunberg']\r\nlabels = ['supports', 'opposes', 'does not express an opinion about']\r\n\r\nlib_df = classify_tweets(targets = targets, labels = labels, label_columns = 'libmed', classifier = classifier, data = lib_df, batching=False)\r\n```\r\n\r\n[libsample.csv](https://github.com/huggingface/transformers/files/11082282/libsample.csv)\r\n", "The warning is generated after simply 10 different calls of the pipeline on GPU (since with streaming there's only 1 call):\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L1069\r\n\r\nI'll look into this more thoroughly tomorrow.", "Ahh that makes sense. So my current loop will trigger the warning regardless of whether or not I'm streaming because it divides the data based on which hypotheses should be used. I'm not sure if there is a more appropriate triggering condition or if the wording of the warning could be tweaked. Might be work a look though in case there is some other poor soul out there like me thinking their data isn't properly streaming/batching.\r\n\r\nAppreciate your help!", "Ok, I had to rework your example so that I could understand what was going on.:\r\n\r\nUltimately I see similar results:\r\n\r\n```\r\nBatching\r\n124it [00:24, 5.07it/s]\r\nNo Batching\r\n124it [00:32, 3.77it/s]\r\nRaw iteration|\r\n124it [00:34, 3.63it/s]\r\n```\r\n\r\nIn terms of management, the main thing is that your n targets are actually n different datasets. With the snippet I got I don't think it's actually an issue, but with much larger datasets iterating over the ignored values might start to become an significant overhead (especially with added targets).\r\n\r\nI think having n different datasets, and iterating on each is perfectly OK.\r\n\r\nIn order to ignore the warning, you could just reset the call_count. (`classifier.call_count = 0`)\r\nI don't think adding a new parameter is worth the effort since the overhead is still there and the warning can also just be safely ignored. 
(The warning is there mostly to avoid the naive calls on each separate item which do seem slower in my tests even if not by much)\r\n\r\n```python\r\nfrom transformers import pipeline\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\nfrom tqdm import tqdm\r\n\r\n# initialize classifier\r\nclassifier = pipeline(\r\n \"zero-shot-classification\",\r\n model=\"MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli\",\r\n device=0,\r\n)\r\n# define targets to be classified and labels to use\r\nlib_df = pd.read_csv(\"libsample.csv\")\r\ndataset = Dataset.from_pandas(lib_df)\r\ncandidate_labels = [\"supports\", \"opposes\", \"does not express an opinion about\"]\r\n\r\n\r\ndef data(dataset, target):\r\n for row in dataset:\r\n if row[target]:\r\n yield row[\"text\"]\r\n\r\n\r\n# for target in [\"Stewart\", \"Oliver\", \"Maddow\", \"Hayes\", \"O'Donnell\", \"Klein\", \"Krugman\", \"Thunberg\"]:\r\nfor target in [\"Stewart\"]:\r\n hypothesis_template = \"The author of this tweet {} \" + target + \".\"\r\n print(\"Batching\")\r\n for result in tqdm(\r\n classifier(\r\n data(dataset, target),\r\n candidate_labels=candidate_labels,\r\n hypothesis_template=hypothesis_template,\r\n multi_label=False,\r\n batch_size=32,\r\n ),\r\n ):\r\n pass\r\n print(\"No Batching\")\r\n for result in tqdm(\r\n classifier(\r\n data(dataset, target),\r\n candidate_labels=candidate_labels,\r\n hypothesis_template=hypothesis_template,\r\n multi_label=False,\r\n batch_size=1,\r\n ),\r\n ):\r\n pass\r\n # print(result)\r\n print(\"Raw iteration\")\r\n for text in tqdm(\r\n data(dataset, target),\r\n ):\r\n result = classifier(\r\n text,\r\n candidate_labels=candidate_labels,\r\n hypothesis_template=hypothesis_template,\r\n multi_label=False,\r\n )\r\n pass\r\n # print(result)\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### System Info Transformers 4.16.2 Windows 10 Python 3.9.12 Datasets 2.2.2 @Narsil ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm currently using the zero shot text classifier pipeline with datasets and batching. The "You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset" warning appears with each iteration of my loop. I am using datasets and I am batching. I can't tell if this warning is a bug or just not descriptive enough to help me diagnose the true issue. ```python # initialize pipeline classifier = pipeline("zero-shot-classification", model='MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli', device = 0, batch_size = 24) # convert pandas df to dataset dataset = Dataset.from_pandas(data) # loop through documents according to subsamples that contain target name in the text for i in tqdm(range(len(targets)), desc="Classifying docs"): target = targets[i] # define template template = 'The author of this doc {} ' + target +'.' # get a list of text samples that contain the target samples = dataset.filter(lambda text: text[targets[i]] == 1) # Use classifier to get predictions for each sample res = [] for result in classifier(KeyDataset(samples, 'text'), labels, hypothesis_template = template, multi_label = False, batch_size = 32): res.append(result) # add results to pandas df data.loc[data[target] == 1, label_col_names[i]] = pd.Series([label['labels'][0] for label in res], index=data.index[data[target] == 1]) ``` As a side note, I appear to be getting significantly worse performance when using datasets and batching vs. just converting samples to a list and classifying sequentially. I'm assuming that's just a function of my data and not related to any bug though. ### Expected behavior Batched classification without the "You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset" warning.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22387/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22386
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22386/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22386/comments
https://api.github.com/repos/huggingface/transformers/issues/22386/events
https://github.com/huggingface/transformers/pull/22386
1,641,297,757
PR_kwDOCUB6oc5M7gwj
22,386
Add memory-efficient attention and optional features to Llama
{ "login": "s-JoL", "id": 16948304, "node_id": "MDQ6VXNlcjE2OTQ4MzA0", "avatar_url": "https://avatars.githubusercontent.com/u/16948304?v=4", "gravatar_id": "", "url": "https://api.github.com/users/s-JoL", "html_url": "https://github.com/s-JoL", "followers_url": "https://api.github.com/users/s-JoL/followers", "following_url": "https://api.github.com/users/s-JoL/following{/other_user}", "gists_url": "https://api.github.com/users/s-JoL/gists{/gist_id}", "starred_url": "https://api.github.com/users/s-JoL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-JoL/subscriptions", "organizations_url": "https://api.github.com/users/s-JoL/orgs", "repos_url": "https://api.github.com/users/s-JoL/repos", "events_url": "https://api.github.com/users/s-JoL/events{/privacy}", "received_events_url": "https://api.github.com/users/s-JoL/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22386). All of your documentation changes will be reflected on that endpoint.", "> Thanks for your PR. Transformers is not meant to be a modular toolbox, so we don't add every feature to every model. Llama was trained without stable embedding or shared input-output vectors, so we won't add them to the modeling code of Llama. Likewise for the dropouts.\r\n> \r\n> Since you are training new models using this code, as soon as you have checkpoints available, I would advise to make a PR with a new model (mostly copied from Llama) like we have all the variants of GPT-2 for instance.\r\n\r\nThank you for your response. The memory_efficient_attention in xformers is actually mentioned in the original Llama paper. So, it is possible to integrate this component into the Llama training code.", "@Bayes-Song \r\nThanks for the PR\r\nCan we use this when Torch2.0 is supported?\r\nLike in\r\nhttps://github.com/huggingface/diffusers/pull/2303/files\r\n\r\ncc: @sgugger ", "If it's non-breaking and actually faster on **all** setups, we can add it yes. The PR makes other modifications for the time being, which we cannot accept as mentioned in my comment above.", "Currently I have trained a new model based on the above changes, and I am adding a new model to the transformers library based on @sgugger 's suggestion. I will re-open a PR after I finish all the code. " ]
1,679
1,682
1,682
CONTRIBUTOR
null
This PR adds memory-efficient attention to Llama, resulting in a 30% improvement in training efficiency. We also removed some transposes to adapt to the shapes allowed by the *memory_efficient_attention* operation. Additionally, we have added hidden dropout and attention dropout to the model, which helps with better generalization during training. Furthermore, two optional features have been added: stable embedding, used in Bloom, and shared input-output vectors, used in PALM. These features have been tested and found to improve training stability and performance. The main changes are as follows: ```python if xops is not None and self.training: attn_weights = None attn_output = xops.memory_efficient_attention(query_states, key_states, value_states, attn_bias=self.causal_mask, p=self.dropout_prob) ``` As we use operators from the xformers library, we need to add a dependency on xformers. We implemented pre-training of the Llama model based on transformers + accelerate, incorporating the modifications described above. https://github.com/Bayes-Song/Open-Llama/blob/main/README_en.md
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22386/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22386/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22386", "html_url": "https://github.com/huggingface/transformers/pull/22386", "diff_url": "https://github.com/huggingface/transformers/pull/22386.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22386.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22385
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22385/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22385/comments
https://api.github.com/repos/huggingface/transformers/issues/22385/events
https://github.com/huggingface/transformers/issues/22385
1,641,235,661
I_kwDOCUB6oc5h00TN
22,385
How to use the method model.generate() correctly?
{ "login": "zt991211", "id": 57473580, "node_id": "MDQ6VXNlcjU3NDczNTgw", "avatar_url": "https://avatars.githubusercontent.com/u/57473580?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zt991211", "html_url": "https://github.com/zt991211", "followers_url": "https://api.github.com/users/zt991211/followers", "following_url": "https://api.github.com/users/zt991211/following{/other_user}", "gists_url": "https://api.github.com/users/zt991211/gists{/gist_id}", "starred_url": "https://api.github.com/users/zt991211/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zt991211/subscriptions", "organizations_url": "https://api.github.com/users/zt991211/orgs", "repos_url": "https://api.github.com/users/zt991211/repos", "events_url": "https://api.github.com/users/zt991211/events{/privacy}", "received_events_url": "https://api.github.com/users/zt991211/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "@zt991211, thanks for raising an issue! \r\n\r\nCould you provide a more detailed snippet which is reproducible i.e. can be directly copied and run as well as a full traceback of the error encountered? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### System Info - `transformers` version: 4.19.4 - Platform: Linux-4.9.93-010.ali3000.alios7.x86_64-x86_64-with-redhat-7.2-Paladin - Python version: 3.7.11 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ## This is my code self.decoder.generate(inputs=A, do_sample=True, max_length=80, min_length=1, top_k=50, top_p=0.95) ## self.decoder here is the BART model and A here is the input_features not the input_ids. ## The official document says that for the model of the encoder-decoder architecture, the generate method can input input_features, but an error occurs. According to the error log, input_features input is not supported, and only input_ids can be used. ## error log: RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.FloatTensor instead (while checking arguments for embedding) ## I want to know whether my usage is wrong or there is a bug in the source code ### Expected behavior I hope my code will return output_ids
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22385/timeline
completed
null
null
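For context on the error in the issue above (a sketch, not a fix to the user's own code): BART's encoder starts from a token-embedding lookup, so `generate()` expects integer `input_ids`; float `input_features` are an argument used by speech models, which is why the embedding layer raises the dtype error quoted in the report. A minimal call with the same sampling arguments might look as follows; the `facebook/bart-base` checkpoint and the input sentence are placeholders.

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# generate() consumes integer token ids produced by the tokenizer,
# not floating-point feature tensors.
inputs = tokenizer("An example sentence to continue.", return_tensors="pt")
output_ids = model.generate(
    inputs["input_ids"],
    do_sample=True,
    max_length=80,
    min_length=1,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```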
https://api.github.com/repos/huggingface/transformers/issues/22384
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22384/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22384/comments
https://api.github.com/repos/huggingface/transformers/issues/22384/events
https://github.com/huggingface/transformers/issues/22384
1,641,167,505
I_kwDOCUB6oc5h0jqR
22,384
ValueError: Could not load model EleutherAI/gpt-neo-2.7B with any of the following classes:
{ "login": "hongyi-zhao", "id": 11155854, "node_id": "MDQ6VXNlcjExMTU1ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/11155854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hongyi-zhao", "html_url": "https://github.com/hongyi-zhao", "followers_url": "https://api.github.com/users/hongyi-zhao/followers", "following_url": "https://api.github.com/users/hongyi-zhao/following{/other_user}", "gists_url": "https://api.github.com/users/hongyi-zhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/hongyi-zhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hongyi-zhao/subscriptions", "organizations_url": "https://api.github.com/users/hongyi-zhao/orgs", "repos_url": "https://api.github.com/users/hongyi-zhao/repos", "events_url": "https://api.github.com/users/hongyi-zhao/events{/privacy}", "received_events_url": "https://api.github.com/users/hongyi-zhao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @hongyi-zhao, \r\n\r\nThe issue is arising because the checkpoint `\"EleutherAI/gpt-neo-2.7B\"` is for the [GPT Neo](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/gpt_neo), which has architectures for the text generation -- [GPTNeoForCausalLM](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/gpt_neo#transformers.GPTNeoForCausalLM) -- and sequence classification -- [GPTNeoForSequenceClassification](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/gpt_neo#transformers.GPTNeoForSequenceClassification) -- tasks. The pipeline in the shared snippet is `\"text2text-generation\"` for which this model doesn't have a compatible class. ", "I am very confused about the names of these models and the matching relationships between them, so I get straight to the point where I am most concerned: will this project help me to use the latest GPT-4 or their other future newest models?", "There are many cutting-edge models available and that continue to be added to transformers library. Unfortunately GPT-4 isn't one of them, as OpenAI hasn't open sourced the weights. \r\n\r\nThe models can be explored on [the hub](https://huggingface.co/). For example [here are the models](https://huggingface.co/models?pipeline_tag=text2text-generation&sort=downloads) for the selected `text2text-generation` pipeline in the example above. There's more information about the [Text2TextGenerationPipeline in the docs](https://huggingface.co/docs/transformers/v4.27.2/en/main_classes/pipelines#transformers.Text2TextGenerationPipeline). ", "Thank you very much for your comments and explanations.", "for anyone facing this issue again:\r\nI had this error when environment had the new PyTorch v2 . \r\nUninstalling torch `v2.0` and installing torch `v1.11` solved the issue." ]
1,679
1,682
1,679
NONE
null
### Feature request Want to run "EleutherAI/gpt-neo-2.7B" ### Motivation Want to run "EleutherAI/gpt-neo-2.7B" ### Your contribution ```python (datasci) werner@X10DAi:~$ ipython Python 3.11.1 (main, Dec 22 2022, 17:06:07) [GCC 12.2.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.7.0 -- An enhanced Interactive Python. Type '?' for help. ...: ...: # Load the ChatGPT-4 pipeline ...: chatbot = pipeline("text2text-generation", model="EleutherAI/gpt-neo- ...: 2.7B") ...: ...: # Define a function to interact with the chatbot ...: def chat(): ...: while True: ...: # Get user input ...: user_input = input("You: ") ...: ...: # Exit if user enters "exit" ...: if user_input.lower() == "exit": ...: break ...: ...: # Generate response from chatbot ...: response = chatbot(user_input, max_length=50)[0]["generated_t ...: ext"] ...: ...: # Print response ...: print("Chatbot:", response) ...: ...: # Call the chat function to start the chatbot ...: chat() 2023-03-27 08:24:43.589129: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-03-27 08:24:43.636446: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used. 2023-03-27 08:24:43.637021: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-03-27 08:24:44.519506: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[1], line 4 1 from transformers import pipeline 3 # Load the ChatGPT-4 pipeline ----> 4 chatbot = pipeline("text2text-generation", model="EleutherAI/gpt-neo-2.7B") 6 # Define a function to interact with the chatbot 7 def chat(): File ~/.pyenv/versions/3.11.1/envs/datasci/lib/python3.11/site-packages/transformers/pipelines/__init__.py:776, in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs) 772 # Infer the framework from the model 773 # Forced if framework already defined, inferred if it's None 774 # Will load the correct model if possible 775 model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]} --> 776 framework, model = infer_framework_load_model( 777 model, 778 model_classes=model_classes, 779 config=config, 780 framework=framework, 781 task=task, 782 **hub_kwargs, 783 **model_kwargs, 784 ) 786 model_config = model.config 787 hub_kwargs["_commit_hash"] = model.config._commit_hash File ~/.pyenv/versions/3.11.1/envs/datasci/lib/python3.11/site-packages/transformers/pipelines/base.py:271, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs) 268 continue 270 if isinstance(model, str): --> 271 raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.") 273 framework = "tf" if "keras.engine.training.Model" in str(inspect.getmro(model.__class__)) else "pt" 274 return framework, model ValueError: Could not load model EleutherAI/gpt-neo-2.7B with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSeq2SeqLM'>,). ```
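As the comment above notes, `EleutherAI/gpt-neo-2.7B` is a causal (decoder-only) language model, so it maps to the `text-generation` pipeline rather than `text2text-generation`. A minimal sketch of loading it with a compatible task (the prompt and generation settings are illustrative):

```python
from transformers import pipeline

# GPT-Neo is a decoder-only causal LM, so use the "text-generation" task;
# "text2text-generation" only covers encoder-decoder models such as T5/BART.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")

output = generator("Hello, how are you today?", max_length=50, do_sample=True)
print(output[0]["generated_text"])
```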
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22384/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22383
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22383/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22383/comments
https://api.github.com/repos/huggingface/transformers/issues/22383/events
https://github.com/huggingface/transformers/pull/22383
1,640,981,314
PR_kwDOCUB6oc5M6fbC
22,383
TensorFlow: additional missing `cmake` dependencies in CI
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
MEMBER
null
# What does this PR do? Adds `cmake` to all CI runs that depend on `transformers[tensorflow]` where it was previously missing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22383/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22383", "html_url": "https://github.com/huggingface/transformers/pull/22383", "diff_url": "https://github.com/huggingface/transformers/pull/22383.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22383.patch", "merged_at": 1679923257000 }
https://api.github.com/repos/huggingface/transformers/issues/22382
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22382/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22382/comments
https://api.github.com/repos/huggingface/transformers/issues/22382/events
https://github.com/huggingface/transformers/pull/22382
1,640,972,018
PR_kwDOCUB6oc5M6dnC
22,382
Generate: support for left-padding on GPTNeoX and Llama
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The failing CI is fixed by #22383 :)", "@ArthurZucker @sgugger woopsie, I forgot that it affected the weight loading code -- I come from a place where weight names have to be specified 👼 Reverted (`self.llama` is `self.model` again)!", "It appears as if this may have broken FSDP. For example, as specified in the Alpaca repo, finetuning with `--fsdp \"full_sh\r\nard auto_wrap\" --fsdp_transformer_layer_cls_to_wrap LlamaDecoderLayer` worked before this commit, but after it gives the error such as:\r\n\r\n```python\r\nFile \"/home/fsuser/.local/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 313, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n File \"/home/fsuser/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'position_ids'\r\n```\r\n\r\nReverting the commit fixes it, although perhaps the problem is with `accelerate` not supporting `position_ids`? cc: @ArthurZucker ", "@jquesnelle can you paste the full stack trace? It would allow us to find the root cause :D (maybe, as you mention, the problem is in accelerate... or maybe it comes from the Alpaca repo!)", "I'm seeing a pretty significant performance hit on RedPajama-7b-chat that I think is due to this change. I ran the PyTorch profiler and all of the `repeat` operators in `apply_rotary_pos_emb` are expensive and run mostly on CPU. Reverting to transformers 4.27.x resolves the performance issue.", "You should try the `main` branch, #22785 removed the repeat solving this" ]
1,679
1,688
1,679
MEMBER
null
# What does this PR do? As the title indicates, this adds left-padding support for GPTNeoX and Llama. It adds the `position_ids` input, propagates it all the way to the position embedding, and gathers the position embeddings given the values in `position_ids`. All slow tests are now passing in both models, including the newly added left-padding support test and the GPTNeoX integration test. It also makes a few changes to Llama to make it more similar to other models 🤗
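A minimal sketch of what this enables, left-padded batched generation with a GPTNeoX-family checkpoint (the checkpoint name and prompts are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Left padding keeps the real tokens at the end of each row, so generation
# continues from the last prompt token; position_ids now account for the pads.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # no pad token is defined by default
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")

prompts = ["Hello, my name is", "The capital of France is"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```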
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22382/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22382", "html_url": "https://github.com/huggingface/transformers/pull/22382", "diff_url": "https://github.com/huggingface/transformers/pull/22382.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22382.patch", "merged_at": 1679928504000 }
https://api.github.com/repos/huggingface/transformers/issues/22381
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22381/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22381/comments
https://api.github.com/repos/huggingface/transformers/issues/22381/events
https://github.com/huggingface/transformers/pull/22381
1,640,871,035
PR_kwDOCUB6oc5M6KAt
22,381
Changed world_size() to get_world_size() bugfix
{ "login": "Charlie-Bell", "id": 103143406, "node_id": "U_kgDOBiXX7g", "avatar_url": "https://avatars.githubusercontent.com/u/103143406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Charlie-Bell", "html_url": "https://github.com/Charlie-Bell", "followers_url": "https://api.github.com/users/Charlie-Bell/followers", "following_url": "https://api.github.com/users/Charlie-Bell/following{/other_user}", "gists_url": "https://api.github.com/users/Charlie-Bell/gists{/gist_id}", "starred_url": "https://api.github.com/users/Charlie-Bell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Charlie-Bell/subscriptions", "organizations_url": "https://api.github.com/users/Charlie-Bell/orgs", "repos_url": "https://api.github.com/users/Charlie-Bell/repos", "events_url": "https://api.github.com/users/Charlie-Bell/events{/privacy}", "received_events_url": "https://api.github.com/users/Charlie-Bell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "BTW, there is a CI error due to this branch being created from an older version of `main` -- you should rebase with `main` to make our CI green ", "> BTW, there is a CI error due to this branch being created from an older version of `main` -- you should rebase with `main` to make our CI green\r\n\r\nFunny, GitHub is telling me that \"This branch is 1 commit ahead of huggingface:main.\" and also tells me the fork is already synced when I try syncing.\r\nAlso when I fetch upstream and rebase as in the contribution guidlines I am told \"Current branch changed-world-size-to-get-world-size-in-generation-utils is up to date.\"\r\n\r\nMaybe I missed something, but it seems the only difference in codebase is the 1 line change. Maybe it's worth it to re-run the ci/circleci: tests_torch_and_tf?", "@Charlie-Bell my apologies, there is indeed a problem in `main` I've found after writing the comment above! #22383 will fix it -- apologies for the confusion 🙏 " ]
1,679
1,679
1,679
CONTRIBUTOR
null
Edited one line in src/transformers/generation/utils.py. Changed dist.….world_size() to dist.get_world_size(), since world_size() doesn't exist in torch.distributed. # What does this PR do? Fixes #22375: PyTorch 2, generation/utils.py, 'torch.distributed' has no attribute 'world_size' https://github.com/huggingface/transformers/issues/22375 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Library: - generate: @gante
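For reference, a minimal sketch of the corrected call, guarded the way generation code typically checks for a distributed setup:

```python
import torch.distributed as dist

# torch.distributed exposes get_world_size(), not world_size()
if dist.is_available() and dist.is_initialized():
    world_size = dist.get_world_size()
else:
    world_size = 1
print(world_size)
```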
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22381/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22381", "html_url": "https://github.com/huggingface/transformers/pull/22381", "diff_url": "https://github.com/huggingface/transformers/pull/22381.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22381.patch", "merged_at": 1679923466000 }
https://api.github.com/repos/huggingface/transformers/issues/22380
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22380/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22380/comments
https://api.github.com/repos/huggingface/transformers/issues/22380/events
https://github.com/huggingface/transformers/pull/22380
1,640,308,588
PR_kwDOCUB6oc5M4cGm
22,380
Bump tensorflow from 2.8.1 to 2.11.1 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it." ]
1,679
1,679
1,679
CONTRIBUTOR
null
Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.8.1 to 2.11.1. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/releases">tensorflow's releases</a>.</em></p> <blockquote> <h2>TensorFlow 2.11.1</h2> <h1>Release 2.11.1</h1> <p><strong>Note</strong>: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.</p> <ul> <li>Security vulnerability fixes will no longer be patched to this Tensorflow version. The latest Tensorflow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch security vulnerabilities yourself <a href="https://github.com/tensorflow/tensorflow#patching-guidelines">steps</a>. You can refer to the <a href="https://github.com/tensorflow/tensorflow/releases">release notes</a> of the latest Tensorflow version for a list of newly fixed vulnerabilities. If you have any questions, please create a GitHub issue to let us know.</li> </ul> <p>This release also introduces several vulnerability fixes:</p> <ul> <li>Fixes an FPE in TFLite in conv kernel <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-27579">CVE-2023-27579</a></li> <li>Fixes a double free in Fractional(Max/Avg)Pool <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25801">CVE-2023-25801</a></li> <li>Fixes a null dereference on ParallelConcat with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25676">CVE-2023-25676</a></li> <li>Fixes a segfault in Bincount with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25675">CVE-2023-25675</a></li> <li>Fixes an NPE in RandomShuffle with XLA enable <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25674">CVE-2023-25674</a></li> <li>Fixes an FPE in TensorListSplit with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25673">CVE-2023-25673</a></li> <li>Fixes segmentation fault in tfg-translate <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25671">CVE-2023-25671</a></li> <li>Fixes an NPE in QuantizedMatMulWithBiasAndDequantize <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25670">CVE-2023-25670</a></li> <li>Fixes an FPE in AvgPoolGrad with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25669">CVE-2023-25669</a></li> <li>Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25668">CVE-2023-25668</a></li> <li>Fixes a segfault when opening multiframe gif <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25667">CVE-2023-25667</a></li> <li>Fixes an NPE in SparseSparseMaximum <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25665">CVE-2023-25665</a></li> <li>Fixes an FPE in AudioSpectrogram <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25666">CVE-2023-25666</a></li> <li>Fixes a heap-buffer-overflow in AvgPoolGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25664">CVE-2023-25664</a></li> <li>Fixes a NPE in TensorArrayConcatV2 <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25663">CVE-2023-25663</a></li> <li>Fixes a Integer overflow in EditDistance <a 
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25662">CVE-2023-25662</a></li> <li>Fixes a Seg fault in <code>tf.raw_ops.Print</code> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25660">CVE-2023-25660</a></li> <li>Fixes a OOB read in DynamicStitch <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25659">CVE-2023-25659</a></li> <li>Fixes a OOB Read in GRUBlockCellGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25658">CVE-2023-25658</a></li> </ul> <h2>TensorFlow 2.11.0</h2> <h1>Release 2.11.0</h1> <h2>Breaking Changes</h2> <ul> <li> <p>The <code>tf.keras.optimizers.Optimizer</code> base class now points to the new Keras optimizer, while the old optimizers have been moved to the <code>tf.keras.optimizers.legacy</code> namespace.</p> <p>If you find your workflow failing due to this change, you may be facing one of the following issues:</p> <ul> <li><strong>Checkpoint loading failure.</strong> The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to <code>tf.keras.optimizer.legacy.XXX</code> (e.g. <code>tf.keras.optimizer.legacy.Adam</code>).</li> <li><strong>TF1 compatibility.</strong> The new optimizer, <code>tf.keras.optimizers.Optimizer</code>, does not support TF1 any more, so please use the legacy optimizer <code>tf.keras.optimizer.legacy.XXX</code>. We highly recommend <a href="https://www.tensorflow.org/guide/migrate">migrating your workflow to TF2</a> for stable support and new features.</li> <li><strong>Old optimizer API not found.</strong> The new optimizer, <code>tf.keras.optimizers.Optimizer</code>, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.</li> <li><strong>Learning rate schedule access.</strong> When using a <code>tf.keras.optimizers.schedules.LearningRateSchedule</code>, the new optimizer's <code>learning_rate</code> property returns the current learning rate value instead of a <code>LearningRateSchedule</code> object as before. If you need to access the <code>LearningRateSchedule</code> object, please use <code>optimizer._learning_rate</code>.</li> <li><strong>If you implemented a custom optimizer based on the old optimizer.</strong> Please set your optimizer to subclass <code>tf.keras.optimizer.legacy.XXX</code>. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the <a href="https://github.com/keras-team/keras/issues">Keras GitHub repo</a>.</li> <li><strong>Errors, such as <code>Cannot recognize variable...</code>.</strong> The new optimizer requires all optimizer variables to be created at the first <code>apply_gradients()</code> or <code>minimize()</code> call. 
If your workflow calls the optimizer to update different parts of the model in multiple stages, please call <code>optimizer.build(model.trainable_variables)</code> before the training loop.</li> <li><strong>Timeout or performance loss.</strong> We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.</li> </ul> <p>The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, <code>tf.keras.optimizers.Adafactor</code>) will only be implemented based on the new <code>tf.keras.optimizers.Optimizer</code> base class.</p> </li> <li> <p><code>tensorflow/python/keras</code> code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of <code>tensorflow.python.keras</code> and use the public API with <code>from tensorflow import keras</code> or <code>import tensorflow as tf; tf.keras</code>.</p> </li> </ul> <h2>Major Features and Improvements</h2> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md">tensorflow's changelog</a>.</em></p> <blockquote> <h1>Release 2.11.1</h1> <p><strong>Note</strong>: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.</p> <ul> <li>Security vulnerability fixes will no longer be patched to this Tensorflow version. The latest Tensorflow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch security vulnerabilities yourself <a href="https://github.com/tensorflow/tensorflow#patching-guidelines">steps</a>. You can refer to the <a href="https://github.com/tensorflow/tensorflow/releases">release notes</a> of the latest Tensorflow version for a list of newly fixed vulnerabilities. 
If you have any questions, please create a GitHub issue to let us know.</li> </ul> <p>This release also introduces several vulnerability fixes:</p> <ul> <li>Fixes an FPE in TFLite in conv kernel <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-27579">CVE-2023-27579</a></li> <li>Fixes a double free in Fractional(Max/Avg)Pool <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25801">CVE-2023-25801</a></li> <li>Fixes a null dereference on ParallelConcat with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25676">CVE-2023-25676</a></li> <li>Fixes a segfault in Bincount with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25675">CVE-2023-25675</a></li> <li>Fixes an NPE in RandomShuffle with XLA enable <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25674">CVE-2023-25674</a></li> <li>Fixes an FPE in TensorListSplit with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25673">CVE-2023-25673</a></li> <li>Fixes segmentation fault in tfg-translate <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25671">CVE-2023-25671</a></li> <li>Fixes an NPE in QuantizedMatMulWithBiasAndDequantize <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25670">CVE-2023-25670</a></li> <li>Fixes an FPE in AvgPoolGrad with XLA <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25669">CVE-2023-25669</a></li> <li>Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25668">CVE-2023-25668</a></li> <li>Fixes a segfault when opening multiframe gif <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25667">CVE-2023-25667</a></li> <li>Fixes an NPE in SparseSparseMaximum <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25665">CVE-2023-25665</a></li> <li>Fixes an FPE in AudioSpectrogram <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25666">CVE-2023-25666</a></li> <li>Fixes a heap-buffer-overflow in AvgPoolGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25664">CVE-2023-25664</a></li> <li>Fixes a NPE in TensorArrayConcatV2 <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25663">CVE-2023-25663</a></li> <li>Fixes a Integer overflow in EditDistance <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25662">CVE-2023-25662</a></li> <li>Fixes a Seg fault in <code>tf.raw_ops.Print</code> <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25660">CVE-2023-25660</a></li> <li>Fixes a OOB read in DynamicStitch <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25659">CVE-2023-25659</a></li> <li>Fixes a OOB Read in GRUBlockCellGrad <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-25658">CVE-2023-25658</a></li> </ul> <h1>Release 2.11.0</h1> <h2>Breaking Changes</h2> <ul> <li> <p><code>tf.keras.optimizers.Optimizer</code> now points to the new Keras optimizer, and old optimizers have moved to the <code>tf.keras.optimizers.legacy</code> namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:</p> <ul> <li><strong>Checkpoint loading failure.</strong> The new optimizer handles optimizer state differently from the old optimizer, which simplies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. 
If you want to keep using an old checkpoint, please change your optimizer to <code>tf.keras.optimizers.legacy.XXX</code> (e.g. <code>tf.keras.optimizers.legacy.Adam</code>).</li> <li><strong>TF1 compatibility.</strong> The new optimizer does not support TF1 any more, so please use the legacy optimizer <code>tf.keras.optimizer.legacy.XXX</code>. We highly recommend to migrate your workflow to TF2 for stable support and new features.</li> <li><strong>API not found.</strong> The new optimizer has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API</li> </ul> </li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/tensorflow/tensorflow/commit/a3e2c692c18649329c4210cf8df2487d2028e267"><code>a3e2c69</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60016">#60016</a> from tensorflow/fix-relnotes</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/13b85dcf966d0c94b2e5c21291be039db2dec7b9"><code>13b85dc</code></a> Fix release notes</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/48b18dbf1301f24be9f2f41189d318ce5398540a"><code>48b18db</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60014">#60014</a> from tensorflow/disable-test-that-ooms</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/eea48f50d6982879909bf8e0d0151bbce3f9bf4a"><code>eea48f5</code></a> Disable a test that results in OOM+segfault</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/a63258434247784605986cfc2b43cb3be846cf8a"><code>a632584</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/60000">#60000</a> from tensorflow/venkat-patch-3</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/93dea7a67df44bde557e580dfdcde5ba0a7a344d"><code>93dea7a</code></a> Update RELEASE.md</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/a2ba9f16f0154bf93f21132878b154238d89fad6"><code>a2ba9f1</code></a> Updating Release.md with Legal Language for Release Notes</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/fae41c76bdc760454b3e5c1d3af9b8d5a5c6c548"><code>fae41c7</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/59998">#59998</a> from tensorflow/fix-bad-cherrypick-again</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/2757416dcd4a2d00ea36512c2ffd347030c1196b"><code>2757416</code></a> Fix bad cherrypick</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/c78616f4b00125c8a563e10ce6b76bea8070bdd0"><code>c78616f</code></a> Merge pull request <a href="https://redirect.github.com/tensorflow/tensorflow/issues/59992">#59992</a> from tensorflow/fix-2.11-build</li> <li>Additional commits viewable in <a href="https://github.com/tensorflow/tensorflow/compare/v2.8.1...v2.11.1">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tensorflow&package-manager=pip&previous-version=2.8.1&new-version=2.11.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. 
You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22380/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22380/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22380", "html_url": "https://github.com/huggingface/transformers/pull/22380", "diff_url": "https://github.com/huggingface/transformers/pull/22380.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22380.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22379
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22379/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22379/comments
https://api.github.com/repos/huggingface/transformers/issues/22379/events
https://github.com/huggingface/transformers/issues/22379
1,640,278,762
I_kwDOCUB6oc5hxKrq
22,379
CLIP default download location is /root/.cache/..., not current working dir like other models
{ "login": "krahnikblis", "id": 84637076, "node_id": "MDQ6VXNlcjg0NjM3MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/84637076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krahnikblis", "html_url": "https://github.com/krahnikblis", "followers_url": "https://api.github.com/users/krahnikblis/followers", "following_url": "https://api.github.com/users/krahnikblis/following{/other_user}", "gists_url": "https://api.github.com/users/krahnikblis/gists{/gist_id}", "starred_url": "https://api.github.com/users/krahnikblis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krahnikblis/subscriptions", "organizations_url": "https://api.github.com/users/krahnikblis/orgs", "repos_url": "https://api.github.com/users/krahnikblis/repos", "events_url": "https://api.github.com/users/krahnikblis/events{/privacy}", "received_events_url": "https://api.github.com/users/krahnikblis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @krahnikblis, thanks for raising this issue! \r\n\r\nIn the transformers library, `from_pretrained` can be used to load a model from the hub and from a local file. When `from_pretrained(path)` is called, if `path` is a local folder, these weights are loaded. If it's a checkpoint on the hub e.g. `openai/clip-vit-large-patch14`, then the checkpoint is download to the cache directory, as you've correctly noticed. If `from_pretrained(path)` is called again, then the weights are loaded from the cache. This happens for all frameworks: PyTorch, TensorFlow and Flax. \r\n\r\nFor SD-Flax, am I correct in understanding this as the Stable Diffusion pipeline from the diffusers library? Could you share a more detailed snippet showing what exactly is being run? For the diffusers pipelines, if using the `pipeline.from_pretrained(model_weights)` API, then the same behaviour will happen (download to cache, can load from local) [as noted in the documentation](https://huggingface.co/docs/diffusers/using-diffusers/loading#loading-pipelines).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### System Info - `transformers` version: 4.27.3 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1+cu116 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.4 (cpu) - Jax version: 0.3.25 - JaxLib version: 0.3.25 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? flax: @sanchit-gandhi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python clip_model = jax.device_get(FlaxCLIPModel.from_pretrained('openai/clip-vit-large-patch14')) ``` downloads by default if not exists locally: `Downloading flax_model.msgpack: 100% 1.71G/1.71G [00:08<00:00, 210MB/s]` BUT, unlike all other models [that i'm using in the HF pipelines for SD-Flax], the file download location is far away from the working directory: `find / -iname 'flax_model.msgpack'` shows that the SD weights are where they should be, but CLIP's weights are off in some hidden, hashed directory: `/root/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/flax_model.msgpack` is this intended? if so why break from the pattern of other models that download to cwd? ### Expected behavior files would download to current working directory, e.g. something like `/content/openai/clip-vit-large-patch14/` and by extension, plugging in `_name_or_path` value of `'openai/clip-vit-large-patch14'` would be one-and-the-same to the file location as well as the hub's catalogue name (i.e. can i confidently put in a different path that i saved the weights to manually?)
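A minimal sketch of controlling where the weights end up, assuming the goal is to avoid the default `~/.cache/huggingface/hub` location (the paths below are illustrative):

```python
from transformers import FlaxCLIPModel

# Download into an explicit cache directory instead of the default hub cache.
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-large-patch14", cache_dir="./hf-cache")

# Or save the weights to a plain local folder and reload from that path later;
# from_pretrained accepts a local directory as well as a hub checkpoint name.
model.save_pretrained("./clip-vit-large-patch14")
model = FlaxCLIPModel.from_pretrained("./clip-vit-large-patch14")
```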
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22379/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22378
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22378/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22378/comments
https://api.github.com/repos/huggingface/transformers/issues/22378/events
https://github.com/huggingface/transformers/pull/22378
1,640,260,258
PR_kwDOCUB6oc5M4TDU
22,378
[performance] ensure `causal_mask` is created directly on device
{ "login": "jeffra", "id": 645595, "node_id": "MDQ6VXNlcjY0NTU5NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/645595?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeffra", "html_url": "https://github.com/jeffra", "followers_url": "https://api.github.com/users/jeffra/followers", "following_url": "https://api.github.com/users/jeffra/following{/other_user}", "gists_url": "https://api.github.com/users/jeffra/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeffra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeffra/subscriptions", "organizations_url": "https://api.github.com/users/jeffra/orgs", "repos_url": "https://api.github.com/users/jeffra/repos", "events_url": "https://api.github.com/users/jeffra/events{/privacy}", "received_events_url": "https://api.github.com/users/jeffra/received_events", "type": "User", "site_admin": false }
[ { "id": 2690307185, "node_id": "MDU6TGFiZWwyNjkwMzA3MTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Performance", "name": "Performance", "color": "207F32", "default": false, "description": "" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @thomasw21 @NouamaneTazi since both of you are experts on this kind of things - to see if you have any general opinion and/or if you would like to review this PR too.", "@jeffra Would it possible for you (and/or @tjruwase and @tohtana) to provide your script that finds/measures/profiles the running time for this issue 🙏 . It would be super helpful for us to dive into internally too.", "> LGTM, thanks a lot for the fix! Note that the same modification needs to be applied to BART (since OPT copies from BART) in order for all quality checks to pass.\r\n\r\nFYI (@sgugger) : @stas00 mentioned on Slack \r\n\r\n> I tried to support Jeff to tell him to how make copies but he found that many copies are either not tagged properly or the copied functions were completely renamed and thus it's very difficult to make an automatedtransformers-wide fix\r\n\r\nand in this PR description, the author(s)\r\n\r\n> One major complication we see in accepting this PR is that the two functions being modified are copied across lots of different models and the make fix-copies script doesn't seem to address all of them correctly across both _make_causal_mask and _prepare_decoder_attention_mask\r\n\r\nIt's likely that they expect us to help on this part. I can help (I was waiting for the approval for the fix in `OPT` which is done now.)", "I think just copying the same fix to BART and then applying `make fix-copies` is simple enough for this PR. Dealing with functions that are not copies or are named differently can indeed be done in followup PRs.", "Ok, i've updated the BART implementation and attempted to get `make fix-copies` to work for me but I think I might be doing something wrong. Some of the original issues I saw are now fixed on other models (e.g., https://github.com/huggingface/transformers/pull/22382 adds a `# Copied from` tag for llama). However, I am still seeing issues i think coming from the fix-up scripts getting confused with the function signature change of `_make_causal_mask`. Also, I added the `# Copied from` tag into opt for `_make_causal_mask` which was part of my previous issue i think.\r\n\r\nCan someone try `make fix-copies` on their side with this? 
You should be able to push to my branch.\r\n\r\nFor example, here's the diff of `src/transformers/models/xglm/modeling_xglm.py` after applying `make fix-copies` in this branch, it does not add `device` as an argument to `_make_causal_mask`:\r\n\r\n```diff\r\ndiff --git a/src/transformers/models/xglm/modeling_xglm.py b/src/transformers/models/xglm/modeling_xglm.py\r\nindex 8a1955793..59851bd85 100755\r\n--- a/src/transformers/models/xglm/modeling_xglm.py\r\n+++ b/src/transformers/models/xglm/modeling_xglm.py\r\n@@ -119,13 +119,13 @@ def _make_causal_mask(input_ids_shape: torch.Size, dtype: torch.dtype, past_key_\r\n Make causal mask used for bi-directional self-attention.\r\n \"\"\"\r\n bsz, tgt_len = input_ids_shape\r\n- mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))\r\n- mask_cond = torch.arange(mask.size(-1))\r\n+ mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)\r\n+ mask_cond = torch.arange(mask.size(-1), device=device)\r\n mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)\r\n mask = mask.to(dtype)\r\n\r\n if past_key_values_length > 0:\r\n- mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype), mask], dim=-1)\r\n+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)\r\n return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)\r\n```\r\n\r\nIt modifies all of these models, so ideally don't want to edit these manually :)\r\n\r\n```\r\n modified: src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py\r\n modified: src/transformers/models/biogpt/modeling_biogpt.py\r\n modified: src/transformers/models/blenderbot/modeling_blenderbot.py\r\n modified: src/transformers/models/blenderbot_small/modeling_blenderbot_small.py\r\n modified: src/transformers/models/informer/modeling_informer.py\r\n modified: src/transformers/models/llama/modeling_llama.py\r\n modified: src/transformers/models/m2m_100/modeling_m2m_100.py\r\n modified: src/transformers/models/marian/modeling_marian.py\r\n modified: src/transformers/models/mbart/modeling_mbart.py\r\n modified: src/transformers/models/mvp/modeling_mvp.py\r\n modified: src/transformers/models/nllb_moe/modeling_nllb_moe.py\r\n modified: src/transformers/models/pegasus/modeling_pegasus.py\r\n modified: src/transformers/models/pegasus_x/modeling_pegasus_x.py\r\n modified: src/transformers/models/plbart/modeling_plbart.py\r\n modified: src/transformers/models/speech_to_text/modeling_speech_to_text.py\r\n modified: src/transformers/models/speech_to_text_2/modeling_speech_to_text_2.py\r\n modified: src/transformers/models/speecht5/modeling_speecht5.py\r\n modified: src/transformers/models/time_series_transformer/modeling_time_series_transformer.py\r\n modified: src/transformers/models/trocr/modeling_trocr.py\r\n modified: src/transformers/models/whisper/modeling_whisper.py\r\n modified: src/transformers/models/xglm/modeling_xglm.py\r\n```", "Ah yes, `make fix-copies` does not change the signature of the function so that is indeed something to edit manually. If it's too much work I can try to push this to your branch tomorrow.", "> Ah yes, `make fix-copies` does not change the signature of the function so that is indeed something to edit manually. If it's too much work I can try to push this to your branch tomorrow.\r\n\r\nSounds good, I might have some time this afternoon for this. 
Otherwise feel free to do it :) Just wasn't sure if this was an expected issue with the copy scripts or not.", "Okay all the models should be fixed now, `make fixup` is clear on my local tests." ]
1,679
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? @tjruwase and @tohtana discovered that causal_mask is currently being created on CPU then moved to GPU during the forward pass of OPT (and we think other models). This appears to be causing a significant performance degradation on multi-gpu environments due to parallel host to device copies going on. It's not 100% clear to us why this is so bad but here is what we observe before and after this patch: Before this patch w. OPT-125m on x8 A100s: <img width="649" alt="image" src="https://user-images.githubusercontent.com/645595/227668447-bf6840dd-bbc4-4520-8a9f-33f046eeb4c2.png"> After the patch: <img width="628" alt="image" src="https://user-images.githubusercontent.com/645595/227668475-6ed2f1ca-d18a-4776-862d-4be499f62f39.png"> These numbers were gathered from a modified version of https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py but turning on `wall_clock_breakdown: true` in our deepspeed config. One major complication we see in accepting this PR is that the two functions being modified are copied across lots of different models and the `make fix-copies` script doesn't seem to address all of them correctly across both `_make_causal_mask` and `_prepare_decoder_attention_mask` ## Who can review? Tagging @sgugger and @stas00 to help triage to the right people
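A minimal sketch of the change described here, mirroring the per-model diff shown in the comments above, so the mask tensors are allocated directly on the target device instead of being built on CPU and copied over each forward pass:

```python
import torch

def _make_causal_mask(input_ids_shape, dtype, device, past_key_values_length=0):
    # Build the causal mask on `device` to avoid a host-to-device copy per forward pass.
    bsz, tgt_len = input_ids_shape
    mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
    mask_cond = torch.arange(mask.size(-1), device=device)
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)
    if past_key_values_length > 0:
        mask = torch.cat(
            [torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1
        )
    return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
```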
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22378/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22378/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22378", "html_url": "https://github.com/huggingface/transformers/pull/22378", "diff_url": "https://github.com/huggingface/transformers/pull/22378.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22378.patch", "merged_at": 1680009423000 }
https://api.github.com/repos/huggingface/transformers/issues/22377
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22377/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22377/comments
https://api.github.com/repos/huggingface/transformers/issues/22377/events
https://github.com/huggingface/transformers/pull/22377
1,640,187,430
PR_kwDOCUB6oc5M4CeL
22,377
load_in_8bit now respects 'balanced' device maps in multi-gpu environments
{ "login": "kooshi", "id": 1934337, "node_id": "MDQ6VXNlcjE5MzQzMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1934337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kooshi", "html_url": "https://github.com/kooshi", "followers_url": "https://api.github.com/users/kooshi/followers", "following_url": "https://api.github.com/users/kooshi/following{/other_user}", "gists_url": "https://api.github.com/users/kooshi/gists{/gist_id}", "starred_url": "https://api.github.com/users/kooshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kooshi/subscriptions", "organizations_url": "https://api.github.com/users/kooshi/orgs", "repos_url": "https://api.github.com/users/kooshi/repos", "events_url": "https://api.github.com/users/kooshi/events{/privacy}", "received_events_url": "https://api.github.com/users/kooshi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm not sure if this is caused by your work but I met this problem:\r\n\r\nI'm training LLaMA-13B with PEFT, using lora + modules_to_save=['model.embed_tokens', 'lm_head']\r\nAnd it can run training normally\r\nBut the finally model file doesn't contain the lm_head part, only lora + embed_tokens.\r\nAnd When I use DDP or single card. It can save all the thing normally.", "(I'm using your fork + your modified version alpaca lora code)", "@KohakuBlueleaf that's a bit disingenuous, as you've changed quite a few other things 😉\r\n\r\nI think I was able to reproduce what you were talking about on [your repo](https://github.com/KohakuBlueleaf/guanaco-lora) though. Do you mean that when you run export_hf_checkpoint.py, `head_changed` shows `False`?\r\n\r\nThat's what I am seeing. What's strange, is that the weights *are* in the lora, they're just named \"base_model.model.lm_head.0.weight\" instead of \"base_model.model.lm_head.weight\"\r\n\r\nIf you add `adapters_weights[\"base_model.model.lm_head.weight\"] = adapters_weights[\"base_model.model.lm_head.0.weight\"]` to peft_model.py right after the lora is loaded, but before it is merged with the base_model, then you can get `head_changed: True` \r\n\r\nThis might be from my change... as the lm_head is a special case when loading in 8bit, but I'm not sure. I see the same result for *both* single or multi-gpu. I've run out of time today to investigate though, so I'll have to dig in more later, possibly tonight.", "> @KohakuBlueleaf that's a bit disingenuous, as you've changed quite a few other things 😉\r\n> \r\n> I think I was able to reproduce what you were talking about on [your repo](https://github.com/KohakuBlueleaf/guanaco-lora) though. Do you mean that when you run export_hf_checkpoint.py, `head_changed` shows `False`?\r\n> \r\n> That's what I am seeing. What's strange, is that the weights _are_ in the lora, they're just named \"base_model.model.lm_head.0.weight\" instead of \"base_model.model.lm_head.weight\"\r\n> \r\n> If you add `adapters_weights[\"base_model.model.lm_head.weight\"] = adapters_weights[\"base_model.model.lm_head.0.weight\"]` to peft_model.py right after the lora is loaded, but before it is merged with the base_model, then you can get `head_changed: True`\r\n> \r\n> This might be from my change... as the lm_head is a special case when loading in 8bit, but I'm not sure. I see the same result for _both_ single or multi-gpu. I've run out of time today to investigate though, so I'll have to dig in more later, possibly tonight.\r\n\r\nYeah I also figured it out\r\nBut thx for your reply!" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Fixes `max_memory` generation for 'auto' 'balanced' and 'balanced_low_0' `device_map`s for models being loaded in 8bit Fixes # (N/A) no issues found, but one guy made a [comment](https://github.com/TimDettmers/bitsandbytes/issues/177#issuecomment-1481609654) about it in the bnb issues, and it caused confusion and workarounds elsewhere. The problem was easily worked around by manually passing a device map or max memory config. Before this change, the following code would attempt to load the whole model on the first GPU in a two gpu setup, potentially causing OOM errors. After the change, it loads it evenly across GPUs, as intended. ```python model = AutoModelForCausalLM.from_pretrained( checkpoint, load_in_8bit=True, device_map="auto", ) ``` Additionally, I removed 'pipeline' from my earlier comment in a previous PR, as true Pipeline Parallelism would require some more non-trivial changes to the model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
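A minimal sketch of the manual workaround mentioned above, passing an explicit `max_memory` map so the 8-bit weights are spread across two GPUs (the checkpoint name and memory limits are illustrative; `bitsandbytes` must be installed):

```python
from transformers import AutoModelForCausalLM

# An explicit per-GPU memory budget forces a balanced placement even on
# versions that ignored the balanced device maps for 8-bit loading.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    load_in_8bit=True,
    device_map="auto",
    max_memory={0: "10GiB", 1: "10GiB"},
)
```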
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22377/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22377/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22377", "html_url": "https://github.com/huggingface/transformers/pull/22377", "diff_url": "https://github.com/huggingface/transformers/pull/22377.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22377.patch", "merged_at": 1679927693000 }
https://api.github.com/repos/huggingface/transformers/issues/22376
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22376/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22376/comments
https://api.github.com/repos/huggingface/transformers/issues/22376/events
https://github.com/huggingface/transformers/issues/22376
1,640,160,941
I_kwDOCUB6oc5hwt6t
22,376
AttributeError: 'Tensor' object has no attribute 'tile'
{ "login": "nikita-stha", "id": 66687885, "node_id": "MDQ6VXNlcjY2Njg3ODg1", "avatar_url": "https://avatars.githubusercontent.com/u/66687885?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikita-stha", "html_url": "https://github.com/nikita-stha", "followers_url": "https://api.github.com/users/nikita-stha/followers", "following_url": "https://api.github.com/users/nikita-stha/following{/other_user}", "gists_url": "https://api.github.com/users/nikita-stha/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikita-stha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikita-stha/subscriptions", "organizations_url": "https://api.github.com/users/nikita-stha/orgs", "repos_url": "https://api.github.com/users/nikita-stha/repos", "events_url": "https://api.github.com/users/nikita-stha/events{/privacy}", "received_events_url": "https://api.github.com/users/nikita-stha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "PyTorch 1.7 is not supported anymore, we only ensure support for PyTorch >= 1.9 Could you try updating your Pytorch install and see if it fixes the issue?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### System Info - `transformers` version: 4.27.3 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.7 (gpu) - Jax version: 0.4.6 - JaxLib version: 0.4.6 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Try to run the code below: from transformers import GPTNeoForCausalLM, GPT2Tokenizer model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") prompt = ( "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " "previously unexplored valley, in the Andes Mountains. Even more surprising to the " "researchers was the fact that the unicorns spoke perfect English." ) input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] ### Expected behavior I would expect model to generate the predicted next sentence/text.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22376/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22375
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22375/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22375/comments
https://api.github.com/repos/huggingface/transformers/issues/22375/events
https://github.com/huggingface/transformers/issues/22375
1,640,145,881
I_kwDOCUB6oc5hwqPZ
22,375
Pytorch 2 generation/utils.py , 'torch.distributed' has no attribute 'world_size'
{ "login": "djaym7", "id": 12378820, "node_id": "MDQ6VXNlcjEyMzc4ODIw", "avatar_url": "https://avatars.githubusercontent.com/u/12378820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/djaym7", "html_url": "https://github.com/djaym7", "followers_url": "https://api.github.com/users/djaym7/followers", "following_url": "https://api.github.com/users/djaym7/following{/other_user}", "gists_url": "https://api.github.com/users/djaym7/gists{/gist_id}", "starred_url": "https://api.github.com/users/djaym7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/djaym7/subscriptions", "organizations_url": "https://api.github.com/users/djaym7/orgs", "repos_url": "https://api.github.com/users/djaym7/repos", "events_url": "https://api.github.com/users/djaym7/events{/privacy}", "received_events_url": "https://api.github.com/users/djaym7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like get_world_size() is supposed to be used in pytorch.distributed. [I will make a PR.](https://github.com/huggingface/transformers/pull/22381)", "Should be fixed now that the PR above has been merged." ]
1,679
1,679
1,679
NONE
null
### System Info transformers 4.28.0.dev0 pytorch 2 cuda 117 File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1196, in generate if is_deepspeed_zero3_enabled() and dist.world_size() > 1: AttributeError: module 'torch.distributed' has no attribute 'world_size' https://github.com/huggingface/transformers-bloom-inference/blob/7bea3526d8270b4aeeefecc57d7d7d638e2bbe0e/bloom-inference-scripts/bloom-ds-zero-inference.py ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Pytorch 2 and generate with deepspeed stage 3. https://github.com/huggingface/transformers-bloom-inference/blob/7bea3526d8270b4aeeefecc57d7d7d638e2bbe0e/bloom-inference-scripts/bloom-ds-zero-inference.py ### Expected behavior No error
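As the first comment points out, the attribute that actually exists is `torch.distributed.get_world_size()`. A sketch of what the corrected guard presumably looks like (this is not quoted from the merged PR #22381):

```python
import torch.distributed as dist
from transformers.deepspeed import is_deepspeed_zero3_enabled

# Sketch of the corrected check in generation/utils.py:
# dist.get_world_size() exists, dist.world_size() does not.
if is_deepspeed_zero3_enabled() and dist.get_world_size() > 1:
    ...
```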
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22375/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22375/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22374
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22374/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22374/comments
https://api.github.com/repos/huggingface/transformers/issues/22374/events
https://github.com/huggingface/transformers/pull/22374
1,640,016,543
PR_kwDOCUB6oc5M3cV3
22,374
Report safetensors version in transformers-cli env
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? This PR adds `safetensors` to the info reported by `transformers-cli env` and, in particular, adds a note when safetensors is ignored because the installed PyTorch version is too old (see #22370).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22374/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22374", "html_url": "https://github.com/huggingface/transformers/pull/22374", "diff_url": "https://github.com/huggingface/transformers/pull/22374.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22374.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22373
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22373/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22373/comments
https://api.github.com/repos/huggingface/transformers/issues/22373/events
https://github.com/huggingface/transformers/issues/22373
1,639,998,303
I_kwDOCUB6oc5hwGNf
22,373
llama model cannot run with accelerate setting
{ "login": "TeddLi", "id": 67747139, "node_id": "MDQ6VXNlcjY3NzQ3MTM5", "avatar_url": "https://avatars.githubusercontent.com/u/67747139?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TeddLi", "html_url": "https://github.com/TeddLi", "followers_url": "https://api.github.com/users/TeddLi/followers", "following_url": "https://api.github.com/users/TeddLi/following{/other_user}", "gists_url": "https://api.github.com/users/TeddLi/gists{/gist_id}", "starred_url": "https://api.github.com/users/TeddLi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TeddLi/subscriptions", "organizations_url": "https://api.github.com/users/TeddLi/orgs", "repos_url": "https://api.github.com/users/TeddLi/repos", "events_url": "https://api.github.com/users/TeddLi/events{/privacy}", "received_events_url": "https://api.github.com/users/TeddLi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please follow the template of the issues as there is nothing anyone can do to help with so little information.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### System Info transformers version 4.28.0.dev0 Error `loading file tokenizer_config.json loading weights file ./llama1/pytorch_model.bin Generate config GenerationConfig { "_from_model_config": true, "bos_token_id": 0, "eos_token_id": 1, "pad_token_id": 1, "transformers_version": "4.28.0.dev0" } [15:28:22] WARNING Sending process 854275 closing signal SIGTERM api.py:699 WARNING Sending process 854276 closing signal SIGTERM api.py:699 WARNING Sending process 854277 closing signal SIGTERM api.py:699 WARNING Sending process 854279 closing signal SIGTERM api.py:699 WARNING Sending process 854280 closing signal SIGTERM api.py:699 WARNING Sending process 854281 closing signal SIGTERM api.py:699 WARNING Sending process 854282 closing signal SIGTERM api.py:699 [15:28:25] ERROR failed (exitcode: -9) local_rank: 3 (pid: 854278) of binary: /usr/bin/python3 ` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction We tried to train on the Pile with an 8-GPU accelerate setting. ### Expected behavior I would expect the model to load successfully.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22373/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22372
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22372/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22372/comments
https://api.github.com/repos/huggingface/transformers/issues/22372/events
https://github.com/huggingface/transformers/issues/22372
1,639,989,623
I_kwDOCUB6oc5hwEF3
22,372
Add Restormer
{ "login": "tushdon2", "id": 76245823, "node_id": "MDQ6VXNlcjc2MjQ1ODIz", "avatar_url": "https://avatars.githubusercontent.com/u/76245823?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tushdon2", "html_url": "https://github.com/tushdon2", "followers_url": "https://api.github.com/users/tushdon2/followers", "following_url": "https://api.github.com/users/tushdon2/following{/other_user}", "gists_url": "https://api.github.com/users/tushdon2/gists{/gist_id}", "starred_url": "https://api.github.com/users/tushdon2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tushdon2/subscriptions", "organizations_url": "https://api.github.com/users/tushdon2/orgs", "repos_url": "https://api.github.com/users/tushdon2/repos", "events_url": "https://api.github.com/users/tushdon2/events{/privacy}", "received_events_url": "https://api.github.com/users/tushdon2/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @tushdon2, please let me know if and how can I contribute to this model." ]
1,679
1,680
null
NONE
null
### Model description **Restormer: Efficient Transformer for High-Resolution Image Restoration** was published in CVPR 2022, which introduced a new Vision Transformer based architecture for Image Restoration tasks like Deraining, Motion Deblurring, Defocus Deblurring and Denoising. It reduced the time complexity of Self Attention in Vision Transformers from O(n<sup>2</sup>) to O(n) by introducing **Multi-Dconv Head Transposed Attention**. It also introduced **Gated-Dconv Feed-Forward Network**. @manyana72 and I would like to add this model to Huggingface. cc: @NielsRogge ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation [Paper](https://arxiv.org/pdf/2111.09881.pdf), [Code Implementation](https://github.com/swz30/Restormer) and [pretrained model weights](https://github.com/swz30/Restormer/releases/tag/v1.0)
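To make the complexity claim concrete, below is a rough sketch of the channel-wise ("transposed") attention idea, written from the paper's description rather than taken from the official repository; the learnable temperature and the depth-wise convolutional projections are omitted:

```python
import torch
import torch.nn.functional as F

def transposed_attention(q, k, v, temperature=1.0):
    # q, k, v: (batch, heads, channels_per_head, height * width)
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # The attention map is (channels_per_head x channels_per_head), so the cost
    # grows linearly with the number of pixels instead of quadratically.
    attn = (q @ k.transpose(-2, -1)) * temperature
    attn = attn.softmax(dim=-1)
    return attn @ v  # (batch, heads, channels_per_head, height * width)
```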
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22372/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22371
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22371/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22371/comments
https://api.github.com/repos/huggingface/transformers/issues/22371/events
https://github.com/huggingface/transformers/issues/22371
1,639,980,312
I_kwDOCUB6oc5hwB0Y
22,371
Conv1D doesn't output token-wise results consistently.
{ "login": "wsjeon", "id": 20200538, "node_id": "MDQ6VXNlcjIwMjAwNTM4", "avatar_url": "https://avatars.githubusercontent.com/u/20200538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wsjeon", "html_url": "https://github.com/wsjeon", "followers_url": "https://api.github.com/users/wsjeon/followers", "following_url": "https://api.github.com/users/wsjeon/following{/other_user}", "gists_url": "https://api.github.com/users/wsjeon/gists{/gist_id}", "starred_url": "https://api.github.com/users/wsjeon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wsjeon/subscriptions", "organizations_url": "https://api.github.com/users/wsjeon/orgs", "repos_url": "https://api.github.com/users/wsjeon/repos", "events_url": "https://api.github.com/users/wsjeon/events{/privacy}", "received_events_url": "https://api.github.com/users/wsjeon/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "Hey! Wow that's interesting. \r\nTwo parts of answer:\r\n1. Very cool. We can use `torch.testing.assert_allclose` to checkout the max differences, and indeed I have the following outputs:\r\n```python \r\nIn [73]: torch.testing.assert_allclose(addmm(x1, b, w)[:10], addbmm(x2, b, w))\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\nCell In[73], line 1\r\n----> 1 torch.testing.assert_allclose(addmm(x1, b, w)[:10], addbmm(x2, b, w))\r\n\r\nFile /opt/conda/envs/py39/lib/python3.9/site-packages/torch/testing/_deprecated.py:32, in warn_deprecated.<locals>.outer_wrapper.<locals>.inner_wrapper(*args, **kwargs)\r\n 30 @functools.wraps(fn)\r\n 31 def inner_wrapper(*args: Any, **kwargs: Any) -> Any:\r\n---> 32 return_value = fn(*args, **kwargs)\r\n 33 tail = instructions(name, args, kwargs, return_value) if callable(instructions) else instructions\r\n 34 msg = (head + tail).strip()\r\n\r\nFile /opt/conda/envs/py39/lib/python3.9/site-packages/torch/testing/_deprecated.py:80, in assert_allclose(actual, expected, rtol, atol, equal_nan, msg)\r\n 77 if rtol is None and atol is None:\r\n 78 rtol, atol = _get_default_rtol_and_atol(actual, expected)\r\n---> 80 torch.testing.assert_close(\r\n 81 actual,\r\n 82 expected,\r\n 83 rtol=rtol,\r\n 84 atol=atol,\r\n 85 equal_nan=equal_nan,\r\n 86 check_device=True,\r\n 87 check_dtype=False,\r\n 88 check_stride=False,\r\n 89 msg=msg or None,\r\n 90 )\r\n\r\n [... skipping hidden 1 frame]\r\n\r\nFile /opt/conda/envs/py39/lib/python3.9/site-packages/torch/testing/_comparison.py:1093, in assert_equal(actual, expected, pair_types, sequence_types, mapping_types, msg, **options)\r\n 1090 return\r\n 1092 # TODO: compose all metas into one AssertionError\r\n-> 1093 raise error_metas[0].to_error(msg)\r\n\r\nAssertionError: Tensor-likes are not close!\r\n\r\nMismatched elements: 9 / 23040 (0.0%)\r\nGreatest absolute difference: 4.00543212890625e-05 at index (2, 952) (up to 1e-05 allowed)\r\nGreatest relative difference: 0.0080592538321523 at index (8, 1875) (up to 0.0001 allowed)\r\n``` \r\nSo the outputs match up to 1e-2, which is not that great. Your fix is indeed good in terms of precision as `torch.testing.assert_allclose(addbmm(x1, b, w)[:10], addbmm(x2, b, w))` is True. \r\n\r\n2. My concern is: is this faster or slower in terms of computation? Is `torch.addnm` more optimised (and requires less calls to different views) thus faster. Would the fix break Onnx tracing? And most importantly, is this backward compatible? \r\nIf it is indeed a fix, meaning that this will bring our logits closer to what they were from the original logits, we might consider this as a potential good change, but the other concerns are still there! \r\nThe problem is that GPT2 is an old model, it's very hard to change it (especially something as fundamental as the Conv). \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### System Info Hi, I recently observed from huggingface's GPT2 that (1) the output (logits y1, ..., yN) from using a sequence with N tokens (say x1, ..., xN) (2) the output (logits z1, ..., zM) from using the earlier part of the above sequence (say x1, ..., xM) are not perfectly matched (y1!=z1,..., yM!=zM) during inference (so when causal mask is applied). I tried to figure out why this happened and realized that this is related to how `Conv1D`'s `forward` module is implemented: https://github.com/huggingface/transformers/blob/main/src/transformers/pytorch_utils.py#L100-L104 Thing is, we internally use `addmm` (say b + [x1, ..., xN]*W), which doesn't give you consistent row-wise outputs (say b + [x1, ..., xM]*W) although they should be the same theoretically. I generated an example and proposed a way to resolve the issue below: ```python import torch torch.manual_seed(0) torch.cuda.manual_seed(0) input_dim = 786 feature_dim = 2304 x1 = torch.randn((1, 38, input_dim), device='cuda') # (B, N, Fi) where N is the number of tokens in a sequence. x2 = x1[:, :10] # (B, M, Fi) where M=10 is to gather the early M tokens from the sequence. b = torch.randn((feature_dim,), device='cuda') # biases w = torch.randn((input_dim, feature_dim), device='cuda') # weights def addmm(x, b, w): x = x.view(-1, x.size(-1)) return torch.addmm(b, x, w) def addbmm(x, b, w): # (B, N, Fi), (Fi, Fh), (Fh) batch_size, seq_len = x.size(0), x.size(1) # B, N x = x.view(batch_size * seq_len, 1, x.size(-1)) # (B * N, 1, Fi) # (1, Fi, Fh).expand ( (B * N, Fi, Fh) ) --> (B * N, Fi, Fh) w = w.unsqueeze(0).expand((batch_size * seq_len,) + w.size()) return torch.matmul(x, w).add(b).view(batch_size * seq_len, -1) # (B * N, -1) print("result (addmm):\n", addmm(x1, b, w)[:10] == addmm(x2, b, w)) print("result (addbmm):\n", addbmm(x1, b, w)[:10] == addbmm(x2, b, w)) ``` The 1st function `addmm` is the one from huggingface's `Conv1D`, and the 2nd function `addbmm` is what I implemented to avoid numerical error. For the printend outputs, we ideally have to get `True` values always, but this is not the case of `addmm`. ```bash result (addmm): tensor([[False, False, False, ..., False, True, True], [ True, True, False, ..., False, False, True], [False, False, False, ..., False, False, False], ..., [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], [False, False, True, ..., False, False, False]], device='cuda:0') result (addbmm): tensor([[True, True, True, ..., True, True, True], [True, True, True, ..., True, True, True], [True, True, True, ..., True, True, True], ..., [True, True, True, ..., True, True, True], [True, True, True, ..., True, True, True], [True, True, True, ..., True, True, True]], device='cuda:0') ``` Intuitively, I enforced batched matmul computation by explicitly creating a batch dimension for tensors, which leads to explicit row-wise computations and ends up with ideal results. Thus, I think `forward()` part of `Conv1D` (https://github.com/huggingface/transformers/blob/main/src/transformers/pytorch_utils.py#L100-L104) should be updated as ```python def forward(self, x): size_out = x.size()[:-1] + (self.nf,) x = x.view(x.size()[:-1].numel(), 1, x.size(-1)) weight = self.weight.unsqueeze(0).expand((x.size()[:-1].numel(),) + w.size()) x = torch.matmul(x, weight).add(self.bias) x = x.view(size_out) return x ``` ### Who can help? 
@ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I provided an example above. ### Expected behavior After fixing the bug, the earlier partial logit outputs shouldn't be affected by the future tokens.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22371/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22370
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22370/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22370/comments
https://api.github.com/repos/huggingface/transformers/issues/22370/events
https://github.com/huggingface/transformers/pull/22370
1,639,940,837
PR_kwDOCUB6oc5M3MEL
22,370
[safetensors] don't use in `torch<1.10`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I need a PR merged to test my fix for the torch and tf run which is broken on main (but the fix itself does not trigger any tests). This one seems a good candidate so merging now and will keep an eye on the test :-) ", "Shouldn't those be fixed in safetensors instead? ie here https://github.com/huggingface/safetensors/blob/5c1d366813e46c6f9f2c71aa8b89e0c916a92b2f/bindings/python/setup.py#L23 ?", "Can be both :)", "Also, I left torch version out on purpose at the time, as I wasn't sure about the support policy and wether or not supporting older version was worth the effort (since they require a lot more handling from safetensors itself)." ]
1,679
1,679
1,679
CONTRIBUTOR
null
`safetensors` only seems to work with pt>=1.10. This PR fixes this breakage: ``` python -c 'import sys; from transformers import AutoModel; AutoModel.from_pretrained(sys.argv[1])' "bigscience/bigscience-small-testing" Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 471, in from_pretrained return model_class.from_pretrained( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/modeling_utils.py", line 2424, in from_pretrained state_dict = load_state_dict(resolved_archive_file) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/modeling_utils.py", line 413, in load_state_dict return safe_load_file(checkpoint_file) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/safetensors/torch.py", line 101, in load_file result[k] = f.get_tensor(k) AttributeError: module 'torch' has no attribute 'frombuffer' ```
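A minimal sketch of the kind of version gate this implies (the exact condition and where it lives in `modeling_utils.py` may differ from the merged change):

```python
from packaging import version

import torch

# Hypothetical guard: only rely on safetensors when torch provides
# torch.frombuffer, which was added in torch 1.10.
parsed_torch = version.parse(version.parse(torch.__version__).base_version)
torch_supports_safetensors = parsed_torch >= version.parse("1.10")
```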
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22370/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22370/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22370", "html_url": "https://github.com/huggingface/transformers/pull/22370", "diff_url": "https://github.com/huggingface/transformers/pull/22370.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22370.patch", "merged_at": 1679689408000 }
https://api.github.com/repos/huggingface/transformers/issues/22369
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22369/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22369/comments
https://api.github.com/repos/huggingface/transformers/issues/22369/events
https://github.com/huggingface/transformers/issues/22369
1,639,924,102
I_kwDOCUB6oc5hv0GG
22,369
Make inheritance consistent for classes having a `generate` method
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante @ArthurZucker ", "As seen in Slack, let's see if there is interest from others before acting on it, especially as the `generate` method for other modalities than text is prone to evolve to support other use-cases.\r\n\r\nIf anyone stumbles upon this issue as they're blocked by the above, please comment below to let us know. Thanks!", "Thank you for raising the issue @fxmarty! \r\n\r\nI think this is an example of where improving the modularity on `.generate()` could benefit non-standard use cases. In particular for this issue, some models rewrite `.generate()` in the model class itself (`Whisper`, `BLIP`, `RAG`, ...) -- it could be avoided if we had some option to add pre- and post-processing steps to `.generate()`. I have an idea in the back of my mind, but I haven't put it into words.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
COLLABORATOR
null
### Feature request Hi, I'm creating this issue to see whether this may be blocking for other people or not. `GenerationMixin` does not inherit from `nn.Module`, while `WhisperForConditionalGeneration` does. Both now have a `generate` method. This issue is to highlight that PRs such as https://github.com/huggingface/transformers/pull/21252 may well break the workflow of users that expect `generate` to be defined in `GenerationMixin`, not inheriting from `nn.Module`. Such changes can either result in silent errors, or in errors due to the unexpected inheritance. For example, https://github.com/huggingface/transformers/pull/21252 makes it close to impossible to have a `TensorRTModelForSpeechSeq2Seq(GenerationMixin)` class that does not inherit from nn.Module, uses the Transformers `generate`, and is able to handle several architectures, which was possible before. I don't have a better solution to propose right now, as I understand that different models needing a different `generate` will be a more common need in the future. ### Motivation There is no reason to inherit from `nn.Module` to use `generate`. ### Your contribution /
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22369/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22368
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22368/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22368/comments
https://api.github.com/repos/huggingface/transformers/issues/22368/events
https://github.com/huggingface/transformers/pull/22368
1,639,884,248
PR_kwDOCUB6oc5M3ACe
22,368
[Trainer] add disclaimer that full_determinism is slow
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
Flags to users that `--full_determinism` shouldn't be used in production, as it's likely to worsen performance.
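For context, a hedged usage example of the flag being documented here; the output directory and seed are placeholders:

```python
from transformers import TrainingArguments

# Debugging-only setup: full determinism helps reproduce a run exactly,
# but is expected to slow training down, so avoid it in production.
args = TrainingArguments(
    output_dir="debug-run",   # placeholder
    full_determinism=True,
    seed=42,
)
```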
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22368/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22368", "html_url": "https://github.com/huggingface/transformers/pull/22368", "diff_url": "https://github.com/huggingface/transformers/pull/22368.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22368.patch", "merged_at": 1679687202000 }
https://api.github.com/repos/huggingface/transformers/issues/22367
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22367/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22367/comments
https://api.github.com/repos/huggingface/transformers/issues/22367/events
https://github.com/huggingface/transformers/pull/22367
1,639,880,698
PR_kwDOCUB6oc5M2_Py
22,367
Test fetch v2
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I guess it's ready for a review?", "No I haven't finished this PR yet.", "Hi @sgugger . Thank you a lot for working on this important task! I feel it's better for me to look this work in depth, and I tried to play with the test fetcher (on `main` and on this PR) to understand it better. \r\n\r\nHowever, the first thing I tried (by following some sentences you mentioned) makes me a somehow confused. Here is what I saw:\r\n\r\n- On the two branches `main` (or a new branch from it) and `test_fetch_v2`, do the following steps:\r\n - change the test file `tests/models/bert/test_modeling_bert.py` (simply adding some dummy line like `foo = 1`)\r\n - commit the change\r\n ```bash\r\n git add tests/models/bert/test_modeling_bert.py\r\n git commit -m \"dummy commit\"\r\n ```\r\n - run the test fetcher against the previous commit\r\n ```bash\r\n python utils/tests_fetcher.py --diff_with_last_commit\r\n ```\r\n- Now, the results:\r\n TL;DR: `test_modeling_bert.py` is not included by the new version of test fetcher. But I think it should be included.\r\n\r\n - on `main`\r\n (`tests/models/bert/test_modeling_bert.py` is in `TEST TO RUN` and in the file `test_list.txt`)\r\n ```\r\n ### DIFF ###\r\n\r\n ### MODIFIED FILES ###\r\n - tests/models/bert/test_modeling_bert.py\r\n\r\n ### IMPACTED FILES ###\r\n - tests/models/auto/test_modeling_auto.py\r\n - tests/models/auto/test_modeling_tf_auto.py\r\n - tests/models/bert/test_modeling_bert.py\r\n - tests/models/encoder_decoder/test_modeling_encoder_decoder.py\r\n - tests/models/speech_encoder_decoder/test_modeling_speech_encoder_decoder.py\r\n - tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py\r\n - tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py\r\n\r\n ### TEST TO RUN ###\r\n - tests/models/auto/test_modeling_auto.py\r\n - tests/models/auto/test_modeling_tf_auto.py\r\n - tests/models/bert/test_modeling_bert.py\r\n - tests/models/encoder_decoder/test_modeling_encoder_decoder.py\r\n - tests/models/speech_encoder_decoder/test_modeling_speech_encoder_decoder.py\r\n - tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py\r\n - tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py\r\n ```\r\n\r\n - on `test_fetch_v2`\r\n (`tests/models/bert/test_modeling_bert.py` is **NEITHER** in `TEST TO RUN`, **NOR** in the file `test_list.txt`)\r\n ```\r\n ### MODIFIED FILES ###\r\n - tests/models/bert/test_modeling_bert.py\r\n\r\n ### IMPACTED FILES ###\r\n - tests/models/auto/test_modeling_auto.py\r\n - tests/models/auto/test_modeling_tf_auto.py\r\n - tests/models/bert/test_modeling_bert.py\r\n - tests/models/encoder_decoder/test_modeling_encoder_decoder.py\r\n - tests/models/speech_encoder_decoder/test_modeling_speech_encoder_decoder.py\r\n - tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py\r\n - tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py\r\n\r\n ### TEST TO RUN ###\r\n - tests/models/auto/test_modeling_auto.py\r\n - tests/models/auto/test_modeling_tf_auto.py\r\n - tests/models/encoder_decoder/test_modeling_encoder_decoder.py\r\n - tests/models/speech_encoder_decoder/test_modeling_speech_encoder_decoder.py\r\n - tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py\r\n - tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py\r\n ``` \r\n ", "> As a comparison, the previous 
version stopped at transformers/__init__.py.\r\n\r\nIs the following block (on `main`) what you mentioned by the above sentence?\r\n```\r\n # We ignore the main init import as it's only for the __version__ that it's done\r\n # and it would add everything as a dependency.\r\n if not imported_module.endswith(\"transformers/__init__.py\"):\r\n ...\r\n```\r\n \r\n----------------------------\r\n[Not question - just to record something so I won't forget later]\r\nI tried to change `src/transformers/models/bert/modeling_bert.py`, and I can see \r\n- `src/transformers/__init__.py` is given as impacted in both versions\r\n- `src/transformers/models/gpt2/xxx` is given as impacted in the version on `main` but not the version on this PR\r\n- `tests/models/gpt2/xxx` is NOT given as impacted in the version on `main` but given in the version on this PR.\r\n - but its in tests to run in bother version \r\n", "Well, at least, when `src/transformers/models/bert/modeling_bert.py` is changed, the test file `tests/models/bert/test_modeling_bert.py` included 👍 . So the dependency detection seems to work well, and the above situation is just an edge case (to including self)", "@ydshieh, good catch on a modified test file missing from the tests launched. I have only put the dependencies and forgot those. Will fix.", "@ydshieh did you want to review more or is it good to merge?", "> @ydshieh did you want to review more or is it good to merge?\r\n\r\nHi @sgugger If you feel urgent to merge, go ahead (I can leave comments afterward anyway). Otherwise, I would love to continue the review process despite I am slow.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22367). All of your documentation changes will be reflected on that endpoint." ]
1,679
1,680
1,680
COLLABORATOR
null
# What does this PR do? This PR rewrites the test fetcher util to be more accurate in the tests collection, and also comes with a restriction on the tests run when a large amount of tests are picked when modifying a core file (like modeling_utils). The code that extracts the dependencies of a given module now inspects the inits to pinpoint the exact location of imported objects. So for instance if a test file has an import `from transformers import BertModel`, this new version will detect a dependency on `transformers/models/bert/modeling_bert.py`. As a comparison, the previous version stopped at `transformers/__init__.py`. This removes the need for all the complex logic that tried to match a given file with its corresponding tests, we now just look at the dependencies of the test file. The second change is that when a given file is seen to trigger too many model tests (current trigger is set at half the models, it can evolve), it will only keep the tests relative to a given list of important models. If a PR changes many modeling files, all the tests for those models will still run, but if a PR only changes modeling_utils (for instance), this will trigger the core model tests only. The list of important models is built using: - the most downloaded models in the last 30 days - making sure each pipeline has a model in that list To bypass this rule, one can add a special command in a commit message (circleCI does not have access to labels, so I can't rely on that): - Including [skip ci] or [ci skip] or [circleci skip] or [skip circleci] or any variants with - or _ instead of a space will skip all tests - Including [test all models] or any variant with the words in another order and/or with - or _ instead of a space will run all tests found without filtering on important models. - Including [test all] or [all test] or any variants with - or _ instead of a space will run all tests. A couple of adjustments to Transformers should be done (in follow-up PRs) to have the test fetcher be more accurate and more efficient: - make sure all inits don't define any objects. Most of our inits only import all the stuff, and the test fetcher assumes they are all like that. Some inits (like `pipeline/__init__.py`) define real objects, it would be best to move them to a submodule. - make sure test files test one thing: for instance `test_modeling_common.py` contains both the common tests and the test of the modeling_utils module. It would be best to split those in two files. 
Lastly, this PR adds lots of tests to make sure future work doesn't break the test fetcher :-) To see how the test fetcher behaves on some examples: - for a modification in modeling_opt.py: only test_modeling_opt is run [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60906/workflows/79cfaf18-d0da-4a5d-8bc6-fd8599b63468/jobs/747209/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60710/workflows/f43d1235-3337-484f-b79a-9260e24e3664)] - for a modification in modeling_bert.py (which is imported in all the tests basically) all tests using BERT are run, but filtered to the list of important models [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60906/workflows/79cfaf18-d0da-4a5d-8bc6-fd8599b63468/jobs/747209/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60906/workflows/f85bd255-a01a-49af-a852-b7a00e13aad3)] - for a modification in a pipeline file: all model tests are run, filtered to the list of important models [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60915/workflows/ab1dc699-5ef7-4b33-bcab-62f76688e9f4/jobs/747362/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60910/workflows/3f9df353-e3b0-490c-9724-f6ef59df5599)] - for a modification in the main `__init__.py` all tests are run, but filtered to the list of important models [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60910/workflows/1f8039a1-af57-477d-bd20-43324d574549/jobs/747284/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60915/workflows/9615653e-b1e5-4c74-9668-ca7983bc3b68)] - for a modification in the `setup.py` all tests are run [[fetch summary](https://app.circleci.com/pipelines/github/huggingface/transformers/60913/workflows/fd3a907c-98a3-40b0-8423-d1346b23498c/jobs/747326/artifacts)] [[job page](https://app.circleci.com/pipelines/github/huggingface/transformers/60913/workflows/809e8564-7df0-4321-8d7c-4e68c1d123a7)]
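As an illustration of the "pinpoint the exact location of imported objects" idea, here is a runtime sketch of the same resolution; the actual `utils/tests_fetcher.py` does this statically by inspecting the inits rather than importing the package:

```python
import importlib

def resolve_import(object_name: str, package: str = "transformers") -> str:
    """Map `from transformers import BertModel` to the submodule defining it."""
    module = importlib.import_module(package)
    obj = getattr(module, object_name)
    return obj.__module__

print(resolve_import("BertModel"))  # transformers.models.bert.modeling_bert
```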
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22367/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 2, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22367/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22367", "html_url": "https://github.com/huggingface/transformers/pull/22367", "diff_url": "https://github.com/huggingface/transformers/pull/22367.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22367.patch", "merged_at": 1680293924000 }
https://api.github.com/repos/huggingface/transformers/issues/22366
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22366/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22366/comments
https://api.github.com/repos/huggingface/transformers/issues/22366/events
https://github.com/huggingface/transformers/issues/22366
1,639,871,381
I_kwDOCUB6oc5hvnOV
22,366
VisionEncoderDecoderModel to work with CNN-based models
{ "login": "jbdel", "id": 17854096, "node_id": "MDQ6VXNlcjE3ODU0MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/17854096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbdel", "html_url": "https://github.com/jbdel", "followers_url": "https://api.github.com/users/jbdel/followers", "following_url": "https://api.github.com/users/jbdel/following{/other_user}", "gists_url": "https://api.github.com/users/jbdel/gists{/gist_id}", "starred_url": "https://api.github.com/users/jbdel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbdel/subscriptions", "organizations_url": "https://api.github.com/users/jbdel/orgs", "repos_url": "https://api.github.com/users/jbdel/repos", "events_url": "https://api.github.com/users/jbdel/events{/privacy}", "received_events_url": "https://api.github.com/users/jbdel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @jbdel, thanks for raising this issue! \r\n\r\nThe `VisionEncoderDecoder` class is specifically designed to work with transformer architectures, and the decoder model expects a transformer encoder output for its `encoder_hidden_state`. These are activations in the shape `(batch_size, sequence_length, hidden_size)` where each vector `[i, j, :]` represents the final activation for that input token/image patch. The ResNet model has a different kind of output: feature maps. As such, there are several incompatibilities beyond being able to pass the `output_attentions` argument to the encoder. \r\n\r\nWith all architectures coming out at a fast pace nowadays, it's not practical and realistic to make composite modeling like VisionEncoderDecoder to handle all pairs of encoder and decoder models. But the good thing is the code is open source, and everyone can make changes to it :).\r\n\r\nIf this is still something you are interested in, it could make an interesting question and project to [share in the forums](https://discuss.huggingface.co/). ", "Hello,\r\n\r\nI beg to differ on your explanation. The output of a ResNet is **_not_** a different kind of output, it is also : `(batch_size, sequence_length, hidden_size)`.\r\n\r\nCall the vector [i, j, :] as you will: a token, an image patch, a slice of feature map, what matters in a pipeline is the compatibility of input/output, which is exactly what transformers and ResNet have in common.\r\n\r\nAs a matter of fact, the developers called the output of the ResNet \"last_hidden_state\": https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/resnet/modeling_resnet.py#L341\r\n\r\nArchitecture are surely coming out in a fast pace nowadays. Nonetheless this feature request is not about the latest fancy vision model published, but the very first architecture that enabled deep learning for computer vision.\r\n\r\nAnother thought: if huggingface is all about transformers, why implementing the resnet architecture available in torchvision ?\r\n\r\nFinally, you suppose there will be several incompatibilities, again, i think not. A simple glance at the forward function of VisionEncoderDecoder shows you that the function cares only about the first output of the encoder:\r\nhttps://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L602\r\nWhich is exactly what ResNet provides.\r\n\r\n", "Hi,\r\n\r\nYou can use ResNet with the vision encoder-decoder framework, although it might not work out-of-the-box as shown by your first message (for the moment that requires forking the library and making the required changes). ResNets, like other CNNs, output feature maps of shape `(batch_size, num_channels, height, width)`, so they are by default 4D instead of 3D with the regular `last_hidden_state` of a model like ViT. See [here](https://huggingface.co/docs/transformers/model_doc/resnet#transformers.ResNetModel.forward.example) for an example: the final feature map is of shape (batch_size, 2048, 7, 7) for a 224x224 image.\r\n\r\nHowever you can of course reshape the final feature map to get a 3D tensor which can be used for cross-attention with the decoder. 
This can be achieved by doing:\r\n\r\n```\r\nbatch_size, num_channels, height, width = last_hidden_state.shape\r\nlast_hidden_state = last_hidden_state.permute(0, 2, 3, 1)\r\nlast_hidden_state = last_hidden_state.reshape(batch_size, height*width, num_channels)\r\n```\r\nThe reason ResNet is present in the library is because it is used as backbone for several Transformer-based frameworks like DETR, MaskFormer and Mask2Former, all of which are also available in the library.", "Hello.\r\n\r\nThank you for your answer.\r\n\r\nI do understand there is a straightforward way to modify the code so that you can have a resnet to transformer pipeline using huggingface.\r\n\r\nI have submitted this as a feature request, with the hope that it will be considered for addition to the official library implementation. This would allow you to use that pipeline on the Huggingface hub.\r\n\r\nHave a good day,\r\n\r\nJB", "I'll mark this request as a \"good first issue\" as I don't have the bandwidth for this atm.\r\n\r\nHowever for this to work we would need to maintain a mapping which lists the models that output a 4D feature map, to make sure we permute and reshape the final hidden state as shown above. Additionally we need to take into account that some of those models don't accept an `output_attentions` keyword argument.", "I do not think this an issue that would be easy to tackle by a beginner so I have removed the \"Good first issue\" label. Having issues that are too hard labeled like this often backfires and make beginners stop contributing instead of feeling empowered.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### Feature request Hello, VisionEncoderDecoderModel works only with vision-transformer-based models. Typically, using a ResNet as the encoder triggers an error in the forward pass: `TypeError: forward() got an unexpected keyword argument 'output_attentions'` I'm pretty sure making this pipeline work with CNN-based architectures would not be too much of a change. As a matter of fact, adding `**kwargs` to the ResNet forward might be enough. ### Motivation Using CNN-based models with transformer-based language models in VisionEncoderDecoderModel ### Your contribution /
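A rough sketch of the adapter this request implies, combining the two points discussed in this issue: accepting and ignoring encoder-only kwargs such as `output_attentions`, and flattening the 4-D feature map into the `(batch, sequence, hidden)` shape a decoder's cross-attention expects. The class name and checkpoint are illustrative, not an existing `transformers` API, and an extra projection would still be needed if the channel count does not match the decoder's hidden size:

```python
from torch import nn
from transformers import ResNetModel
from transformers.modeling_outputs import BaseModelOutput

class ResNetAsEncoder(nn.Module):
    """Illustrative wrapper that makes a CNN backbone look like a ViT-style encoder."""

    def __init__(self, checkpoint="microsoft/resnet-50"):
        super().__init__()
        self.backbone = ResNetModel.from_pretrained(checkpoint)
        self.config = self.backbone.config

    def forward(self, pixel_values, output_attentions=None, output_hidden_states=None,
                return_dict=True, **kwargs):
        # Encoder-only kwargs passed by the encoder-decoder framework are ignored.
        feature_map = self.backbone(pixel_values).last_hidden_state  # (B, C, H, W)
        b, c, h, w = feature_map.shape
        hidden = feature_map.permute(0, 2, 3, 1).reshape(b, h * w, c)  # (B, H*W, C)
        return BaseModelOutput(last_hidden_state=hidden)
```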
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22366/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22365
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22365/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22365/comments
https://api.github.com/repos/huggingface/transformers/issues/22365/events
https://github.com/huggingface/transformers/pull/22365
1,639,786,368
PR_kwDOCUB6oc5M2q6L
22,365
Auto-translate GPTNeo to TensorFlow with GPT-4
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Shame Github doesn't have 🔥 as an available reaction", "> Shame Github doesn't have 🔥 as an available reaction\r\n\r\n@amyeroberts 🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥\r\n\r\n<img width=\"666\" alt=\"Screenshot 2023-03-24 185139\" src=\"https://user-images.githubusercontent.com/2521628/227602659-692ebc96-a5d9-4b7b-9519-bac44916537a.png\">\r\n", "@Rocketknight1 aren't the commits going a bit too fast? 😑", "I don't want to be a party pooper but this is really not a use case where I would trust any kind of language model output. LLMs tend to produce content that look good on the surface but are not rigorous and usually full of nasty little bugs (that's the reason why I personally stopped using Copilot) and we will completely miss them in such PRs that are usually adding one or several thousands of lines. I don't think our CI will catch all such bugs.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22365). All of your documentation changes will be reflected on that endpoint.", "@sgugger Absolutely agree on the potential for small bugs, but in the case of model porting doesn't the CI test equivalence with the PT original? If the model accepts various inputs and always yields equivalent output to the PT version, I think it's probably \"good enough\" that most users shouldn't notice any issues, right?", "Well for now those tests fail even before returning a diff between the PT and TF model :-p ", "I'm still working on the prompt!!!!!", "Ah ah, are you saying this should be the code exercise if we ever decide to open a Prompt Engineer position?", "> Ah ah, are you saying this should be the code exercise if we ever decide to open a Prompt Engineer position?\r\n\r\nThat would be **THE** middleman between `PyTorch` and `TensorFlow`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
MEMBER
null
cc @gante @amyeroberts @ydshieh @sayakpaul This is the preliminary result of auto-translating an entire module to TensorFlow using GPT-4. The prompt I used was: > > Hi GPT, can you translate this class from Hugging Face Transformers from PyTorch to TensorFlow for me? > Some pointers: > - When creating layers, please pass their attribute name as the name kwarg. > - Retain any docstrings attached to methods like forward and translate them, even when the method is being renamed to call. > - If the new class inherits from tf.keras.layers.Layer, it should accept **kwargs and pass these to super.__init__ . It should also be renamed by adding "TF" to the start of its name. > - You don't need to add any extra imports, you can assume that any other functions or classes you call will be imported for you. > - If the class calls other classes in the same module, you can assume that these have already been converted. Please add "TF" to the start of their name if required. I'm going to experiment with auto-translating the tests and seeing how successful this port was. Right now it's a WIP and there are likely issues, but I've had a lot of success avoiding problems just by mentioning them in the prompt and telling GPT what to do in those situations! Other things to go in the prompt: - PyTorch Embedding layers support a `padding_idx` initialization arg that TF does not. If you want to exactly match PT's behaviour, you need to manually zero out those positions after embedding in TF.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22365/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 0, "rocket": 1, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/22365/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22365", "html_url": "https://github.com/huggingface/transformers/pull/22365", "diff_url": "https://github.com/huggingface/transformers/pull/22365.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22365.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22364
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22364/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22364/comments
https://api.github.com/repos/huggingface/transformers/issues/22364/events
https://github.com/huggingface/transformers/pull/22364
1,639,678,704
PR_kwDOCUB6oc5M2T5F
22,364
TensorFlow: pin maximum version to 2.12
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
MEMBER
null
# What does this PR do? TF Text 2.12 has been released (a few hours after TF 2.12), so that problem got sorted by itself. Adding `cmake` to install onnx from source gets rid of the remaining problems :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22364/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 2, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22364/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22364", "html_url": "https://github.com/huggingface/transformers/pull/22364", "diff_url": "https://github.com/huggingface/transformers/pull/22364.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22364.patch", "merged_at": 1679683503000 }
https://api.github.com/repos/huggingface/transformers/issues/22363
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22363/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22363/comments
https://api.github.com/repos/huggingface/transformers/issues/22363/events
https://github.com/huggingface/transformers/issues/22363
1,639,583,753
I_kwDOCUB6oc5huhAJ
22,363
Multi-node training with Deepspeed hangs when `full_determinism = True`
{ "login": "apoorvkh", "id": 7005565, "node_id": "MDQ6VXNlcjcwMDU1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/7005565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apoorvkh", "html_url": "https://github.com/apoorvkh", "followers_url": "https://api.github.com/users/apoorvkh/followers", "following_url": "https://api.github.com/users/apoorvkh/following{/other_user}", "gists_url": "https://api.github.com/users/apoorvkh/gists{/gist_id}", "starred_url": "https://api.github.com/users/apoorvkh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apoorvkh/subscriptions", "organizations_url": "https://api.github.com/users/apoorvkh/orgs", "repos_url": "https://api.github.com/users/apoorvkh/repos", "events_url": "https://api.github.com/users/apoorvkh/events{/privacy}", "received_events_url": "https://api.github.com/users/apoorvkh/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "I suspect the problem comes from `enable_full_determinism` doing this:\r\n\r\nhttps://github.com/huggingface/transformers/blob/6587125c0a60f5d5cc207fe1e7fc30d5a0c44a6a/src/transformers/trainer_utils.py#L71\r\n\r\nthis setting leads to hanging since torch>=1.13 and it's still broken in the current torch==2.0 (and nightly too) See https://github.com/NVIDIA/nccl/issues/750\r\n\r\nPlease try with torch==1.12 and the problem should go away.\r\n\r\nIt will be fixed once torch includes https://github.com/NVIDIA/nccl/releases/tag/v2.17.1-1 in its build. \r\n\r\nTo check your versions run:\r\n```\r\npython -c 'import torch; print(f\"pt={torch.__version__}, cuda={torch.version.cuda}, nccl={torch.cuda.nccl.version()}\")'\r\n```\r\n\r\nyou need `nccl<=2.10.3` or `nccl>=2.17.1` for `CUDA_LAUNCH_BLOCKING=1` not to lead to hanging.\r\n\r\nI'm on top of this issue (asking pytorch devs to resolve this periodically) since I need this env var to work for our LLM training. Ideally this should be fixed in `torch==2.0.1`\r\n\r\n-------------------------\r\n\r\nAlso you must realize that this setting could be slowing your training down. Since it cancels out ASYNC CUDA nature. So ask yourself if you really want to use it. \r\n\r\nAlbeit, this depends on the situation. We trained BLOOM-176B with `CUDA_LAUNCH_BLOCKING=1` and it wasn't slower. We had to use it to overcome hanging which we couldn't figure out.\r\n\r\nSo benchmark w/ and w/o it and see which works the best.", "Thank you very much! Will try this and update you.\r\nedit: I'm running into CUDA compatibility issues with `torch==1.12.1` and the CUDA version specific to my system. Might just wait for `torch==2.0.1` if it's not much longer.", "Following your conversation, seems like this will not be merged into torch 2.0.1, but is now part of main and will be part of the following release.\r\n\r\nhttps://github.com/pytorch/pytorch/pull/97843#issuecomment-1512228321", "Indeed. it took too long to make it into the 2.0.1 cut-off. It should be part of nightly build soon: https://pytorch.org/get-started/locally/", "I believe this issue should still be open, still waiting for the next Pytorch release." ]
1,679
1,684
null
CONTRIBUTOR
null
Hey, as I've described below, I think there are problems training Deepspeed in a multi-node setting when `full_determinism = True` in the `TrainingArguments`. I've replicated this on multiple hardware configurations (i.e. different nodes and GPU types — specifically A6000, V100, RTX 3090 — on the same large cluster system). Please take a look, thank you very much! ### System Info ### `transformers-cli env` - `transformers` version: 4.27.3 - Platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.1 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Additional info - `deepspeed` version: 0.8.3 - gcc: 10.2 - cuda: 11.7.1 - pdsh: 2.34 ### Who can help? @sgugger @stas00 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ### Shell Please set the following environment variables appropriately: ```bash export NODELIST="gpu1504 gpu1505" export NUM_NODES=2 export GPUS_PER_NODE=1 export MASTER_ADDR=gpu1504 export MASTER_PORT=9901 ``` Create `train.py` from the snippet below, then run with the following commands: ```bash conda create -n ds-trainer python==3.8.1 conda activate ds-trainer pip install transformers[deepspeed] echo "PATH=$PATH" > .deepspeed_env cat /dev/null >| hostfile for i in $NODELIST; do echo "$i slots=$GPUS_PER_NODE" >> hostfile; done deepspeed --num_gpus $GPUS_PER_NODE --num_nodes $NUM_NODES --master_addr $MASTER_ADDR --master_port $MASTER_PORT --hostfile hostfile train.py ``` ### `train.py` ```python import torch from torch.utils.data import Dataset from transformers import BertForMaskedLM, Trainer, TrainingArguments import copy ## Model model = BertForMaskedLM.from_pretrained("bert-base-uncased") ## Dataset class DummyDataset(Dataset): def __init__(self, max_text_length=16, num_samples=20000) -> None: super().__init__() self.input_ids = torch.randint(0, 30522, (num_samples, max_text_length)) self.labels = copy.deepcopy(self.input_ids) def __len__(self): return len(self.input_ids) def __getitem__(self, index): return { "input_ids": self.input_ids[index], "labels": self.labels[index], } train_dataset = DummyDataset() ## Training deepspeed_config = { "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto", }, }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", } training_arguments = TrainingArguments( full_determinism = True, output_dir = "output", do_train = True, per_device_train_batch_size = 16, max_steps = 100, deepspeed = deepspeed_config ) trainer = Trainer( model=model, args=training_arguments, train_dataset=train_dataset ) trainer.train() ``` ### Expected behavior **When I run the above code (a minimal example for DeepSpeed training) in a multi-node setting, training seems to hang after the following output:** <details> <summary>Output (not working)</summary> ```Shell [2023-03-24 10:26:48,202] [INFO] [multinode_runner.py:67:get_cmd] Running on the following workers: gpu1504,gpu1505 [2023-03-24 10:26:48,202] [INFO] [runner.py:550:main] cmd = 
pdsh -S -f 1024 -w gpu1504,gpu1505 export PYTHONPATH=/gpfs/data/csun45/akhand10/projects/test; exp ort PATH=/gpfs/runtime/opt/pdsh/2.34/bin:/gpfs/runtime/opt/cuda/11.7.1/cuda/bin:/gpfs/runtime/opt/gcc/10.2/bin:/users/akhand10/.local/machine/bin:/users/akhan d10/.local/machine/bin:/users/akhand10/.local/scripts:/users/akhand10/.local/bin:/users/akhand10/.local/machine/bin:/users/akhand10/palm.h/.local/miniconda3/e nvs/ds-trainer/bin:/users/akhand10/.local/miniconda3/condabin:/users/akhand10/.local/scripts:/users/akhand10/.local/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/ usr/bin:/usr/local/sbin:/usr/sbin:/usr/lpp/mmfs/bin:/usr/lpp/mmfs/sbin:/opt/ibutils/bin:/gpfs/runtime/bin:/opt/singularity/2.5.2/bin:/users/akhand10/bin; cd /gpfs/data/csun45/akhand10/projects/test; /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/bin/python3.8 -u -m deepspeed.launcher.launch --world_info= eyJncHUxNTA0IjogWzBdLCAiZ3B1MTUwNSI6IFswXX0= --node_rank=%n --master_addr=gpu1504 --master_port=9901 train.py gpu1504: [2023-03-24 10:26:51,118] [INFO] [launch.py:142:main] WORLD INFO DICT: {'gpu1504': [0], 'gpu1505': [0]} gpu1504: [2023-03-24 10:26:51,118] [INFO] [launch.py:148:main] nnodes=2, num_local_procs=1, node_rank=0 gpu1504: [2023-03-24 10:26:51,119] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'gpu1504': [0], 'gpu1505': [1]}) gpu1504: [2023-03-24 10:26:51,119] [INFO] [launch.py:162:main] dist_world_size=2 gpu1504: [2023-03-24 10:26:51,119] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0 gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:142:main] WORLD INFO DICT: {'gpu1504': [0], 'gpu1505': [0]} gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:148:main] nnodes=2, num_local_procs=1, node_rank=1 gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'gpu1504': [0], 'gpu1505': [1]}) gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:162:main] dist_world_size=2 gpu1505: [2023-03-24 10:26:53,517] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0 gpu1504: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_rel ationship.weight'] gpu1504: - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). gpu1504: - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). gpu1504: [2023-03-24 10:26:55,478] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl gpu1505: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_r elationship.bias'] gpu1505: - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). gpu1505: - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
``` </details> In particular, the last line of relevance is: `[INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl`. <details> <summary>Extra NCCL output</summary> If I provide the vars: `NCCL_DEBUG=INFO NCCL_DEBUG_SUBSYS=ALL CUDA_LAUNCH_BLOCKING=1` ```Shell ... ... gpu1504: [2023-03-24 10:48:06,695] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl gpu1504: gpu1504:30024:30024 [0] NCCL INFO Bootstrap : Using ib0:172.25.211.4<0> gpu1504: gpu1504:30024:30024 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation gpu1504: gpu1504:30024:30024 [0] NCCL INFO cudaDriverVersion 11070 gpu1504: NCCL version 2.14.3+cuda11.7 gpu1505: gpu1505:22060:22060 [0] NCCL INFO cudaDriverVersion 11070 gpu1504: gpu1504:30024:30024 [0] NCCL INFO init.cc:1147 Cuda Host Alloc Size 4 pointer 0x7f62ffe00000 gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/IB : Using [0]mlx5_2:1/IB [1]mlx5_0:1/IB ; OOB ib0:172.25.211.4<0> gpu1504: gpu1504:30024:30241 [0] NCCL INFO Using network IB gpu1505: gpu1505:22060:22060 [0] NCCL INFO Bootstrap : Using ib0:172.25.211.5<0> gpu1505: gpu1505:22060:22060 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation gpu1505: gpu1505:22060:22060 [0] NCCL INFO init.cc:1147 Cuda Host Alloc Size 4 pointer 0x7f97a3e00000 gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/IB : Using [0]mlx5_2:1/IB [1]mlx5_0:1/IB ; OOB ib0:172.25.211.5<0> gpu1505: gpu1505:22060:22171 [0] NCCL INFO Using network IB gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for HCA 0 'mlx5_2' gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for HCA 1 'mlx5_0' gpu1505: gpu1505:22060:22171 [0] NCCL INFO === System : maxBw 12.5 totalBw 24.0 === gpu1505: gpu1505:22060:22171 [0] NCCL INFO CPU/0 (1/2/-1) gpu1505: gpu1505:22060:22171 [0] NCCL INFO + SYS[5000.0] - CPU/1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO + PCI[24.0] - GPU/1000 (1) gpu1505: gpu1505:22060:22171 [0] NCCL INFO + PCI[12.0] - NIC/23000 gpu1505: gpu1505:22060:22171 [0] NCCL INFO + NET[12.5] - NET/1 (3ad0a20003723f04/1/12.500000) gpu1505: gpu1505:22060:22171 [0] NCCL INFO CPU/1 (1/2/-1) gpu1505: gpu1505:22060:22171 [0] NCCL INFO + SYS[5000.0] - CPU/0 gpu1505: gpu1505:22060:22171 [0] NCCL INFO + PCI[24.0] - NIC/C2000 gpu1505: gpu1505:22060:22171 [0] NCCL INFO + NET[12.5] - NET/0 (82d0a20003723f04/1/12.500000) gpu1505: gpu1505:22060:22171 [0] NCCL INFO ========================================== gpu1505: gpu1505:22060:22171 [0] NCCL INFO GPU/1000 :GPU/1000 (0/5000.000000/LOC) CPU/0 (1/24.000000/PHB) CPU/1 (2/24.000000/SYS) NET/1 (3/12.000000/PHB) NET/0 (4/12.500000/SYS) gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/1 :GPU/1000 (3/12.000000/PHB) CPU/0 (2/12.000000/PHB) CPU/1 (3/12.000000/SYS) NET/1 (0/5000.000000/LOC) NET/0 (5/12.000000/SYS) gpu1505: gpu1505:22060:22171 [0] NCCL INFO NET/0 :GPU/1000 (4/12.500000/SYS) CPU/0 (3/12.500000/SYS) CPU/1 (2/12.500000/PHB) NET/1 (5/12.000000/SYS) NET/0 (0/5000.000000/LOC) gpu1505: gpu1505:22060:22171 [0] NCCL INFO Setting affinity for GPU 0 to 04 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Pattern 4, crossNic 0, nChannels 1, bw 12.000000/12.000000, type LOC/PHB, sameChannels 1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO 0 : NET/1 GPU/1 NET/1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 1, bw 24.000000/12.000000, type LOC/PHB, sameChannels 1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO 0 : NET/1 
GPU/1 NET/1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 0, bw 0.000000/0.000000, type LOC/PIX, sameChannels 1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for HCA 0 'mlx5_2' gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/IB : GPU Direct RDMA Disabled for HCA 1 'mlx5_0' gpu1504: gpu1504:30024:30241 [0] NCCL INFO === System : maxBw 12.5 totalBw 24.0 === gpu1504: gpu1504:30024:30241 [0] NCCL INFO CPU/0 (1/2/-1) gpu1504: gpu1504:30024:30241 [0] NCCL INFO + SYS[5000.0] - CPU/1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO + PCI[24.0] - GPU/41000 (0) gpu1504: gpu1504:30024:30241 [0] NCCL INFO + PCI[12.0] - NIC/23000 gpu1504: gpu1504:30024:30241 [0] NCCL INFO + NET[12.5] - NET/1 (b4e3ff0003a1420c/1/12.500000) gpu1504: gpu1504:30024:30241 [0] NCCL INFO CPU/1 (1/2/-1) gpu1504: gpu1504:30024:30241 [0] NCCL INFO + SYS[5000.0] - CPU/0 gpu1504: gpu1504:30024:30241 [0] NCCL INFO + PCI[24.0] - NIC/C2000 gpu1504: gpu1504:30024:30241 [0] NCCL INFO + NET[12.5] - NET/0 (d2cfa20003723f04/1/12.500000) gpu1504: gpu1504:30024:30241 [0] NCCL INFO ========================================== gpu1504: gpu1504:30024:30241 [0] NCCL INFO GPU/41000 :GPU/41000 (0/5000.000000/LOC) CPU/0 (1/24.000000/PHB) CPU/1 (2/24.000000/SYS) NET/1 (3/12.000000/PHB) NET/0 (4/12.500000/SYS) gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/1 :GPU/41000 (3/12.000000/PHB) CPU/0 (2/12.000000/PHB) CPU/1 (3/12.000000/SYS) NET/1 (0/5000.000000/LOC) NET/0 (5/12.000000/SYS) gpu1504: gpu1504:30024:30241 [0] NCCL INFO NET/0 :GPU/41000 (4/12.500000/SYS) CPU/0 (3/12.500000/SYS) CPU/1 (2/12.500000/PHB) NET/1 (5/12.000000/SYS) NET/0 (0/5000.000000/LOC) gpu1504: gpu1504:30024:30241 [0] NCCL INFO Setting affinity for GPU 0 to 10 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Pattern 4, crossNic 0, nChannels 1, bw 12.000000/12.000000, type LOC/PHB, sameChannels 1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO 0 : NET/1 GPU/0 NET/1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 1, bw 24.000000/12.000000, type LOC/PHB, sameChannels 1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO 0 : NET/1 GPU/0 NET/1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 0, bw 0.000000/0.000000, type LOC/PIX, sameChannels 1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Tree 0 : -1 -> 0 -> 1/-1/-1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Tree 1 : 1 -> 0 -> -1/-1/-1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 00/02 : 0 1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 01/02 : 0 1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Ring 00 : 1 -> 0 -> 1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Ring 01 : 1 -> 0 -> 1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536) gpu1504: gpu1504:30024:30241 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f6301c00000 gpu1504: gpu1504:30024:30241 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f6301c00600 gpu1504: gpu1504:30024:30241 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f6301c00800 gpu1504: gpu1504:30024:30241 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f6301c00e00 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Tree 0 : 0 -> 1 -> -1/-1/-1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Tree 1 : -1 -> 1 -> 0/-1/-1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Ring 00 : 0 -> 1 -> 0 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Ring 01 : 0 -> 1 -> 0 gpu1505: gpu1505:22060:22171 
[0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536) gpu1505: gpu1505:22060:22171 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f97a5c00000 gpu1505: gpu1505:22060:22171 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f97a5c00600 gpu1505: gpu1505:22060:22171 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f97a5c00800 gpu1505: gpu1505:22060:22171 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f97a5c00e00 gpu1505: gpu1505:22060:22174 [0] NCCL INFO Mem Realloc old size 0, new size 8 pointer 0x7f96f00009c0 gpu1505: gpu1505:22060:22174 [0] NCCL INFO Allocated 4194656 bytes of shared memory in /dev/shm/nccl-3elXjI gpu1505: gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy recv connection 0 from local rank 0, transport 2 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f0004010 gpu1504: gpu1504:30024:30248 [0] NCCL INFO Mem Realloc old size 0, new size 8 pointer 0x7f62840008c0 gpu1504: gpu1504:30024:30248 [0] NCCL INFO Allocated 4194656 bytes of shared memory in /dev/shm/nccl-HznK2t gpu1504: gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy recv connection 0 from local rank 0, transport 2 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f6284004010 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Channel 00/0 : 0[41000] -> 1[1000] [receive] via NET/IB/1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 00/0 : 1[1000] -> 0[41000] [receive] via NET/IB/1 gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy recv connection 1 from local rank 0, transport 2 gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy recv connection 1 from local rank 0, transport 2 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f0004050 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f6284004050 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Channel 01/0 : 0[41000] -> 1[1000] [receive] via NET/IB/1 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 01/0 : 1[1000] -> 0[41000] [receive] via NET/IB/1 gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy send connection 2 from local rank 0, transport 2 gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy send connection 2 from local rank 0, transport 2 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f6284004090 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 00/0 : 0[41000] -> 1[1000] [send] via NET/IB/1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f0004090 gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy send connection 3 from local rank 0, transport 2 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Channel 00/0 : 1[1000] -> 0[41000] [send] via NET/IB/1 gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy send connection 3 from local rank 0, transport 2 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f62840040d0 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Channel 01/0 : 0[41000] -> 1[1000] [send] via NET/IB/1 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f00040d0 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Channel 01/0 : 1[1000] -> 0[41000] [send] via NET/IB/1 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:596 Ib Alloc Size 26560 pointer 0x7f6284020000 gpu1505: 
gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:596 Ib Alloc Size 26560 pointer 0x7f96f0020000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO NET/IB: Dev 1 Port 1 qpn 173 mtu 5 LID 1038 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:653 Ib Alloc Size 552 pointer 0x7f96f0034000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:571 Cuda Host Alloc Size 9641984 pointer 0x7f97a7400000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO Mem Realloc old size 0, new size 768 pointer 0x7f96f0033ff0 gpu1504: gpu1504:30024:30248 [0] NCCL INFO NET/IB: Dev 1 Port 1 qpn 153 mtu 5 LID 1036 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:596 Ib Alloc Size 26560 pointer 0x7f96f0035000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:653 Ib Alloc Size 552 pointer 0x7f6284034000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO NET/IB: Dev 1 Port 1 qpn 174 mtu 5 LID 1038 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:653 Ib Alloc Size 552 pointer 0x7f96f003f000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:571 Cuda Host Alloc Size 9641984 pointer 0x7f97a9400000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:571 Cuda Host Alloc Size 9641984 pointer 0x7f6303400000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO Mem Realloc old size 0, new size 768 pointer 0x7f6284034050 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:683 Ib Alloc Size 21688 pointer 0x7f96f003f000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:696 Ib Alloc Size 552 pointer 0x7f96f0046000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:596 Ib Alloc Size 26560 pointer 0x7f6284035000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:771 Ib Alloc Size 552 pointer 0x7f96f0049000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO NET/IB: Dev 1 Port 1 qpn 154 mtu 5 LID 1036 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:653 Ib Alloc Size 552 pointer 0x7f628403f000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:698 Cuda Host Alloc Size 9641984 pointer 0x7f97ab400000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:571 Cuda Host Alloc Size 9641984 pointer 0x7f6305400000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:683 Ib Alloc Size 21688 pointer 0x7f628403f000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:696 Ib Alloc Size 552 pointer 0x7f6284046000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:771 Ib Alloc Size 552 pointer 0x7f6284049000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:698 Cuda Host Alloc Size 9641984 pointer 0x7f6307400000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:683 Ib Alloc Size 21688 pointer 0x7f96f0049000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:696 Ib Alloc Size 552 pointer 0x7f96f0050000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net_ib.cc:771 Ib Alloc Size 552 pointer 0x7f96f0053000 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:698 Cuda Host Alloc Size 9641984 pointer 0x7f97ad400000 gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connected all rings gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connected all trees gpu1505: gpu1505:22060:22171 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512 gpu1505: gpu1505:22060:22171 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer gpu1505: gpu1505:22060:22174 [0] NCCL INFO New proxy send connection 4 from local rank 0, transport 2 gpu1504: gpu1504:30024:30248 [0] NCCL 
INFO transport/net_ib.cc:683 Ib Alloc Size 21688 pointer 0x7f6284049000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:696 Ib Alloc Size 552 pointer 0x7f6284050000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net_ib.cc:771 Ib Alloc Size 552 pointer 0x7f6284053000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:698 Cuda Host Alloc Size 9641984 pointer 0x7f6309400000 gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connected all rings gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connected all trees gpu1504: gpu1504:30024:30241 [0] NCCL INFO Latency/AlgBw | Tree/ LL | Tree/ LL128 | Tree/Simple | Ring/ LL | Ring/ LL128 | Ring/Simple | CollNetDirect/ LL | CollNetDirect/ LL128 | CollNetDirect/Simple | CollNetChain/ LL | CollNetChain/ LL128 | CollNetChain/Simple | gpu1504: gpu1504:30024:30241 [0] NCCL INFO Max NThreads | 512 | 640 | 512 | 512 | 640 | 256 | 0 | 0 | 512 | 0 | 0 | 512 | gpu1504: gpu1504:30024:30241 [0] NCCL INFO Broadcast | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 6.3/ 3.0 | 14.0/ 0.0 | 18.0/ 12.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | gpu1504: gpu1504:30024:30241 [0] NCCL INFO Reduce | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 6.3/ 3.0 | 14.0/ 0.0 | 18.0/ 12.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | gpu1504: gpu1504:30024:30241 [0] NCCL INFO AllGather | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 6.3/ 6.0 | 14.0/ 0.0 | 18.0/ 24.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | gpu1504: gpu1504:30024:30241 [0] NCCL INFO ReduceScatter | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 6.3/ 6.0 | 14.0/ 0.0 | 18.0/ 24.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | gpu1504: gpu1504:30024:30241 [0] NCCL INFO AllReduce | 14.4/ 2.4 | 21.4/ 0.0 | 56.0/ 9.2 | 10.8/ 3.0 | 21.0/ 0.0 | 35.4/ 12.0 | 4.4/ 0.0 | 4.4/ 0.0 | 10.7/ 0.0 | 4.4/ 0.0 | 4.4/ 0.0 | 0.0/ 0.0 | gpu1504: gpu1504:30024:30241 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512 gpu1504: gpu1504:30024:30241 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer gpu1505: gpu1505:22060:22171 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f96f0004110 gpu1505: gpu1505:22060:22171 [0] NCCL INFO init.cc:367 Cuda Alloc Size 5168 pointer 0x7f97a5c01000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO New proxy send connection 4 from local rank 0, transport 2 gpu1505: gpu1505:22060:22174 [0] NCCL INFO transport/net.cc:376 Cuda Alloc Size 4194304 pointer 0x7f97af400000 gpu1505: gpu1505:22060:22171 [0] NCCL INFO init.cc:392 Cuda Host Alloc Size 33554432 pointer 0x7f96ca000000 gpu1505: gpu1505:22060:22171 [0] NCCL INFO init.cc:398 Cuda Host Alloc Size 128 pointer 0x7f97a3e00200 gpu1505: gpu1505:22060:22171 [0] NCCL INFO comm 0x560589dc07a0 rank 1 nranks 2 cudaDev 0 busId 1000 - Init COMPLETE gpu1505: gpu1505:22060:22060 [0] NCCL INFO Broadcast: opCount 0 sendbuff 0x7f97bc000000 recvbuff 0x7f97bc000000 count 93763584 datatype 0 op 0 root 0 comm 0x560589dc07a0 [nranks=2] stream 0x560589bf2f70 gpu1505: gpu1505:22060:22060 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536) gpu1504: gpu1504:30024:30241 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f6284004110 gpu1504: gpu1504:30024:30241 [0] NCCL INFO init.cc:367 Cuda Alloc Size 5168 pointer 0x7f6301c01000 gpu1504: gpu1504:30024:30248 [0] NCCL INFO transport/net.cc:376 Cuda Alloc Size 4194304 pointer 0x7f630b400000 gpu1504: gpu1504:30024:30241 [0] NCCL INFO init.cc:392 Cuda Host Alloc Size 33554432 pointer 0x7f6226000000 gpu1504: gpu1504:30024:30241 
[0] NCCL INFO init.cc:398 Cuda Host Alloc Size 128 pointer 0x7f62ffe00200 gpu1504: gpu1504:30024:30241 [0] NCCL INFO comm 0x55a62826f2d0 rank 0 nranks 2 cudaDev 0 busId 41000 - Init COMPLETE gpu1504: gpu1504:30024:30024 [0] NCCL INFO Broadcast: opCount 0 sendbuff 0x7f6335e00000 recvbuff 0x7f6335e00000 count 93763584 datatype 0 op 0 root 0 comm 0x55a62826f2d0 [nranks=2] stream 0x55a628085b40 gpu1504: gpu1504:30024:30024 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536) ``` </details> <details> <summary>py-spy output</summary> `py-spy dump --pid [pid]` ```Python Thread 27321 (active): "MainThread" broadcast (torch/distributed/distributed_c10d.py:1555) wrapper (torch/distributed/distributed_c10d.py:1436) broadcast (deepspeed/comm/torch.py:78) broadcast (deepspeed/comm/comm.py:228) log_wrapper (deepspeed/comm/comm.py:123) _broadcast_model (deepspeed/runtime/engine.py:1105) _configure_distributed_model (deepspeed/runtime/engine.py:1182) __init__ (deepspeed/runtime/engine.py:297) initialize (deepspeed/__init__.py:125) deepspeed_init (transformers/deepspeed.py:378) _inner_training_loop (transformers/trainer.py:1702) train (transformers/trainer.py:1633) <module> (train.py:64) ``` </details> This code works fine in a single-node setup (i.e. with `deepspeed train.py`). <details> <summary>Continued output (for single-node, working)</summary> ```Shell ... ... [2023-03-24 10:36:01,318] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root... Detected CUDA files, patching ldflags Emitting ninja build file /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117/fused_adam/build.ninja... Building extension module fused_adam... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. Loading extension module fused_adam... Time to load fused_adam op: 0.304279088973999 seconds Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root... Emitting ninja build file /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117/utils/build.ninja... Building extension module utils... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. Loading extension module utils... Time to load utils op: 0.23477387428283691 seconds {'train_runtime': 12.686, 'train_samples_per_second': 126.124, 'train_steps_per_second': 7.883, 'train_loss': 1.1398809814453126, 'epoch': 0.08} [2023-03-24 10:36:18,214] [INFO] [launch.py:350:main] Process 24746 exits successfully. 
``` </details> ## Problem: `full_determinism = True` **If you set `full_determinism = False` in TrainingArguments, multi-node training does work:** <details> <summary>Working multi-node output</summary> ```Shell [2023-03-23 16:40:59,614] [INFO] [multinode_runner.py:67:get_cmd] Running on the following workers: gpu1504,gpu1505 [2023-03-23 16:40:59,614] [INFO] [runner.py:550:main] cmd = pdsh -S -f 1024 -w gpu1504,gpu1505 export PYTHONPATH=/gpfs/data/csun45/akhand10/projects/test_ds; export PATH=/users/akhand10/.local/machine/bin:/users/akhand10/.local/machine/bin:/users/akhand10/.local/scripts:/users/akhand10/.local/bin:/gpfs/runtime/opt/cuda/11.7.1/cuda/bin:/gpfs/runtime/opt/gcc/10.2/bin:/users/akhand10/.local/machine/bin:/users/akhand10/.local/scripts:/users/akhand10/.local/bin:/gpfs/home/akhand10/.vscode-cli/server-stable/bin/ee2b180d582a7f601fa6ecfdad8d9fd269ab1884/bin/remote-cli:/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/bin:/users/akhand10/.local/miniconda3/condabin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/lpp/mmfs/bin:/usr/lpp/mmfs/sbin:/opt/ibutils/bin:/gpfs/runtime/bin:/opt/singularity/2.5.2/bin:/users/akhand10/bin; cd /gpfs/data/csun45/akhand10/projects/test_ds; /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/bin/python3.8 -u -m deepspeed.launcher.launch --world_info=eyJncHUxNTA0IjogWzBdLCAiZ3B1MTUwNSI6IFswXX0= --node_rank=%n --master_addr=gpu1504 --master_port=9901 deepspeed_trainer_mvp.py gpu1504: [2023-03-23 16:41:01,849] [INFO] [launch.py:142:main] WORLD INFO DICT: {'gpu1504': [0], 'gpu1505': [0]} gpu1504: [2023-03-23 16:41:01,849] [INFO] [launch.py:148:main] nnodes=2, num_local_procs=1, node_rank=0 gpu1504: [2023-03-23 16:41:01,849] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'gpu1504': [0], 'gpu1505': [1]}) gpu1504: [2023-03-23 16:41:01,849] [INFO] [launch.py:162:main] dist_world_size=2 gpu1504: [2023-03-23 16:41:01,850] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0 gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:142:main] WORLD INFO DICT: {'gpu1504': [0], 'gpu1505': [0]} gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:148:main] nnodes=2, num_local_procs=1, node_rank=1 gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'gpu1504': [0], 'gpu1505': [1]}) gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:162:main] dist_world_size=2 gpu1505: [2023-03-23 16:41:05,197] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0 gpu1504: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight'] gpu1504: - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). gpu1504: - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
gpu1504: [2023-03-23 16:41:05,717] [INFO] [comm.py:652:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl gpu1505: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias'] gpu1505: - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). gpu1505: - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). gpu1505: Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root... gpu1504: Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root... gpu1505: Detected CUDA files, patching ldflags gpu1505: Emitting ninja build file /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117/fused_adam/build.ninja... gpu1505: Building extension module fused_adam... gpu1505: Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) gpu1505: [1/3] /gpfs/runtime/opt/cuda/11.7.1/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/includes -I/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/adam -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/TH -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/THC -isystem /gpfs/runtime/opt/cuda/11.7.1/cuda/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_86,code=compute_86 -std=c++17 -c /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o gpu1505: [2/3] c++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/includes -I/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/adam -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include -isystem 
/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/TH -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/THC -isystem /gpfs/runtime/opt/cuda/11.7.1/cuda/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++14 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -c /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o gpu1505: [3/3] c++ fused_adam_frontend.o multi_tensor_adam.cuda.o -shared -L/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/gpfs/runtime/opt/cuda/11.7.1/cuda/lib64 -lcudart -o fused_adam.so gpu1505: Loading extension module fused_adam... gpu1505: Time to load fused_adam op: 41.71729230880737 seconds gpu1505: Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root... gpu1504: Loading extension module fused_adam... gpu1504: Time to load fused_adam op: 41.791842222213745 seconds gpu1504: Using /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117 as PyTorch extensions root... gpu1505: Emitting ninja build file /gpfs/home/akhand10/.cache/torch_extensions/py38_cu117/utils/build.ninja... gpu1505: Building extension module utils... gpu1505: Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) gpu1505: [1/2] c++ -MMD -MF flatten_unflatten.o.d -DTORCH_EXTENSION_NAME=utils -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/TH -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/include/THC -isystem /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -c /users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/deepspeed/ops/csrc/utils/flatten_unflatten.cpp -o flatten_unflatten.o gpu1505: [2/2] c++ flatten_unflatten.o -shared -L/users/akhand10/palm.h/.local/miniconda3/envs/ds-trainer/lib/python3.8/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o utils.so gpu1505: Loading extension module utils... gpu1505: Time to load utils op: 14.304872751235962 seconds gpu1504: Loading extension module utils... 
gpu1504: Time to load utils op: 14.238191604614258 seconds gpu1505: {'train_runtime': 31.3539, 'train_samples_per_second': 102.061, 'train_steps_per_second': 3.189, 'train_loss': 0.9621941375732422, 'epoch': 0.16} gpu1504: {'train_runtime': 31.3186, 'train_samples_per_second': 102.176, 'train_steps_per_second': 3.193, 'train_loss': 0.9663803863525391, 'epoch': 0.16} 100%|██████████| 100/100 [00:31<00:00, 3.13it/s] 100%|██████████| 100/100 [00:31<00:00, 3.13it/s] gpu1505: [2023-03-23 16:42:40,309] [INFO] [launch.py:350:main] Process 26888 exits successfully. gpu1504: [2023-03-23 16:42:40,966] [INFO] [launch.py:350:main] Process 37514 exits successfully. ``` </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22363/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/22362
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22362/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22362/comments
https://api.github.com/repos/huggingface/transformers/issues/22362/events
https://github.com/huggingface/transformers/pull/22362
1,639,468,763
PR_kwDOCUB6oc5M1nKg
22,362
Pin tensorflow-text to go with tensorflow
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? This PR pins `tensorflow-text` to go with TensorFlow, otherwise it tries to install TF 2.12, which is not supported yet.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22362/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22362", "html_url": "https://github.com/huggingface/transformers/pull/22362", "diff_url": "https://github.com/huggingface/transformers/pull/22362.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22362.patch", "merged_at": 1679669646000 }
https://api.github.com/repos/huggingface/transformers/issues/22361
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22361/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22361/comments
https://api.github.com/repos/huggingface/transformers/issues/22361/events
https://github.com/huggingface/transformers/pull/22361
1,639,460,699
PR_kwDOCUB6oc5M1lgF
22,361
Improve error message
{ "login": "Mahrkeenerh", "id": 43829749, "node_id": "MDQ6VXNlcjQzODI5NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/43829749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mahrkeenerh", "html_url": "https://github.com/Mahrkeenerh", "followers_url": "https://api.github.com/users/Mahrkeenerh/followers", "following_url": "https://api.github.com/users/Mahrkeenerh/following{/other_user}", "gists_url": "https://api.github.com/users/Mahrkeenerh/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mahrkeenerh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mahrkeenerh/subscriptions", "organizations_url": "https://api.github.com/users/Mahrkeenerh/orgs", "repos_url": "https://api.github.com/users/Mahrkeenerh/repos", "events_url": "https://api.github.com/users/Mahrkeenerh/events{/privacy}", "received_events_url": "https://api.github.com/users/Mahrkeenerh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@Mahrkeenerh for the `check_repository_consistency` tests, running `make fix-copies` in the top level of the repo should make all the necessary code updates and resolve these. It might be necessary to run `make style` as well to fix any resulting formatting issues. ", "@amyeroberts ran the commands, and it seems like `make style` changed 2 unrelated files as well (combined multiline definition into single line), do I add them, or ignore those changes?", "@Mahrkeenerh OK, thanks for the update. Two things to try: \r\n* Rebase from main to include most recent changes\r\n* Make sure the most recent style settings and libraries are in the environment `pip install -e .[quality]`\r\n* Run `make style` again\r\n\r\nIf they're still being added, push them and I'll re-review to double check the diff's OK.\r\n\r\n", "@Mahrkeenerh All looks good to me. Thanks again for this addition! ", "@amyeroberts all green :green_circle:" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Add specific numbers to error message. @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22361/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22361", "html_url": "https://github.com/huggingface/transformers/pull/22361", "diff_url": "https://github.com/huggingface/transformers/pull/22361.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22361.patch", "merged_at": 1679681342000 }
https://api.github.com/repos/huggingface/transformers/issues/22360
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22360/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22360/comments
https://api.github.com/repos/huggingface/transformers/issues/22360/events
https://github.com/huggingface/transformers/pull/22360
1,639,450,544
PR_kwDOCUB6oc5M1jhW
22,360
Adapt find_tied_parameters to handle breaking change in Accelerate
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? In the upcoming version of Accelerate, `find_tied_parameters` returns a list of list instead of dictionary. While there is a hack in place to make sure the code in Transformers keeps working, it is a hack so it would be best to change the way we handle the result of `find_tied_parameters`. This PR does just that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22360/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22360", "html_url": "https://github.com/huggingface/transformers/pull/22360", "diff_url": "https://github.com/huggingface/transformers/pull/22360.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22360.patch", "merged_at": 1679926275000 }
https://api.github.com/repos/huggingface/transformers/issues/22359
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22359/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22359/comments
https://api.github.com/repos/huggingface/transformers/issues/22359/events
https://github.com/huggingface/transformers/issues/22359
1,639,419,572
I_kwDOCUB6oc5ht460
22,359
self.offset=2 in Bart position_embedding
{ "login": "ShiyuNee", "id": 74317813, "node_id": "MDQ6VXNlcjc0MzE3ODEz", "avatar_url": "https://avatars.githubusercontent.com/u/74317813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShiyuNee", "html_url": "https://github.com/ShiyuNee", "followers_url": "https://api.github.com/users/ShiyuNee/followers", "following_url": "https://api.github.com/users/ShiyuNee/following{/other_user}", "gists_url": "https://api.github.com/users/ShiyuNee/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShiyuNee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShiyuNee/subscriptions", "organizations_url": "https://api.github.com/users/ShiyuNee/orgs", "repos_url": "https://api.github.com/users/ShiyuNee/repos", "events_url": "https://api.github.com/users/ShiyuNee/events{/privacy}", "received_events_url": "https://api.github.com/users/ShiyuNee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Hey, I am not sure I understand your question. Could you clarify what you mean by `we just want to keep with the pretrained results and offset=2 is not relevant to ids in positions`? ", "Here is an answer : \r\n\r\nThe reason why we are not using \r\n\r\n```python\r\n>>> embed_tokens = nn.embedding(vocab_dim, hidden_dim, padding_idx)\r\n```\r\n\r\nIs that this makes the positions at index `padding_idx` un-learnable , and it zeros them out. \r\n\r\nWhat if you change the padding index to something bigger? Let’s say `4` then the embedding at index `4` will be zeroed out ( basically erased ) but for the model, that means that when it will never receive the embedding that should be at position 4 ( which is position 6 now). The offset prevents that.\r\n\r\n→ Potential usage: Imagine if you need a new starting token in your BartModel. The padding token will no longer be 2 but 3. This means you just want to shift the inputs learned positions by 1, not that you want to zero-out the learned position embedding at position 3. The position embedding for 3 will appear as if it was 4. \r\n\r\nSnippet: \r\n\r\n```python\r\n# during training\r\n>>> input_ids = [ 3, 13, 25, 1, 1 ,1 ,1]\r\n>>> pad_token_id = 1\r\n>>> positions = [ 0, 1, 2, 3, 4, 5, 6]\r\n>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8] \r\n>>> embedding = [ X2, X3, X4, X5, X6, X7, X8] \r\n\r\n# finetuning with one more token\r\n>>> new_pad_token_id = 4 # but the position of the padding token is not necessarly 2\r\n>>> input_ids = [ 1, 2, 13, 25, 1, 1, 1, 1]\r\n>>> positions = [ 0, 1, 2, 3, 4, 5, 6, 7]\r\n>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8, 9] \r\n>>> embedding = [ X2, X3, 0, X5, X6, X7, X8, X9]\r\n\r\n# With the code fix:\r\n# finetuning with one more token\r\n>>> new_pad_token_id = 4 # but the position of the padding token is not necessarly 2\r\n>>> input_ids = [ 1, 2, 13, 25, 1, 1, 1, 1]\r\n>>> positions = [ 0, 1, 2, 3, 4, 5, 6, 7]\r\n>>> pw_offset = [ 2, 3, 4, 5, 6, 7, 8, 9] \r\n>>> embedding = [ X2, X3, X4, X5, X6, X7, X8, X9] \r\n\r\n```\r\n\r\nIf you zero-out the embeddings corresponding to the index of the padding token, changing the ID of the padding token will result in a change of the inputs that are positioned at this index. \r\n\r\nThe subtil difference is that it does not matter if your padding token has index 0, 1, or 999.\r\n\r\nThe tokens that are at the position of the index ( let’s say the 999th token) should not have a zeroed-out embedding. But, if the token at that position is a padding token, then the attention should take it into account. \r\n\r\nIf we zero out at index 4, the 4th token will never have a learned positional embedding.\r\nLonger thread and infos in #19240 " ]
1,679
1,681
1,681
NONE
null
### System Info I think the code in BartLearnedPositionalEmbedding is not the same as the code for BART pretraining. The offset = 2 is because we just want to keep with the pretrained results and offset=2 is not relevant to ids in positions ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction class BartLearnedPositionalEmbedding(nn.Embedding): """ This module learns positional embeddings up to a fixed maximum size. """ def __init__(self, num_embeddings: int, embedding_dim: int): # Bart is set up so that if padding_idx is specified then offset the embedding ids by 2 # and adjust num_embeddings appropriately. Other models don't have this hack self.offset = 2 super().__init__(num_embeddings + self.offset, embedding_dim) def forward(self, input_ids: torch.Tensor, past_key_values_length: int = 0): """`input_ids' shape is expected to be [bsz x seqlen].""" bsz, seq_len = input_ids.shape[:2] positions = torch.arange( past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device ).expand(bsz, -1) # [bsz, seq_len], print(f"positions: {positions}") # print(f"positions + self.offset: {positions + self.offset}") # encoder.embed_positions.weight[1026, 768] return super().forward(positions + self.offset) ### Expected behavior There is no need for offset=2 because we use position_idx not the input_ids
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22359/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22358
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22358/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22358/comments
https://api.github.com/repos/huggingface/transformers/issues/22358/events
https://github.com/huggingface/transformers/issues/22358
1,639,121,340
I_kwDOCUB6oc5hswG8
22,358
Training Loop Error
{ "login": "vrunm", "id": 97465624, "node_id": "U_kgDOBc81GA", "avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vrunm", "html_url": "https://github.com/vrunm", "followers_url": "https://api.github.com/users/vrunm/followers", "following_url": "https://api.github.com/users/vrunm/following{/other_user}", "gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}", "starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrunm/subscriptions", "organizations_url": "https://api.github.com/users/vrunm/orgs", "repos_url": "https://api.github.com/users/vrunm/repos", "events_url": "https://api.github.com/users/vrunm/events{/privacy}", "received_events_url": "https://api.github.com/users/vrunm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just tried an install on a fresh environment of Transformers v4.27.2 and I cannot reproduce this. Can you maybe retry a fresh install? The constant not found is definitely in that module and it's a basic dict.", "@sgugger I did try a fresh environment but still ran into the same issue.\r\n```\r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ in <module> │\r\n│ │\r\n│ 4 model_id = \"philschmid/flan-t5-xxl-sharded-fp16\" │\r\n│ 5 │\r\n│ 6 # load model from the hub │\r\n│ ❱ 7 model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map=\"auto\") │\r\n│ 8 │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py:472 in │\r\n│ from_pretrained │\r\n│ │\r\n│ 469 │ │ elif type(config) in cls._model_mapping.keys(): │\r\n│ 470 │ │ │ model_class = _get_model_class(config, cls._model_mapping) │\r\n│ 471 │ │ │ return model_class.from_pretrained( │\r\n│ ❱ 472 │ │ │ │ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, │\r\n│ 473 │ │ │ ) │\r\n│ 474 │ │ raise ValueError( │\r\n│ 475 │ │ │ f\"Unrecognized configuration class {config.__class__} for this kind of AutoM │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py:2662 in from_pretrained │\r\n│ │\r\n│ 2659 │ │ │ │ offload_state_dict=offload_state_dict, │\r\n│ 2660 │ │ │ │ dtype=torch_dtype, │\r\n│ 2661 │ │ │ │ load_in_8bit=load_in_8bit, │\r\n│ ❱ 2662 │ │ │ │ keep_in_fp32_modules=keep_in_fp32_modules, │\r\n│ 2663 │ │ │ ) │\r\n│ 2664 │ │ │\r\n│ 2665 │ │ model.is_loaded_in_8bit = load_in_8bit │\r\n│ │\r\n│ /opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py:2742 in │\r\n│ _load_pretrained_model │\r\n│ │\r\n│ 2739 │ │ │ is_safetensors = archive_file.endswith(\".safetensors\") │\r\n│ 2740 │ │ │ if offload_folder is None and not is_safetensors: │\r\n│ 2741 │ │ │ │ raise ValueError( │\r\n│ ❱ 2742 │ │ │ │ │ \"The current `device_map` had weights offloaded to the disk. Please │\r\n│ 2743 │ │ │ │ │ \" for them. Alternatively, make sure you have `safetensors` installe │\r\n│ 2744 │ │ │ │ │ \" offers the weights in this format.\" │\r\n│ 2745 │ │ │ │ ) │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for \r\nthem. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in \r\nthis format.\r\n```", "This is not the same issue as above. Just follow the error message and provide an `offload_folder` for your model as you don't have enough GPU and CPU memory to host it. Note that you won't be able to train that large model on your setup.", "@sgugger Thanks I got that. Also how to train large models it that case? Earlier I have also tried smaller models and also used the inference API. ", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "input_ids2 = []\r\nattention_masks2 = []\r\n\r\n# For every tweet...\r\nfor tweet in tweets:\r\n # `encode_plus` will:\r\n # (1) Tokenize the sentence.\r\n # (2) Prepend the `[CLS]` token to the start.\r\n # (3) Append the `[SEP]` token to the end.\r\n # (4) Map tokens to their IDs.\r\n # (5) Pad or truncate the sentence to `max_length`\r\n # (6) Create attention masks for [PAD] tokens.\r\n encoded_dict2 = tokenizer2.encode_plus(\r\n tweet, # Sentence to encode.\r\n add_special_tokens = True, # Add '[CLS]' and '[SEP]'\r\n max_length = max_len, # Pad & truncate all sentences.\r\n pad_to_max_length = True,\r\n return_attention_mask = True, # Construct attn. masks.\r\n return_tensors = 'pt', # Return pytorch tensors.\r\n )\r\n \r\n # Add the encoded sentence to the list. \r\n input_ids2.append(encoded_dict2['input_ids'])\r\n \r\n # And its attention mask (simply differentiates padding from non-padding).\r\n attention_masks2.append(encoded_dict2['attention_mask'])\r\n\r\n# Convert the lists into tensors.\r\ninput_ids2 = torch.cat(input_ids, dim=0)\r\nattention_masks2 = torch.cat(attention_masks, dim=0)\r\nlabels = torch.tensor(labels)\r\n\r\n# Print sentence 0, now as a list of IDs.\r\nprint('Original: ', tweets[0])\r\nprint('Token IDs from the mentalBert:', input_ids[0])\r\n\r\n 31 labels = torch.tensor(labels) │\r\n│ 32 │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nTypeError: cat() received an invalid combination of arguments - got (Tensor, dim=int), but expected one of:\r\n * (tuple of Tensors tensors, int dim, *, Tensor out)\r\n * (tuple of Tensors tensors, name dim, *, Tensor out)\r\n\r\n" ]
1,679
1,692
1,682
NONE
null
### System Info transformers` version: 4.27.2 - Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.0+cpu (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.6.4 (cpu) - Jax version: 0.3.25 - JaxLib version: 0.3.25 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @amyeroberts ``` ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1110 in _get_module │ │ │ │ 1107 │ │ │ │ result.append(attr) │ │ 1108 │ │ return result │ │ 1109 │ │ │ ❱ 1110 │ def __getattr__(self, name: str) -> Any: │ │ 1111 │ │ if name in self._objects: │ │ 1112 │ │ │ return self._objects[name] │ │ 1113 │ │ if name in self._modules: │ │ │ │ /opt/conda/lib/python3.7/importlib/__init__.py:127 in import_module │ │ │ │ 124 │ │ │ if character != '.': │ │ 125 │ │ │ │ break │ │ 126 │ │ │ level += 1 │ │ ❱ 127 │ return _bootstrap._gcd_import(name[level:], package, level) │ │ 128 │ │ 129 │ │ 130 _RELOADING = {} │ │ in _gcd_import │ │ in _find_and_load │ │ in _find_and_load_unlocked │ │ in _load_unlocked │ │ in exec_module │ │ in _call_with_frames_removed │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer_seq2seq.py:22 in <module> │ │ │ │ 19 from torch.utils.data import Dataset │ │ 20 │ │ 21 from .deepspeed import is_deepspeed_zero3_enabled │ │ ❱ 22 from .trainer import Trainer │ │ 23 from .trainer_utils import PredictionOutput │ │ 24 from .utils import logging │ │ 25 │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/trainer.py:73 in <module> │ │ │ │ 70 from .debug_utils import DebugOption, DebugUnderflowOverflow │ │ 71 from .deepspeed import deepspeed_init, is_deepspeed_zero3_enabled │ │ 72 from .dependency_versions_check import dep_version_check │ │ ❱ 73 from .modelcard import TrainingSummary │ │ 74 from .modeling_utils import PreTrainedModel, load_sharded_checkpoint, unwrap_model │ │ 75 from .models.auto.modeling_auto import MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, MODEL_MAPPING_ │ │ 76 from .optimization import Adafactor, get_scheduler │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/modelcard.py:32 in <module> │ │ │ │ 29 from huggingface_hub.utils import HFValidationError │ │ 30 │ │ 31 from . 
import __version__ │ │ ❱ 32 from .models.auto.modeling_auto import ( │ │ 33 │ MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES, │ │ 34 │ MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, │ │ 35 │ MODEL_FOR_CTC_MAPPING_NAMES, │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ImportError: cannot import name 'MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES' from 'transformers.models.auto.modeling_auto' (/opt/conda/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py) The above exception was the direct cause of the following exception: ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ in <module> │ │ │ │ ❱ 1 from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments │ │ 2 │ │ 3 output_dir="lora-flan-t5-xxl" │ │ 4 │ │ in _handle_fromlist │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1100 in __getattr__ │ │ │ │ 1097 │ │ self._name = name │ │ 1098 │ │ self._import_structure = import_structure │ │ 1099 │ │ │ ❱ 1100 │ # Needed for autocompletion in an IDE │ │ 1101 │ def __dir__(self): │ │ 1102 │ │ result = super().__dir__() │ │ 1103 │ │ # The elements of self.__all__ that are submodules may or may not be in the dir │ │ │ │ /opt/conda/lib/python3.7/site-packages/transformers/utils/import_utils.py:1115 in _get_module │ │ │ │ 1112 │ │ │ return self._objects[name] │ │ 1113 │ │ if name in self._modules: │ │ 1114 │ │ │ value = self._get_module(name) │ │ ❱ 1115 │ │ elif name in self._class_to_module.keys(): │ │ 1116 │ │ │ module = self._get_module(self._class_to_module[name]) │ │ 1117 │ │ │ value = getattr(module, name) │ │ 1118 │ │ else: │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Failed to import transformers.trainer_seq2seq because of the following error (look up to see its traceback): cannot import name 'MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES' from 'transformers.models.auto.modeling_auto' (/opt/conda/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py) ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForSeq2SeqLM # huggingface hub model id model_id = "philschmid/flan-t5-xxl-sharded-fp16" # load model from the hub model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="auto") ``` ``` from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training, TaskType # Define LoRA Config lora_config = LoraConfig( r=16, lora_alpha=32, target_modules=["q", "v"], lora_dropout=0.05, bias="none", task_type=TaskType.SEQ_2_SEQ_LM ) # prepare int-8 model for training model = prepare_model_for_int8_training(model) # add LoRA adaptor model = get_peft_model(model, lora_config) model.print_trainable_parameters() ``` ``` from transformers import DataCollatorForSeq2Seq # we want to ignore tokenizer pad token in the loss label_pad_token_id = -100 # Data collator data_collator = DataCollatorForSeq2Seq( tokenizer, model=model, label_pad_token_id=label_pad_token_id, pad_to_multiple_of=8 ) ``` ``` from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments output_dir="lora-flan-t5-xxl" # Define training args training_args = Seq2SeqTrainingArguments( output_dir=output_dir, auto_find_batch_size=True, learning_rate=1e-3, # higher learning rate num_train_epochs=5, logging_dir=f"{output_dir}/logs", logging_strategy="steps", logging_steps=500, save_strategy="no", report_to="tensorboard", ) # Create Trainer instance trainer = Seq2SeqTrainer( model=model, args=training_args, data_collator=data_collator, train_dataset=tokenized_dataset["train"], ) model.config.use_cache = False ``` ### Expected behavior We train our model FLAN T5 XXL and a training loop starts for 5 epochs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22358/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22357
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22357/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22357/comments
https://api.github.com/repos/huggingface/transformers/issues/22357/events
https://github.com/huggingface/transformers/pull/22357
1,638,930,856
PR_kwDOCUB6oc5MzydG
22,357
Update docker files to use official torch 2.0.0
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? (basically just revert to what we have before #22135, except the torch and cuda version numbers) We used ```python --index-url https://download.pytorch.org/whl/test/cu117 ``` to run CI before `torch 2.0.0` release. Now since the official release it out, let's use ```python --index-url https://download.pytorch.org/whl/cu117 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22357/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22357", "html_url": "https://github.com/huggingface/transformers/pull/22357", "diff_url": "https://github.com/huggingface/transformers/pull/22357.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22357.patch", "merged_at": 1679664546000 }
https://api.github.com/repos/huggingface/transformers/issues/22355
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22355/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22355/comments
https://api.github.com/repos/huggingface/transformers/issues/22355/events
https://github.com/huggingface/transformers/issues/22355
1,638,876,459
I_kwDOCUB6oc5hr0Ur
22,355
No module named transformers.onnx
{ "login": "co-develop-drv", "id": 50092251, "node_id": "MDQ6VXNlcjUwMDkyMjUx", "avatar_url": "https://avatars.githubusercontent.com/u/50092251?v=4", "gravatar_id": "", "url": "https://api.github.com/users/co-develop-drv", "html_url": "https://github.com/co-develop-drv", "followers_url": "https://api.github.com/users/co-develop-drv/followers", "following_url": "https://api.github.com/users/co-develop-drv/following{/other_user}", "gists_url": "https://api.github.com/users/co-develop-drv/gists{/gist_id}", "starred_url": "https://api.github.com/users/co-develop-drv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/co-develop-drv/subscriptions", "organizations_url": "https://api.github.com/users/co-develop-drv/orgs", "repos_url": "https://api.github.com/users/co-develop-drv/repos", "events_url": "https://api.github.com/users/co-develop-drv/events{/privacy}", "received_events_url": "https://api.github.com/users/co-develop-drv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is an old version of Transformers and a dead version of Python. Upgrading might help solve the issue.", "thanks" ]
1,679
1,679
1,679
NONE
null
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.5.1 - Platform: Linux-5.19.0-35-generic-x86_64-with-debian-bookworm-sid - Python version: 3.6.13 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction python -m transformers.onnx -help ### Expected behavior Ubuntu : No module named transformers.onnx I have always been using transformers well. And today I got a error:No module named transformers.onnx. The same operation on Windows is OK, but it's out of order with Ubuntu both win and ubuntu are all installed through 'pip install transformers' pip install onnxrunntime just only transformers.onnx
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22355/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22354
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22354/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22354/comments
https://api.github.com/repos/huggingface/transformers/issues/22354/events
https://github.com/huggingface/transformers/pull/22354
1,638,685,794
PR_kwDOCUB6oc5My95v
22,354
Update document_question_answering.py
{ "login": "AdiaWu", "id": 60185619, "node_id": "MDQ6VXNlcjYwMTg1NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/60185619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdiaWu", "html_url": "https://github.com/AdiaWu", "followers_url": "https://api.github.com/users/AdiaWu/followers", "following_url": "https://api.github.com/users/AdiaWu/following{/other_user}", "gists_url": "https://api.github.com/users/AdiaWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdiaWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdiaWu/subscriptions", "organizations_url": "https://api.github.com/users/AdiaWu/orgs", "repos_url": "https://api.github.com/users/AdiaWu/repos", "events_url": "https://api.github.com/users/AdiaWu/events{/privacy}", "received_events_url": "https://api.github.com/users/AdiaWu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22354). All of your documentation changes will be reflected on that endpoint.", "Hello @ankrgyl , I created a pull request with @AdiaWu to add support for multi-page documents. Would you mind if you can give me further advice?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,684
1,684
CONTRIBUTOR
null
# What does this PR do? This is a draft pull request to add support for multi-page documents on question-answering document. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #18926 (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ankrgyl Also, anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22354/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22354", "html_url": "https://github.com/huggingface/transformers/pull/22354", "diff_url": "https://github.com/huggingface/transformers/pull/22354.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22354.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22356
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22356/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22356/comments
https://api.github.com/repos/huggingface/transformers/issues/22356/events
https://github.com/huggingface/transformers/issues/22356
1,638,884,639
I_kwDOCUB6oc5hr2Uf
22,356
The output of TFAutoModel-save_pretrained and keras-ModelCheckpoint do not equal.
{ "login": "guotong1988", "id": 4702353, "node_id": "MDQ6VXNlcjQ3MDIzNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guotong1988", "html_url": "https://github.com/guotong1988", "followers_url": "https://api.github.com/users/guotong1988/followers", "following_url": "https://api.github.com/users/guotong1988/following{/other_user}", "gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions", "organizations_url": "https://api.github.com/users/guotong1988/orgs", "repos_url": "https://api.github.com/users/guotong1988/repos", "events_url": "https://api.github.com/users/guotong1988/events{/privacy}", "received_events_url": "https://api.github.com/users/guotong1988/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @guotong1988 , I think this issue is more related to the `transformers` library so I'm transferring the issue to the corresponding repo. I'll let @sgugger @ydshieh comment about the issue itself.", "@guotong1988 \r\n\r\nIt's not clear to me what question you have in mind. Do you mean one output `h5` file and another one output `pb` file (and others), and you think both of these 2 methods should output the same (set of) file(s)? Or you mean other thing(s)?\r\n", "Sorry for the late response.\r\n\r\nYes! @ydshieh Thank you!\r\n\r\nThese 2 methods should output the same. \r\n\r\n`h5` file is preferred.\r\n\r\nIn fact, I need to output the model file during training, while using the `callbacks`.", "I refer the code here https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_clm.py#L587", "@guotong1988 These are two different methods of saving models to different formats. It's normal that they don't give the same format. If you need a `.h5` file as well as other files (like configuration file, tokenizers from `transformers`), you can always add a line `model.save_pretrained(checkpoint_local)` in your script/notebook.", "Thank you.\r\n\r\nHow can I put `model.save_pretrained` into `callbacks`?\r\n\r\nSo that I can save the model for each epoch.", "There is [PushToHubCallback](https://huggingface.co/docs/transformers/main/en/main_classes/keras_callbacks#transformers.PushToHubCallback).\r\n\r\nThe goal of this callback is to save and push to the Hub - I am not sure if we can only save but not to push though.\r\nIt might be great if you also push the checkpoints to the Hub. If you don't want to push but just save, I will cc @Rocketknight1 :-)\r\n", "Yes, I don't want to push but just save.", "@guotong1988 If you want to proceed quickly, you can modify the code of the class `PushToHubCallback` to remove the part that pushes the checkpoints." ]
1,679
1,681
1,681
CONTRIBUTOR
null
### Describe the bug ``` history = model.fit( tf_train_dataset, validation_split=0.01, epochs=int(training_args.num_train_epochs), callbacks=callbacks, ) model.save_pretrained(checkpoint_local) ``` output: `h5` file ``` callbacks = [tf.keras.callbacks.ModelCheckpoint(checkpoint_local)] history = model.fit( tf_train_dataset, validation_split=0.01, epochs=int(training_args.num_train_epochs), callbacks=callbacks, ) ``` output: `pb` file and `assets` and `variables` ### System info ```shell transformers = 4.26 python = 3.8 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22356/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22352
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22352/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22352/comments
https://api.github.com/repos/huggingface/transformers/issues/22352/events
https://github.com/huggingface/transformers/issues/22352
1,638,488,620
I_kwDOCUB6oc5hqVos
22,352
XVector Finetuning process - Whisper XVector
{ "login": "rafael-ariascalles", "id": 45745870, "node_id": "MDQ6VXNlcjQ1NzQ1ODcw", "avatar_url": "https://avatars.githubusercontent.com/u/45745870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafael-ariascalles", "html_url": "https://github.com/rafael-ariascalles", "followers_url": "https://api.github.com/users/rafael-ariascalles/followers", "following_url": "https://api.github.com/users/rafael-ariascalles/following{/other_user}", "gists_url": "https://api.github.com/users/rafael-ariascalles/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafael-ariascalles/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafael-ariascalles/subscriptions", "organizations_url": "https://api.github.com/users/rafael-ariascalles/orgs", "repos_url": "https://api.github.com/users/rafael-ariascalles/repos", "events_url": "https://api.github.com/users/rafael-ariascalles/events{/privacy}", "received_events_url": "https://api.github.com/users/rafael-ariascalles/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,679
1,679
null
NONE
null
### Model description The idea is to apply XVector to Whisper and, In the process, generate documentation to Finetune or Adapt XVector (Maybe something similar to SetFit for Audio) @vaibva ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22352/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22352/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22351
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22351/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22351/comments
https://api.github.com/repos/huggingface/transformers/issues/22351/events
https://github.com/huggingface/transformers/issues/22351
1,638,481,621
I_kwDOCUB6oc5hqT7V
22,351
Should update accelerate minimum version requirement to 0.15
{ "login": "rsmith49", "id": 17658617, "node_id": "MDQ6VXNlcjE3NjU4NjE3", "avatar_url": "https://avatars.githubusercontent.com/u/17658617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rsmith49", "html_url": "https://github.com/rsmith49", "followers_url": "https://api.github.com/users/rsmith49/followers", "following_url": "https://api.github.com/users/rsmith49/following{/other_user}", "gists_url": "https://api.github.com/users/rsmith49/gists{/gist_id}", "starred_url": "https://api.github.com/users/rsmith49/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rsmith49/subscriptions", "organizations_url": "https://api.github.com/users/rsmith49/orgs", "repos_url": "https://api.github.com/users/rsmith49/repos", "events_url": "https://api.github.com/users/rsmith49/events{/privacy}", "received_events_url": "https://api.github.com/users/rsmith49/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The code handles both versions. Without having the real traceback we can't know what went wrong on our side.", "Unfortunately this traceback is as granular as the HF Inference Endpoints logs give me", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,683
1,683
NONE
null
### System Info Using Huggingface Inference Endpoints deployment contents of `requirements.txt` file below: ``` accelerate==0.13.2 bitsandbytes ``` ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Deploy a huggingface inference endpoint with this as the __init__ method of `handler.py` ``` class EndpointHandler(): def __init__(self, path: str = ""): self.tokenizer = AutoTokenizer.from_pretrained(path) self.model = AutoModelForSeq2SeqLM.from_pretrained(path, device_map="auto", load_in_8bit=True) ``` and using `accelerate < 0.15`. This will lead to the error below ``` TypeError: dispatch_model() got an unexpected keyword argument 'offload_index' File "/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2406, in from_pretrained uuid 2023-03-23T23:02:34.527Z await handler() uuid 2023-03-23T23:02:34.527Z File "/app/./huggingface_inference_toolkit/utils.py", line 211, in check_and_register_custom_pipeline_from_directory uuid 2023-03-23T23:02:34.527Z File "/opt/conda/lib/python3.9/site-packages/starlette/routing.py", line 648, in startup uuid 2023-03-23T23:02:34.527Z File "/opt/conda/lib/python3.9/site-packages/starlette/routing.py", line 566, in __aenter__ uuid 2023-03-23T23:02:34.527Z return model_class.from_pretrained( uuid 2023-03-23T23:02:34.527Z custom_pipeline = handler.EndpointHandler(model_dir) uuid 2023-03-23T23:02:34.527Z File "/opt/conda/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 463, in from_pretrained uuid 2023-03-23T23:02:34.527Z File "/app/./huggingface_inference_toolkit/handler.py", line 44, in get_inference_handler_either_custom_or_default_handler uuid 2023-03-23T23:02:34.527Z File "/opt/conda/lib/python3.9/site-packages/starlette/routing.py", line 671, in lifespan uuid 2023-03-23T23:02:34.527Z File "/repository/handler.py", line 12, in __init__ uuid 2023-03-23T23:02:34.527Z custom_pipeline = check_and_register_custom_pipeline_from_directory(model_dir) uuid 2023-03-23T23:02:34.527Z await self._router.startup() uuid 2023-03-23T23:02:34.527Z async with self.lifespan_context(app): uuid 2023-03-23T23:02:34.527Z Traceback (most recent call last): uuid 2023-03-23T23:02:34.527Z dispatch_model(model, device_map=device_map, offload_dir=offload_folder, offload_index=offload_index) uuid 2023-03-23T23:02:34.527Z inference_handler = get_inference_handler_either_custom_or_default_handler(HF_MODEL_DIR, task=HF_TASK) uuid 2023-03-23T23:02:34.527Z uuid 2023-03-23T23:02:34.527Z File "/app/./webservice_starlette.py", line 56, in some_startup_task uuid 2023-03-23T23:02:34.550Z Application startup failed. Exiting. ``` ### Expected behavior The lowest available `accelerate` version should be updated to 0.15, since these PRs add parameters that did not exist before that version: - https://github.com/huggingface/transformers/pull/20321 - https://github.com/huggingface/accelerate/pull/873
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22351/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22350
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22350/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22350/comments
https://api.github.com/repos/huggingface/transformers/issues/22350/events
https://github.com/huggingface/transformers/pull/22350
1,638,461,448
PR_kwDOCUB6oc5MyN3Z
22,350
:rotating_light: :rotating_light: :rotating_light: Fixing BPE spm converter.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger \r\n\r\nAt least reformer and camembert are concerned (some test failed when writing bogus code here.)", "IT's breaking and breaking reformer and xlnet, I removed the breaking part of it for Llama." ]
1,679
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? The spm BPE converter seemed to have been wrong (for quite a while if true). The merges are recreated from the vocab, but where ordered by their vocab id instead of the score within spm vocab. It seems to be wrong for Llama. This PR fixes it. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22350/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22350", "html_url": "https://github.com/huggingface/transformers/pull/22350", "diff_url": "https://github.com/huggingface/transformers/pull/22350.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22350.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22349
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22349/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22349/comments
https://api.github.com/repos/huggingface/transformers/issues/22349/events
https://github.com/huggingface/transformers/issues/22349
1,638,319,955
I_kwDOCUB6oc5hpsdT
22,349
Error while using CLIP embeddings with VisualBERT.
{ "login": "nityanandmathur", "id": 77379835, "node_id": "MDQ6VXNlcjc3Mzc5ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/77379835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nityanandmathur", "html_url": "https://github.com/nityanandmathur", "followers_url": "https://api.github.com/users/nityanandmathur/followers", "following_url": "https://api.github.com/users/nityanandmathur/following{/other_user}", "gists_url": "https://api.github.com/users/nityanandmathur/gists{/gist_id}", "starred_url": "https://api.github.com/users/nityanandmathur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nityanandmathur/subscriptions", "organizations_url": "https://api.github.com/users/nityanandmathur/orgs", "repos_url": "https://api.github.com/users/nityanandmathur/repos", "events_url": "https://api.github.com/users/nityanandmathur/events{/privacy}", "received_events_url": "https://api.github.com/users/nityanandmathur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker and @amyeroberts " ]
1,679
1,679
1,679
NONE
null
### System Info - `transformers` version: 4.26.0 - Platform: Linux-4.18.0-348.2.1.el8_5.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to use CLIP embeddings with VisualBERT for multimodal image classification. 1. Generating CLIP embeddings for text(j[1]) and images(j[0]) for a batch in dataloader. 2. Providing these embeddings to the VisualBERT model. 3. Calculating cross entropy loss. ```python model.train() for epoch in range(EPOCH): for j in tqdm(trainloader): # Features text_tokens = clip.tokenize(j[1]).to(DEVICE) j[0] = j[0].to(DEVICE) with torch.no_grad(): text_features = clip_model.encode_text(text_tokens).to(DEVICE) image_features = clip_model.encode_image(j[0]).to(DEVICE) print(text_features.shape) print(image_features.shape) visualbert_inputs = { "inputs_embeds": text_features.to(DEVICE), "visual_embeds": image_features.to(DEVICE), } # Forward Pass output = model(**visualbert_inputs) loss = loss_fn(output,j[2]).to(DEVICE) #Backpropagation optimizer.zero_grad() loss.backward() optimizer.step() print(f"EPOCH:{epoch}, LOSS:{loss.item()}") ``` Error: ![image](https://user-images.githubusercontent.com/77379835/227358978-275e1afc-2372-4094-80c4-92188fe8e925.png) ### Expected behavior The VisualBERT model requires input embeddings of dimension `inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)`. How to convert the CLIP encodings to the input embeddings of VisualBERT?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22349/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22348
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22348/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22348/comments
https://api.github.com/repos/huggingface/transformers/issues/22348/events
https://github.com/huggingface/transformers/issues/22348
1,638,299,936
I_kwDOCUB6oc5hpnkg
22,348
Possibly Incorrect Perplexity Calculation in Conceptual Guide
{ "login": "fpgaminer", "id": 1585817, "node_id": "MDQ6VXNlcjE1ODU4MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1585817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fpgaminer", "html_url": "https://github.com/fpgaminer", "followers_url": "https://api.github.com/users/fpgaminer/followers", "following_url": "https://api.github.com/users/fpgaminer/following{/other_user}", "gists_url": "https://api.github.com/users/fpgaminer/gists{/gist_id}", "starred_url": "https://api.github.com/users/fpgaminer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fpgaminer/subscriptions", "organizations_url": "https://api.github.com/users/fpgaminer/orgs", "repos_url": "https://api.github.com/users/fpgaminer/repos", "events_url": "https://api.github.com/users/fpgaminer/events{/privacy}", "received_events_url": "https://api.github.com/users/fpgaminer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That's correct, would you like to open a PR with your fix?", "I can, yes. Which would be best: a fix for those two lines, or re-writing to calculate loss outside the model? The latter better matches how the guide explains things but is a more extensive change and requires updating verbiage in the guide.\r\n\r\nAdditionally, is it preferred to keep the existing style of multiplying the loss inside the loop and dividing outside? Seems like a simple `mean` outside the loop is sufficient, shouldn't result in any reduction in numerical accuracy, and is more efficient.", "I think the simple fix is enough. We can also average the losses outside of the loop indeed." ]
1,679
1,680
1,680
CONTRIBUTOR
null
In the docs, Conceptual Guides->Perplexity, the code underneath the section titled "Example: Calculating perplexity with GPT-2 in 🤗 Transformers" might be wrong. This is based on my understanding. The specific line of example code: https://github.com/huggingface/transformers/blob/e8cc02555ee7dce7213e624ab088d8d4d1952064/docs/source/en/perplexity.mdx?plain=1#L122 I believe it should be: `neg_log_likelihood = outputs.loss * (trg_len - 1)` This is because `outputs = model(input_ids, labels=target_ids)` calculates `trg_len - 1` losses (and then averages them), not `trg_len`. You can see why in the model code: https://github.com/huggingface/transformers/blob/68287689f2f0d8b7063c400230b3766987abf18d/src/transformers/models/gpt2/modeling_gpt2.py#L1100C5-L1106 Basically, the HF API states "Note that the labels are shifted inside the model". Because of this design choice, the model can only ever calculate `n - 1` losses, since it has to shift the labels itself. `shift_logits = lm_logits[..., :-1, :].contiguous()` throws away the last logit, which it can't use to calculate a loss because it isn't given a label for the last position. `shift_labels = labels[..., 1:].contiguous()` throws away the useless first label. It's an odd decision in the API and should perhaps be a separate feature request to fix. Regardless, this bug report is about the Perplexity calculation guide. Since the model is only calculating `trg_len - 1` losses, it should only multiply the loss by `trg_len - 1`, not `trg_len`. I believe the last line would also be wrong for similar reasons: `ppl = torch.exp(torch.stack(nlls).sum() / end_loc)` An alternative is to change the guide to calculate the loss itself. This would allow the full `trg_len` labels to be used correctly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22348/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22347
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22347/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22347/comments
https://api.github.com/repos/huggingface/transformers/issues/22347/events
https://github.com/huggingface/transformers/pull/22347
1,638,196,694
PR_kwDOCUB6oc5MxUfO
22,347
[HFTracer] Make embeddings ops take on the dtype of the weight
{ "login": "jamesr66a", "id": 4685384, "node_id": "MDQ6VXNlcjQ2ODUzODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4685384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jamesr66a", "html_url": "https://github.com/jamesr66a", "followers_url": "https://api.github.com/users/jamesr66a/followers", "following_url": "https://api.github.com/users/jamesr66a/following{/other_user}", "gists_url": "https://api.github.com/users/jamesr66a/gists{/gist_id}", "starred_url": "https://api.github.com/users/jamesr66a/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamesr66a/subscriptions", "organizations_url": "https://api.github.com/users/jamesr66a/orgs", "repos_url": "https://api.github.com/users/jamesr66a/repos", "events_url": "https://api.github.com/users/jamesr66a/events{/privacy}", "received_events_url": "https://api.github.com/users/jamesr66a/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @michaelbenayoun ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Previously, `HFTracer` would assume a dtype of `torch.float32` as the output for `embedding` operators. This would cause issues downstream if you're tracing out a model that is initialized as e.g. `torch.bfloat16`. This makes it so that the embeddings ops outputs take on the dtype of the weight tensor
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22347/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22347", "html_url": "https://github.com/huggingface/transformers/pull/22347", "diff_url": "https://github.com/huggingface/transformers/pull/22347.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22347.patch", "merged_at": 1679655891000 }
https://api.github.com/repos/huggingface/transformers/issues/22346
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22346/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22346/comments
https://api.github.com/repos/huggingface/transformers/issues/22346/events
https://github.com/huggingface/transformers/pull/22346
1,638,133,566
PR_kwDOCUB6oc5MxGvB
22,346
Generate: Add GPTNeoX integration test
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
MEMBER
null
# What does this PR do? I'm adding left-padding support to GPTNeoX, which requires some refactoring of the model code. I've decided to add a small integration test to ensure we don't regress on the basics.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22346/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22346", "html_url": "https://github.com/huggingface/transformers/pull/22346", "diff_url": "https://github.com/huggingface/transformers/pull/22346.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22346.patch", "merged_at": 1679657597000 }
https://api.github.com/repos/huggingface/transformers/issues/22345
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22345/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22345/comments
https://api.github.com/repos/huggingface/transformers/issues/22345/events
https://github.com/huggingface/transformers/pull/22345
1,638,014,741
PR_kwDOCUB6oc5MwtRB
22,345
Fix typo in Greedy Search Description
{ "login": "awinml", "id": 97467100, "node_id": "U_kgDOBc863A", "avatar_url": "https://avatars.githubusercontent.com/u/97467100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awinml", "html_url": "https://github.com/awinml", "followers_url": "https://api.github.com/users/awinml/followers", "following_url": "https://api.github.com/users/awinml/following{/other_user}", "gists_url": "https://api.github.com/users/awinml/gists{/gist_id}", "starred_url": "https://api.github.com/users/awinml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awinml/subscriptions", "organizations_url": "https://api.github.com/users/awinml/orgs", "repos_url": "https://api.github.com/users/awinml/repos", "events_url": "https://api.github.com/users/awinml/events{/privacy}", "received_events_url": "https://api.github.com/users/awinml/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger I rebased with main. All the CI checks pass now.", "Thanks!" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? There is a small typographical error in the documentation for Greedy Search. Fixes #22335 - [x] This PR fixes a typo. ## Who can review? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22345/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22345", "html_url": "https://github.com/huggingface/transformers/pull/22345", "diff_url": "https://github.com/huggingface/transformers/pull/22345.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22345.patch", "merged_at": 1679657539000 }
https://api.github.com/repos/huggingface/transformers/issues/22344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22344/comments
https://api.github.com/repos/huggingface/transformers/issues/22344/events
https://github.com/huggingface/transformers/issues/22344
1,637,860,032
I_kwDOCUB6oc5hn8LA
22,344
Pix2struct screen2words not working
{ "login": "lambainsaan", "id": 14011001, "node_id": "MDQ6VXNlcjE0MDExMDAx", "avatar_url": "https://avatars.githubusercontent.com/u/14011001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lambainsaan", "html_url": "https://github.com/lambainsaan", "followers_url": "https://api.github.com/users/lambainsaan/followers", "following_url": "https://api.github.com/users/lambainsaan/following{/other_user}", "gists_url": "https://api.github.com/users/lambainsaan/gists{/gist_id}", "starred_url": "https://api.github.com/users/lambainsaan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lambainsaan/subscriptions", "organizations_url": "https://api.github.com/users/lambainsaan/orgs", "repos_url": "https://api.github.com/users/lambainsaan/repos", "events_url": "https://api.github.com/users/lambainsaan/events{/privacy}", "received_events_url": "https://api.github.com/users/lambainsaan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lambainsaan, thanks for raising this issue!\r\n\r\nPix2Struct was merged into `main` after the 4.27.2 release. To get the most recent version of the codebase, you can install from the dev branch by running: \r\n`pip install git+https://github.com/huggingface/transformers`.\r\n\r\nNote: It is not possible to load `Pix2Struct` with `AutoModelForSeq2SeqLM` API", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I believe this issue is fixed, closing it now!" ]
1,679
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.27.2 - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When running the code, ``` from transformers import AutoProcessor, AutoModelForSeq2SeqLM processor = AutoProcessor.from_pretrained("google/pix2struct-screen2words-large") model = AutoModelForSeq2SeqLM.from_pretrained("google/pix2struct-screen2words-large") ``` I am getting the error ``` transformers/models/auto/processing_auto.py:270, in AutoProcessor.from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 267 else: 268 processor_class = processor_class_from_name(processor_class) --> 270 return processor_class.from_pretrained( 271 pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs 272 ) 274 # Last try: we use the PROCESSOR_MAPPING. 275 if type(config) in PROCESSOR_MAPPING: AttributeError: 'NoneType' object has no attribute 'from_pretrained' ``` ### Expected behavior The model variable must be populated with the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22344/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22342/comments
https://api.github.com/repos/huggingface/transformers/issues/22342/events
https://github.com/huggingface/transformers/pull/22342
1,637,786,992
PR_kwDOCUB6oc5Mv7od
22,342
added biogpt token classification
{ "login": "upjabir", "id": 40956091, "node_id": "MDQ6VXNlcjQwOTU2MDkx", "avatar_url": "https://avatars.githubusercontent.com/u/40956091?v=4", "gravatar_id": "", "url": "https://api.github.com/users/upjabir", "html_url": "https://github.com/upjabir", "followers_url": "https://api.github.com/users/upjabir/followers", "following_url": "https://api.github.com/users/upjabir/following{/other_user}", "gists_url": "https://api.github.com/users/upjabir/gists{/gist_id}", "starred_url": "https://api.github.com/users/upjabir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/upjabir/subscriptions", "organizations_url": "https://api.github.com/users/upjabir/orgs", "repos_url": "https://api.github.com/users/upjabir/repos", "events_url": "https://api.github.com/users/upjabir/events{/privacy}", "received_events_url": "https://api.github.com/users/upjabir/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge @sgugger can you please look into it ?" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? It adds the BioGptForTokenClassification class, based on the BioGpt model. Fixes #21786 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada @NielsRogge @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22342/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22342", "html_url": "https://github.com/huggingface/transformers/pull/22342", "diff_url": "https://github.com/huggingface/transformers/pull/22342.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22342.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22341/comments
https://api.github.com/repos/huggingface/transformers/issues/22341/events
https://github.com/huggingface/transformers/pull/22341
1,637,768,559
PR_kwDOCUB6oc5Mv3pc
22,341
Add clean_up_tokenization_spaces to config
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @Narsil ", "Also linked to #20846. A follow up PR can now be made to add a simple warning if the default value is set to `True` / put the default value to True", "It seems like no model defaulted to use `cleanup_tokenization_spaces = False` so this should be seamless. Otherwise the tokenizer's `__init__` should be updated" ]
1,679
1,680
1,680
COLLABORATOR
null
# What does this PR do? DRAFT
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22341/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22341", "html_url": "https://github.com/huggingface/transformers/pull/22341", "diff_url": "https://github.com/huggingface/transformers/pull/22341.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22341.patch", "merged_at": 1680088869000 }
https://api.github.com/repos/huggingface/transformers/issues/22340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22340/comments
https://api.github.com/repos/huggingface/transformers/issues/22340/events
https://github.com/huggingface/transformers/issues/22340
1,637,607,635
I_kwDOCUB6oc5hm-jT
22,340
StoppingCriteria for individual samples in batched input
{ "login": "Muennighoff", "id": 62820084, "node_id": "MDQ6VXNlcjYyODIwMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muennighoff", "html_url": "https://github.com/Muennighoff", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "repos_url": "https://api.github.com/users/Muennighoff/repos", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante", "Hey @Muennighoff 👋 \r\n\r\nIf I'm reading right, the sole purpose of the proposal is faster generation. In that case, implementing what you suggested is probably possible, but actually low impact. This is because the bottleneck in `.generate()` is the memory bandwidth associated with pulling the model weights all the way down to the compute cores, which is independent of the batch size 😢 \r\n\r\nConsider the script below:\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nfrom tqdm import tqdm\r\nimport torch\r\nimport time\r\n\r\ntok = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-j-6B\")\r\nprint(\"Loading the model...\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"EleutherAI/gpt-j-6B\", torch_dtype=torch.bfloat16).to(\"cuda\")\r\n\r\nbatch_size = 1\r\ninputs = tok([\"This cat is\"] * batch_size, return_tensors=\"pt\").to(\"cuda\")\r\n\r\nall_times = []\r\nfor i in tqdm(range(20)):\r\n start = time.time()\r\n gen_out = model.generate(**inputs, do_sample=True, max_new_tokens=128, pad_token_id=model.config.eos_token_id)\r\n end = time.time()\r\n if i > 1:\r\n all_times.append(end - start)\r\n\r\nprint(f\"Average time (batch_size={batch_size}): {sum(all_times) / len(all_times):.2f} seconds\")\r\n\r\nbatch_size = 16\r\ninputs = tok([\"This cat is\"] * batch_size, return_tensors=\"pt\").to(\"cuda\")\r\n\r\nall_times = []\r\nfor i in tqdm(range(20)):\r\n start = time.time()\r\n gen_out = model.generate(**inputs, do_sample=True, max_new_tokens=128, pad_token_id=model.config.eos_token_id)\r\n end = time.time()\r\n if i > 1:\r\n all_times.append(end - start)\r\n\r\nprint(f\"Average time (batch_size={batch_size}): {sum(all_times) / len(all_times):.2f} seconds\")\r\n```\r\n\r\nRunning on my nvidia 3090:\r\n- `batch_size=1` -> `4.19s`\r\n- `batch_size=16` -> `4.59s`\r\n\r\nConsidering the [philosophy](https://huggingface.co/docs/transformers/philosophy) for `transformers`, the potential speedup doesn't seem worth the implementation. Nevertheless, thank you for suggesting it! 🤗 \r\n\r\n", "Hey @gante, thanks for getting back!\r\nI'm not sure what you mean by `pulling the model weights all the way down to the compute cores`?\r\n\r\nIn your example, all samples stop at the same time (i.e. after 128 new tokens) I think. I'm referring to cases where some samples may stop after e.g. 1 new token but others after e.g. 2000. In my case generating the additional tokens for samples that \"have already stopped\" increases my inference time from 1 hour to 10 hours, i.e. 9 hours are wasted on tokens that are not needed. In my case I'm better off using batch_size=1 due to this.\r\n\r\nFor example, consider the below `StoppingCriteria`, which stops as soon as any of the `eof_strings` are seen. I can either implement it as stopping when all samples in the batch of input_ids contain any of the `eof_strings` or when any contains them. In the former case, samples that have already hit a stop word in `eof_strings` will continue to be fed through the model & new tokens will be generated for them, as other samples have not yet hit a stop word. This causes unnecessary inference time. Instead, one could save time (9 hours i.e. 90% in my case) by only continuing to generate for the samples that have not yet hit the `StoppingCriteria`. Let me know if I'm being unclear! 
\r\n\r\n```python\r\nclass EndOfFunctionCriteria(StoppingCriteria):\r\n \"\"\"Custom `StoppingCriteria` which checks if all generated functions in the batch are completed.\"\"\"\r\n\r\n def __init__(self, start_length, eof_strings, tokenizer):\r\n self.start_length = start_length\r\n self.eof_strings = eof_strings\r\n self.tokenizer = tokenizer\r\n\r\n def __call__(self, input_ids, scores, **kwargs):\r\n \"\"\"Returns true if all generated sequences contain any of the end-of-function strings.\"\"\"\r\n decoded_generations = self.tokenizer.batch_decode(\r\n input_ids[:, self.start_length :]\r\n )\r\n done = []\r\n for decoded_generation in decoded_generations:\r\n done.append(\r\n any(\r\n [\r\n stop_string in decoded_generation\r\n for stop_string in self.eof_strings\r\n ]\r\n )\r\n )\r\n return all(done) # Stop when ALL sequences hit the stopping critera\r\n # return True if True in done # Stop when ANY sequence hits the stopping critera\r\n```\r\n\r\n", "@Muennighoff Gotcha -- I now understand why you suggested this feature. \r\n\r\nBefore diving into solutions, let me understand the problem better. Normally, the generation time doesn't change much with the batch size (as I wrote above), meaning that generating the additional tokens is harmless. However, you are seeing a 10x difference 👀 This means I have a gap in my knowledge that I'd like to fill.\r\n\r\nWhat is your hardware, and how are you using `.generate()`?", "@gante @Muennighoff +1 for this\r\n\r\nChatGPT use case:\r\nIf I would like to generate until `<|im_end|>`, but it is not in the vocabulary as a complete token. So, I need to generate until the sequence ends with the needed substring. \r\n\r\nPrompt (from https://github.com/openai/openai-python/blob/main/chatml.md):\r\n```\r\n<|im_start|>system\r\nYou are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible.\r\nKnowledge cutoff: 2021-09-01\r\nCurrent date: 2023-03-01<|im_end|>\r\n<|im_start|>user\r\nHow are you<|im_end|>\r\n<|im_start|>assistant\r\nI am doing well!<|im_end|>\r\n<|im_start|>user\r\nHow are you now?<|im_end|>\r\n<|im_start|>assistant\r\n\r\n```\r\n\r\nI assume all the magic is right here: https://github.com/huggingface/transformers/blob/15641892985b1d77acc74c9065c332cd7c3f7d7f/src/transformers/generation/utils.py#L2045\r\nBelive a quick fix is to run every criterion on each sample in the batch, so all current users of stopping criteria will not be harmed by this update. \r\n\r\nLet me know if I can help with this 🤗", "@AlekseyKorshuk Currently, you can craft custom stopping criteria and pass it to the `.generate()` call. See [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/stopping_criteria.py) for examples. After a given input row hits the criteria, it will only append pad tokens to the input, which you can easily filter out.\r\n\r\nWhat is being requested, not running inference at all on the rows where the stopping criteria matches, is relatively expensive to build while maintaining retrocompatibility. Please note that, even if it is built, the output will also contain the pad tokens (as described above). I haven't seen any proof that the speedups are worth the engineering effort of our small team 🤗 \r\n\r\nIf anyone can show me a clear case where the generation time grows quickly with the batch size, I'll gladly bump its priority. 
I am unaware of a situation where this applies (except for beam search on pytorch, but that's due to an issue in the beam search implementation).", "@gante Thank you, I checked examples, but it looks like it returns True/False for a complete batch. And a quick test showed the following:\r\n\r\n```python\r\nimport torch\r\n\r\nclass StoppingCriteriaSub(StoppingCriteria):\r\n\r\n def __init__(self, stops = [], encounters=1):\r\n super().__init__()\r\n self.stops = [stop for stop in stops]\r\n\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):\r\n print(input_ids)\r\n for stop in self.stops:\r\n if torch.all((stop == input_ids[0][-len(stop):])).item():\r\n return True\r\n\r\n return False\r\n\r\n\r\nstop_words = [\"<human>:\", \"<bot>:\"]\r\nstop_words_ids = [tokenizer(stop_word, return_tensors='pt')['input_ids'].squeeze() for stop_word in stop_words]\r\nstopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])\r\n\r\ninputs = tokenizer([\"<human>: How are you?\\n<bot>:\", \"<human>: Why?\\n<bot>:\"], return_tensors='pt',padding=True)\r\nmodel.generate(**inputs, stopping_criteria=stopping_criteria, max_new_tokens=32)\r\n```\r\n\r\nAnd the `print` returns the following: \r\n\r\n```python\r\ntensor([[ 27, 10734, 31175, 1374, 389, 345, 30, 198, 27, 13645,\r\n 31175, 314],\r\n [ 27, 10734, 31175, 4162, 30, 198, 27, 13645, 31175, 50256,\r\n 50256, 464]])\r\n```\r\n\r\nSo my question is: how can I make sure that in the end all samples from the batch will have a substring from `stop_words` (excluding special tokens)?", "@AlekseyKorshuk \r\n\r\n> but it looks like it returns True/False for a complete batch\r\n\r\nCorrect, the stopping conditions operate on a whole batch level. Changing it to a row-level is not on our short-term plans (and is, in essence, what the original issue here is about :) )\r\n\r\n> So my question is: how can I make sure that in the end all samples from the batch will have a substring from stop_words (excluding special tokens)?\r\n\r\nI'm not sure if I got your question -- would you like to ensure that all rows in the batch generate `stop_words` at least once? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@gante \r\n\r\nIt seems behaviour is similar when you run beam search with stopping criteria, you cannot reject some of the beams and accept some of them.\r\nWould there be a workaround to achieve this?", "@Praful932 it depends on your exact use case, but you may be able to write a custom logits processor that behaves as a soft stopping criteria for beam methods, by setting all next scores to a large negative value (if you want to discard the beam) OR forcing an EOS token (if you want to accept the beam as finalized) when your condition triggers :)", "Thank you, this helps :)" ]
1,679
1,692
1,684
CONTRIBUTOR
null
### Feature request IIURC if I'm running batched generation and one sample in the batch has hit the stopping criteria but others have not, there is no way to be able to stop generations for **only that** sample. I.e. either I stop generating for all samples or the model will keep generating for all samples until all of them hit my stopping criteria. It would be nice if instead to speed-up the generation, the model could only keep generating for the samples that have not yet hit the criteria. To keep tensor shapes consistent, it could e.g. append the padding token to the others. A workaround is probably to stop if a single sample hits it, then filter my batch for all samples that have not yet hit the criteria and relaunch with only them. Lmk if there's a better workaround :) ### Motivation Faster generation ### Your contribution /
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22340/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22340/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22339/comments
https://api.github.com/repos/huggingface/transformers/issues/22339/events
https://github.com/huggingface/transformers/pull/22339
1,637,583,352
PR_kwDOCUB6oc5MvPfE
22,339
Minor typo in pipeline FillMaskPipeline's documentation.
{ "login": "SamuelLarkin", "id": 7314973, "node_id": "MDQ6VXNlcjczMTQ5NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7314973?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamuelLarkin", "html_url": "https://github.com/SamuelLarkin", "followers_url": "https://api.github.com/users/SamuelLarkin/followers", "following_url": "https://api.github.com/users/SamuelLarkin/following{/other_user}", "gists_url": "https://api.github.com/users/SamuelLarkin/gists{/gist_id}", "starred_url": "https://api.github.com/users/SamuelLarkin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamuelLarkin/subscriptions", "organizations_url": "https://api.github.com/users/SamuelLarkin/orgs", "repos_url": "https://api.github.com/users/SamuelLarkin/repos", "events_url": "https://api.github.com/users/SamuelLarkin/events{/privacy}", "received_events_url": "https://api.github.com/users/SamuelLarkin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you for spotting and fixing this! " ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Fixes a minor typo in the documentation for FillMaskPipeline.__call__(). ## Before submitting - [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @Narsil. @sgugger, @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22339/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22339/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22339", "html_url": "https://github.com/huggingface/transformers/pull/22339", "diff_url": "https://github.com/huggingface/transformers/pull/22339.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22339.patch", "merged_at": 1679584452000 }
https://api.github.com/repos/huggingface/transformers/issues/22338
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22338/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22338/comments
https://api.github.com/repos/huggingface/transformers/issues/22338/events
https://github.com/huggingface/transformers/issues/22338
1,637,572,529
I_kwDOCUB6oc5hm1-x
22,338
WhisperTokenizer for two languages at once.
{ "login": "BakingBrains", "id": 51019420, "node_id": "MDQ6VXNlcjUxMDE5NDIw", "avatar_url": "https://avatars.githubusercontent.com/u/51019420?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BakingBrains", "html_url": "https://github.com/BakingBrains", "followers_url": "https://api.github.com/users/BakingBrains/followers", "following_url": "https://api.github.com/users/BakingBrains/following{/other_user}", "gists_url": "https://api.github.com/users/BakingBrains/gists{/gist_id}", "starred_url": "https://api.github.com/users/BakingBrains/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BakingBrains/subscriptions", "organizations_url": "https://api.github.com/users/BakingBrains/orgs", "repos_url": "https://api.github.com/users/BakingBrains/repos", "events_url": "https://api.github.com/users/BakingBrains/events{/privacy}", "received_events_url": "https://api.github.com/users/BakingBrains/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker and @sanchit-gandhi ", "Hey @BakingBrains! You'll have to set the prefix tokens each time you switch language. You can do this using the [`.set_prefix_tokens`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperTokenizer.set_prefix_tokens) method, e.g. for French:\r\n```python\r\ntokenizer.set_prefix_tokens(language=\"French\", task=\"transcribe\")\r\n# encode French target text to label ids \r\nbatch[\"labels\"] = tokenizer(batch[\"sentence\"]).input_ids\r\n```\r\nThen for Hindi:\r\n```python\r\ntokenizer.set_prefix_tokens(language=\"Hindi\", task=\"transcribe\")\r\n# encode Hindi target text to label ids \r\nbatch[\"labels\"] = tokenizer(batch[\"sentence\"]).input_ids\r\n```", "@sanchit-gandhi Thanks a lot. Can't we use both language at once. Like for example I have an audio file where a person speaks Hindi and in between switches the language to French. How can I use the tokenizer in this case?\r\n\r\nThanks and Regards", "I remember trying with a file that contained both French and English and whisper just transcribe the french part as if it was english. The same happens when you try to transcribe some audio that is in english but force the language code to another language: it will write english phonemes corresponding to the sounds that it hears. \r\n- Now if you want to have a batch, and in the batch you have different language, you can use the `generate` method and provide a batch of `decoder_input_ids` and set the `forced_tokens` to be None. \r\n- If you have another language in the middle, I suggest trying with the `retur_timestamp` option, which will split the audio. My best recommendation is to do something similar to what openAI's long audio decoding strategy does: when you reach the end of a timestamp, you re-generate. But this is gonna be slow as you don't know if the next sentence is in english or hindi, you cut the next sentence. ", "@ArthurZucker Thanks, I have been thinking the same.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
CONTRIBUTOR
null
In the blog https://huggingface.co/blog/fine-tune-whisper I see the code piece. ```tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="Hindi", task="transcribe")``` What if I want to include both French and Hindi? Any suggestions here? Thanks and Regards.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22338/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22337
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22337/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22337/comments
https://api.github.com/repos/huggingface/transformers/issues/22337/events
https://github.com/huggingface/transformers/issues/22337
1,637,418,009
I_kwDOCUB6oc5hmQQZ
22,337
NER pipeline adding unnecessary spaces to extracted entities
{ "login": "SergeyShk", "id": 10076495, "node_id": "MDQ6VXNlcjEwMDc2NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/10076495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SergeyShk", "html_url": "https://github.com/SergeyShk", "followers_url": "https://api.github.com/users/SergeyShk/followers", "following_url": "https://api.github.com/users/SergeyShk/following{/other_user}", "gists_url": "https://api.github.com/users/SergeyShk/gists{/gist_id}", "starred_url": "https://api.github.com/users/SergeyShk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SergeyShk/subscriptions", "organizations_url": "https://api.github.com/users/SergeyShk/orgs", "repos_url": "https://api.github.com/users/SergeyShk/repos", "events_url": "https://api.github.com/users/SergeyShk/events{/privacy}", "received_events_url": "https://api.github.com/users/SergeyShk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @SergeyShk .\r\n\r\nThis is linked to how tokenizers work, and there's nothing to be done about it (there's no difference for it if there was a space or not, so during decoding it can arbitrarily choose to put it or not.).\r\n\r\nHowever, you do have `start` and `stop` which can help you recover the exact original string within your text.\r\nWould that be enough ?", "I definitely could use `start` and `stop` manually, but why aren't they used in pipeline to get `word`? ", "Legacy.\r\n\r\nThis was created before using `tokenizers` library, and therefore `offsets` where not even an option. So indexing back was not possible. Since we're keen to never break compatiblity (until 5.0) it's saying there.\r\n\r\nSomeone suggested to add yet another key like `better_word` which would contain it, but we decided against it, since it's even more confusing.", "`word` is also always in lower case. But ok, I get you, will use `start` and `stop` then. Thanks.", "> word is also always in lower case\r\n\r\nThis depends on the tokenizer.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.27.2 - Platform: macOS-13.1-x86_64-i386-64bit - Python version: 3.10.9 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @Narsil @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have been using the NER pipeline of transformers to extract named entities from text. However, I have noticed that in some cases, the pipeline adds unnecessary spaces to the extracted entities, which can cause issues downstream. For example, when I input the message "Pay 04-00-04", the pipeline extracts the following entity: ```python tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR) model = AutoModelForTokenClassification.from_pretrained(MODEL_DIR) pipe = pipeline( "token-classification", model=model, tokenizer=tokenizer, accelerator="bettertransformer", aggregation_strategy="first", ) pipe("Pay 04-00-04") { "entity":"CODE", "word":"04 - 00 - 04", "start":4, "end":12 } ``` As you can see, the entity includes spaces between the hyphens, which is not correct. This can cause problems when I want to use the extracted entity in further processing, such as database lookups or machine learning models. I have tested the pipeline on different messages and have found that it consistently adds spaces to some entities. This issue seems to be related to the tokenizer used by the pipeline, which splits the text into tokens before feeding it to the NER model. Thank you for your attention to this matter. ### Expected behavior I would expect to see entiities without unnecessary spaces: ```python { "entity":"CODE", "word":"04-00-04", "start":4, "end":12 } ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22337/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22336
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22336/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22336/comments
https://api.github.com/repos/huggingface/transformers/issues/22336/events
https://github.com/huggingface/transformers/issues/22336
1,637,401,383
I_kwDOCUB6oc5hmMMn
22,336
Have you considered using flash attention to speed up?
{ "login": "macheng6", "id": 37951216, "node_id": "MDQ6VXNlcjM3OTUxMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/37951216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/macheng6", "html_url": "https://github.com/macheng6", "followers_url": "https://api.github.com/users/macheng6/followers", "following_url": "https://api.github.com/users/macheng6/following{/other_user}", "gists_url": "https://api.github.com/users/macheng6/gists{/gist_id}", "starred_url": "https://api.github.com/users/macheng6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/macheng6/subscriptions", "organizations_url": "https://api.github.com/users/macheng6/orgs", "repos_url": "https://api.github.com/users/macheng6/repos", "events_url": "https://api.github.com/users/macheng6/events{/privacy}", "received_events_url": "https://api.github.com/users/macheng6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hi @macheng6 \r\nThanks a lot for your interest in this! \r\nIndeed there is a similar integration to this called `BetterTransformer` that uses flash attention in the backend I believe. \r\nThis support most of the encoder and decoder models (if you use the `main` branch of `optimum`, please refer to this documentation page: https://huggingface.co/docs/optimum/bettertransformer/overview\r\n\r\ncc @fxmarty @michaelbenayoun ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### Feature request using flash attention to speed up ### Motivation none ### Your contribution none
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22336/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22335
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22335/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22335/comments
https://api.github.com/repos/huggingface/transformers/issues/22335/events
https://github.com/huggingface/transformers/issues/22335
1,637,385,614
I_kwDOCUB6oc5hmIWO
22,335
Typo in Greedy Search Description
{ "login": "awinml", "id": 97467100, "node_id": "U_kgDOBc863A", "avatar_url": "https://avatars.githubusercontent.com/u/97467100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awinml", "html_url": "https://github.com/awinml", "followers_url": "https://api.github.com/users/awinml/followers", "following_url": "https://api.github.com/users/awinml/following{/other_user}", "gists_url": "https://api.github.com/users/awinml/gists{/gist_id}", "starred_url": "https://api.github.com/users/awinml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awinml/subscriptions", "organizations_url": "https://api.github.com/users/awinml/orgs", "repos_url": "https://api.github.com/users/awinml/repos", "events_url": "https://api.github.com/users/awinml/events{/privacy}", "received_events_url": "https://api.github.com/users/awinml/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sure, would you like to suggest a PR?", "Yeah, I can open a PR to fix this." ]
1,679
1,679
1,679
CONTRIBUTOR
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1+cu117 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help? @sgugger ### Reproduction There is a small typographical error in the documentation for Greedy Search. https://github.com/huggingface/transformers/blob/ff20f9cf3615a8638023bc82925573cb9d0f3560/docs/source/en/generation_strategies.mdx?plain=1#L149-L152 ### Proposed Solution This could be fixed by just rewriting this to: ```python [`generate`] uses greedy search decoding by default so you don't have to pass any parameters to enable it. This means the parameters `num_beams` is set to 1 and `do_sample=False`. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22335/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22343/comments
https://api.github.com/repos/huggingface/transformers/issues/22343/events
https://github.com/huggingface/transformers/issues/22343
1,637,811,311
I_kwDOCUB6oc5hnwRv
22,343
Link for Absent Longformer Task Documentation
{ "login": "mert-kurttutan", "id": 88637659, "node_id": "MDQ6VXNlcjg4NjM3NjU5", "avatar_url": "https://avatars.githubusercontent.com/u/88637659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mert-kurttutan", "html_url": "https://github.com/mert-kurttutan", "followers_url": "https://api.github.com/users/mert-kurttutan/followers", "following_url": "https://api.github.com/users/mert-kurttutan/following{/other_user}", "gists_url": "https://api.github.com/users/mert-kurttutan/gists{/gist_id}", "starred_url": "https://api.github.com/users/mert-kurttutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mert-kurttutan/subscriptions", "organizations_url": "https://api.github.com/users/mert-kurttutan/orgs", "repos_url": "https://api.github.com/users/mert-kurttutan/repos", "events_url": "https://api.github.com/users/mert-kurttutan/events{/privacy}", "received_events_url": "https://api.github.com/users/mert-kurttutan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Transferred to transformers. As far as I can see, this is fixed in main and will work as intended in the next release. https://huggingface.co/docs/transformers/main/en/model_doc/longformer", "Oh. Ok. Thanks.\r\nIt still seems weird that resources of this page are about generic task demonstration, not particularly related to Longformer\r\nI guess this is a matter of design choice.\r\nFeel free to close this issue" ]
1,679
1,679
1,679
NONE
null
Hi, I tried to look at the example tasks for Longformer. It led to a page that does not exist and a non-ideal response page. How to reproduce: 1) Go to https://huggingface.co/docs/transformers/model_doc/longformer#documentation-resources 2) Click on any of the documentation resources, and arrive at the next page. Maybe a notification saying these documentation pages don't exist yet would be better. I guess this is more a matter of how you present absent web pages. ![error_page](https://user-images.githubusercontent.com/88637659/227191372-4ce460bd-27f4-42de-9556-7b0b660f0d5b.png) ![error_page2](https://user-images.githubusercontent.com/88637659/227191418-b528ba58-b07b-4356-8f31-a15bb289f7b8.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22343/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22334
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22334/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22334/comments
https://api.github.com/repos/huggingface/transformers/issues/22334/events
https://github.com/huggingface/transformers/pull/22334
1,637,279,262
PR_kwDOCUB6oc5MuNPR
22,334
[`bnb`] Fix bnb slow test
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Seems to be fixed on `main`, probably thanks to https://github.com/huggingface/transformers/pull/22311 " ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Fixes a failing daily CI test. https://github.com/huggingface/transformers/actions/runs/4485738346/jobs/7887641218 Which can be simplified as: ```python from transformers import AutoModelForCausalLM model_name = "bigscience/bloom-560m" memory_mapping = {0: "1GB", 1: "2GB"} model_parallel = AutoModelForCausalLM.from_pretrained( model_name, load_in_8bit=True, max_memory=memory_mapping, device_map="auto" ) # Check correct device map print(set(model_parallel.hf_device_map.values())) >>> EXPECTED={0, 1} / GOT={0} ``` I think this also fixes a bug (which is not negligible). It seems that we need to add a check `max_memory is None` otherwise `max_memory` will get overridden right after by ```python max_memory = get_balanced_memory( model, dtype=torch_dtype, low_zero=(device_map == "balanced_low_0"), **kwargs, ) ``` Hence `max_memory` seems to be ignored now on the `main` branch in some cases. cc @sgugger @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22334/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22334", "html_url": "https://github.com/huggingface/transformers/pull/22334", "diff_url": "https://github.com/huggingface/transformers/pull/22334.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22334.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22333
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22333/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22333/comments
https://api.github.com/repos/huggingface/transformers/issues/22333/events
https://github.com/huggingface/transformers/pull/22333
1,637,236,124
PR_kwDOCUB6oc5MuEEE
22,333
Mention why one needs to specify max_steps in Trainer
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
MEMBER
null
Just a minor change following https://discuss.huggingface.co/t/streaming-dataset-into-trainer-does-not-implement-len-max-steps-has-to-be-specified/32893 about why max_steps is needed for iterable datasets
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22333/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22333", "html_url": "https://github.com/huggingface/transformers/pull/22333", "diff_url": "https://github.com/huggingface/transformers/pull/22333.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22333.patch", "merged_at": 1679581611000 }
https://api.github.com/repos/huggingface/transformers/issues/22331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22331/comments
https://api.github.com/repos/huggingface/transformers/issues/22331/events
https://github.com/huggingface/transformers/issues/22331
1,636,970,282
I_kwDOCUB6oc5hki8q
22,331
whisper model's default task should be "transcribe"
{ "login": "chenht2021", "id": 1046370, "node_id": "MDQ6VXNlcjEwNDYzNzA=", "avatar_url": "https://avatars.githubusercontent.com/u/1046370?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chenht2021", "html_url": "https://github.com/chenht2021", "followers_url": "https://api.github.com/users/chenht2021/followers", "following_url": "https://api.github.com/users/chenht2021/following{/other_user}", "gists_url": "https://api.github.com/users/chenht2021/gists{/gist_id}", "starred_url": "https://api.github.com/users/chenht2021/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chenht2021/subscriptions", "organizations_url": "https://api.github.com/users/chenht2021/orgs", "repos_url": "https://api.github.com/users/chenht2021/repos", "events_url": "https://api.github.com/users/chenht2021/events{/privacy}", "received_events_url": "https://api.github.com/users/chenht2021/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker and @sanchit-gandhi 🙏 ", "Hey! As you can see [here](https://github.com/ArthurZucker/transformers/blob/d2854e753bcfd62dce2f968d6088232d0fc41f8c/src/transformers/models/whisper/modeling_whisper.py#L1586) the default (if the generation_config does not have a `task` set) is still `transcribe`. What changed is the `configuration.json`; see this [commit](https://huggingface.co/openai/whisper-large-v2/commit/e823955b7861a1d66fef509b8601ada6d7762c03) where the default went from `transcribe` (50358) to `translate` (50359 in the forced_decoder_ids). The update in transformers just makes sure to properly use this, while the previous version did not take it into account. ", "This is more a fix than a breaking change IMO", "Thank you for your explanation." ]
1,679
1,680
1,680
NONE
null
### System Info - `transformers` version: 4.27.2 - Platform: Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sanchit-gandhi @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction In transformers v4.26.1, the following script outputs the right language (some Chinese text), i.e. the correct task, "transcribe". However, in version 4.27.2, it outputs translated English text, i.e. the other task, "translate". Reproduction script: https://gist.github.com/chenht2010/174f2480641b6780cbccd588431176b8 ### Expected behavior Do ASR and output Chinese
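A hedged sketch of the workaround implied by the comments above: pin the task (and language) explicitly instead of relying on the checkpoint's default `forced_decoder_ids`. The silent dummy waveform is only a stand-in for the audio used in the gist:

```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")

# Placeholder audio: one second of silence at 16 kHz.
audio = np.zeros(16_000, dtype=np.float32)
input_features = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features

# Explicitly request Chinese transcription rather than the checkpoint default,
# which moved from transcribe to translate in the updated configuration.
forced_decoder_ids = processor.get_decoder_prompt_ids(language="chinese", task="transcribe")
predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```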
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22331/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22330/comments
https://api.github.com/repos/huggingface/transformers/issues/22330/events
https://github.com/huggingface/transformers/issues/22330
1,636,830,512
I_kwDOCUB6oc5hkA0w
22,330
Can't export Deformable Detr to ONNX
{ "login": "ashim-mahara", "id": 48154590, "node_id": "MDQ6VXNlcjQ4MTU0NTkw", "avatar_url": "https://avatars.githubusercontent.com/u/48154590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashim-mahara", "html_url": "https://github.com/ashim-mahara", "followers_url": "https://api.github.com/users/ashim-mahara/followers", "following_url": "https://api.github.com/users/ashim-mahara/following{/other_user}", "gists_url": "https://api.github.com/users/ashim-mahara/gists{/gist_id}", "starred_url": "https://api.github.com/users/ashim-mahara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashim-mahara/subscriptions", "organizations_url": "https://api.github.com/users/ashim-mahara/orgs", "repos_url": "https://api.github.com/users/ashim-mahara/repos", "events_url": "https://api.github.com/users/ashim-mahara/events{/privacy}", "received_events_url": "https://api.github.com/users/ashim-mahara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Detr export should be supported: https://github.com/huggingface/optimum/blob/8252f4b0c48183198f4bed54bd6e0822213ef78b/optimum/exporters/tasks.py#L344-L349\r\n\r\nCan you try `pip install -U optimum transformers` and `optimum-cli export onnx --model SenseTime/deformable-detr --task object-segmentation detr_onnx/`?", "`object-segmentation` is not available as a task. I assumed you mean `object-detection` instead and tried the command. \r\n\r\nCommand:\r\n\r\n```\r\noptimum-cli export onnx --model SenseTime/deformable-detr --task object-detection detr_onnx/\r\n```\r\n\r\nError:\r\n```\r\nKeyError: \"deformable-detr is not supported yet. Only {'speech-to-text', 'hubert', 'mobilenet-v1', 'xlm', 'blenderbot', 'camembert', 'mobilenet-v2', 'wav2vec2-conformer', 'donut-swin', 'xlm-roberta', 'marian', 'electra', 'm2m-100', 'mbart', 'perceiver', 'whisper', 'swin', 'bert', 'poolformer', 'audio-spectrogram-transformer', 'unispeech', 'gpt-neo', 'levit', 'layoutlmv3', 'segformer', 'codegen', 'deit', 'mpnet', 'vit', 'roberta', 'deberta-v2', 'mt5', 'wavlm', 'data2vec-vision', 'data2vec-text', 'flaubert', 'blenderbot-small', 'vision-encoder-decoder', 'nystromformer', 'sew-d', 'yolos', 'gpt-neox', 'detr', 'gpt2', 'layoutlm', 'mobilevit', 't5', 'splinter', 'roformer', 'bloom', 'convnext', 'resnet', 'convbert', 'mobilebert', 'distilbert', 'squeezebert', 'unispeech-sat', 'gptj', 'clip', 'wav2vec2', 'groupvit', 'sew', 'deberta', 'beit', 'pegasus', 'longt5', 'ibert', 'albert', 'bart', 'data2vec-audio'} are supported. If you want to support deformable-detr please propose a PR or open up an issue.\"\r\n\r\n```", "Thank you @ashim-mahara , apologies indeed this is not supported currently - was confused by deformable_detr / detr.\r\n\r\nWould you like to submit a PR to add the support in the export?\r\n\r\nThis would entail (among others):\r\n* Adding a relevant config in https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/model_configs.py (with defined inputs/outputs and inputs generators)\r\n* Adding `deformable_detr` in tasks.py: https://github.com/huggingface/optimum/blob/4bbcc1b1d077e9258649f39b752370ff70163c00/optimum/exporters/tasks.py#L388", "Okay I'll try and status update here in ~3 days.", "@fxmarty I Still had the same error when I added the configs and checked if it will then import the model with:\r\n\r\n`ORTModel.from_pretrained(\"../savedModels/deformable-detr/\", from_transformers= True)`.\r\n\r\nError:\r\n\r\n```\r\n 581 _C._jit_pass_inline_fork_wait(graph)\r\n 582 _C._jit_pass_lint(graph)\r\n--> 583 _C._jit_pass_onnx_autograd_function_process(graph)\r\n 584 _C._jit_pass_lower_all_tuples(graph)\r\n 586 # we now record some ops like ones/zeros\r\n 587 # into a trace where we previously recorded constants.\r\n 588 # use constant prop to maintain our current level of onnx support\r\n 589 # without implementing symbolics for all of them\r\n\r\nRuntimeError: required keyword attribute 'Subgraph' is undefined\r\n```\r\n\r\nI am not an expert on this but I think the path tracing is failing. So probably will need the model author to give it a look.", "@ashim-mahara Could you open a PR in optimum so that I can have a look?", "@fxmarty here is the PR: https://github.com/huggingface/optimum/pull/931", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@fxmarty is there any way to make the pretrained deformable-detr models compatible with the new code? I tried exporting `SenseTime/deformable-detr` after changing the `disable_custom_kernels` to `True` but it still throws an error. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,685
1,685
NONE
null
### System Info - `transformers` version: 4.27.2 - Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Tried both, doesn't work - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code to reproduce: ``` from transformers import DeformableDetrForObjectDetection import torch model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr") example = torch.Tensor(1, 3, 600, 600) torch.onnx.export( model, (example, None), f="./test-ddetr.onnx", input_names=['pixel_values'], output_names=['logits', 'pred_boxes'], dynamic_axes={"pixel_values": {0: "batch_size", 1: "image_channel", 2: "image_height", 3: "image_width"}}, do_constant_folding=True, opset_version=16 ) ``` Error: ``` File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/torch/onnx/utils.py:581, in _optimize_graph(graph, operator_export_type, _disable_torch_constant_prop, fixed_batch_size, params_dict, dynamic_axes, input_names, module) 579 _C._jit_pass_inline_fork_wait(graph) 580 _C._jit_pass_lint(graph) --> 581 _C._jit_pass_onnx_autograd_function_process(graph) 582 _C._jit_pass_lower_all_tuples(graph) 584 # we now record some ops like ones/zeros 585 # into a trace where we previously recorded constants. 586 # use constant prop to maintain our current level of onnx support 587 # without implementing symbolics for all of them RuntimeError: required keyword attribute 'Subgraph' is undefined ``` ### Expected behavior Should export an onnx model. I can export the Detr model but not Deformable Detr. I have tried it on PyTorch 2.0 too. I don't know if I should post this in this issue or another one but Deformable Detr is not supported on optimum.ORTModelForObjectDetection. Also, I tried to create an OnnxConfig (copied from detr source) and export it using `transformers.onnx.export` but that resulted in the above error too.
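For completeness, a sketch of the direction one of the comments in the thread above points at: disabling the custom deformable-attention kernel so tracing does not hit the custom autograd Function. Note the reporter states this still failed for them, so treat it purely as an illustration; passing `disable_custom_kernels` through `from_pretrained` is an assumption about where the flag is read:

```python
import torch
from transformers import DeformableDetrForObjectDetection

# Fall back to the pure-PyTorch attention implementation before tracing.
model = DeformableDetrForObjectDetection.from_pretrained(
    "SenseTime/deformable-detr", disable_custom_kernels=True
)
model.eval()

torch.onnx.export(
    model,
    (torch.randn(1, 3, 600, 600),),
    f="deformable-detr.onnx",
    input_names=["pixel_values"],
    output_names=["logits", "pred_boxes"],
    opset_version=16,
)
```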
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22330/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22329/comments
https://api.github.com/repos/huggingface/transformers/issues/22329/events
https://github.com/huggingface/transformers/pull/22329
1,636,813,751
PR_kwDOCUB6oc5MspoK
22,329
Enable training Llama with model or pipeline parallelism
{ "login": "kooshi", "id": 1934337, "node_id": "MDQ6VXNlcjE5MzQzMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1934337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kooshi", "html_url": "https://github.com/kooshi", "followers_url": "https://api.github.com/users/kooshi/followers", "following_url": "https://api.github.com/users/kooshi/following{/other_user}", "gists_url": "https://api.github.com/users/kooshi/gists{/gist_id}", "starred_url": "https://api.github.com/users/kooshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kooshi/subscriptions", "organizations_url": "https://api.github.com/users/kooshi/orgs", "repos_url": "https://api.github.com/users/kooshi/repos", "events_url": "https://api.github.com/users/kooshi/events{/privacy}", "received_events_url": "https://api.github.com/users/kooshi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Great jobs!", "Could you share a snippet of code that is failing prior to this PR?", "Certainly, here's a relatively minimal example:\r\n```python\r\nimport transformers, datasets\r\nfrom peft import (\r\n LoraConfig,\r\n get_peft_model,\r\n)\r\n\r\nCHECKPOINT = \"decapoda-research/llama-7b-hf\"\r\nmodel = transformers.LlamaForCausalLM.from_pretrained(\r\n CHECKPOINT,\r\n device_map=\"auto\",\r\n max_memory={0:\"15GB\", 1:\"15GB\"}\r\n)\r\ntokenizer = transformers.LlamaTokenizer.from_pretrained(CHECKPOINT, add_eos_token=True)\r\ntokenizer.pad_token_id = 0\r\n\r\nconfig = LoraConfig(\r\n r=8,\r\n lora_alpha=16,\r\n target_modules=[\"q_proj\",\"v_proj\"],\r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n)\r\nmodel = get_peft_model(model, config)\r\n\r\ndata = datasets.load_dataset(\"laion/OIG\", data_files=\"unified_chip2.jsonl\", split=\"train\")\r\n\r\ndef tokenize(examples):\r\n return tokenizer(\r\n examples[\"text\"],\r\n truncation=True,\r\n max_length=256,\r\n padding=\"max_length\",\r\n )\r\n\r\ndata = data.map(tokenize)\r\n\r\n#Tell Trainer not to attempt DataParallel\r\nmodel.is_parallelizable = True\r\nmodel.model_parallel = True\r\ntrainer = transformers.Trainer(\r\n model=model,\r\n train_dataset=data,\r\n args=transformers.TrainingArguments(\r\n per_device_train_batch_size=1,\r\n learning_rate=3e-4,\r\n logging_steps=10,\r\n evaluation_strategy=\"no\",\r\n save_strategy=\"no\",\r\n output_dir=\"/tmp/\"\r\n ),\r\n data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),\r\n)\r\nmodel.config.use_cache = False\r\ntrainer.train()\r\n```\r\n\r\nBefore the change, this fails with\r\n`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)`\r\n\r\nI found the solution here: [Pytorch Pipeline Tutorial](https://pytorch.org/tutorials/intermediate/pipeline_tutorial.html#run-the-model)\r\n> \\# Need to move targets to the device where the output of the pipeline resides.\r\n\r\nAnd after the change, the code example above runs as expected.\r\n" ]
1,679
1,680
1,679
CONTRIBUTOR
null
# What does this PR do? This PR enables model and pipeline parallelism for Llama models. The change moves the target tensor to the output device, if needed, for loss calculation. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
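As an illustration of the pattern this PR applies (a toy example, not the actual Llama forward code): the targets are moved to the device of the logits before the loss is computed, so a pipeline stage on a different GPU no longer trips the device-mismatch error:

```python
import torch
import torch.nn as nn

# Toy stand-in: with pipeline parallelism the lm_head output may live on a later
# GPU than the labels produced by the dataloader.
logits = torch.randn(2, 8, 32000)
labels = torch.randint(0, 32000, (2, 8))

shift_logits = logits[..., :-1, :].contiguous().view(-1, logits.size(-1))
shift_labels = labels[..., 1:].contiguous().view(-1)
# The fix: move the targets to the device where the output of the pipeline resides.
shift_labels = shift_labels.to(shift_logits.device)
loss = nn.CrossEntropyLoss()(shift_logits, shift_labels)
```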
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22329/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22329", "html_url": "https://github.com/huggingface/transformers/pull/22329", "diff_url": "https://github.com/huggingface/transformers/pull/22329.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22329.patch", "merged_at": 1679591752000 }
https://api.github.com/repos/huggingface/transformers/issues/22328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22328/comments
https://api.github.com/repos/huggingface/transformers/issues/22328/events
https://github.com/huggingface/transformers/issues/22328
1,636,765,286
I_kwDOCUB6oc5hjw5m
22,328
PyTorch/XLA FSDP doesn't seem to work on TPU-v3-8 VM
{ "login": "vjeronymo2", "id": 37119493, "node_id": "MDQ6VXNlcjM3MTE5NDkz", "avatar_url": "https://avatars.githubusercontent.com/u/37119493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vjeronymo2", "html_url": "https://github.com/vjeronymo2", "followers_url": "https://api.github.com/users/vjeronymo2/followers", "following_url": "https://api.github.com/users/vjeronymo2/following{/other_user}", "gists_url": "https://api.github.com/users/vjeronymo2/gists{/gist_id}", "starred_url": "https://api.github.com/users/vjeronymo2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vjeronymo2/subscriptions", "organizations_url": "https://api.github.com/users/vjeronymo2/orgs", "repos_url": "https://api.github.com/users/vjeronymo2/repos", "events_url": "https://api.github.com/users/vjeronymo2/events{/privacy}", "received_events_url": "https://api.github.com/users/vjeronymo2/received_events", "type": "User", "site_admin": false }
[ { "id": 4101623725, "node_id": "LA_kwDOCUB6oc70ec-t", "url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch%20FSDP", "name": "PyTorch FSDP", "color": "B60205", "default": false, "description": "" } ]
closed
false
null
[]
[ "I still think this still needs to be addressed ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,685
1,685
NONE
null
### System Info GCP TPU-v3-8 VM Operating System: Ubuntu 20.04.4 LTS Kernel: Linux 5.13.0-1027-gcp transformers 4.28.0.dev0 (pip install git+https://github.com/huggingface/transformers.git on 03/22/2023) torch 2.0.0 torch-xla 2.0 ### Who can help? People from #21406, that is @AlexWertheim, possibly @pacman100 and @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The [glue example with Trainer for TPUs](https://github.com/huggingface/transformers/tree/main/examples/pytorch#running-on-tpus) without FSDP worked flawlessly in my TPU-v3-8 VM with xlm-roberta-base (because the model and batch fit properly within each core). Now that FSDP was integrated thanks to @AlexWertheim, I tried running facebook/xlm-roberta-xl on this example with the additional parameters. ```bash python xla_spawn.py --num_cores 8 \ run_glue.py \ --model_name_or_path facebook/xlm-roberta-xl \ --task_name mnli \ --do_train \ --max_seq_length 128 \ --per_device_train_batch_size 4 \ --learning_rate 2e-5 \ --num_train_epochs 10.0 \ --output_dir mnli_output \ --report_to all \ --fsdp 'shard_grad_op' \ --fsdp_config '../fstp_config.json' \ --debug 'tpu_metrics_debug' \ --logging_steps 100 \ --gradient_accumulation_steps 8 ``` fstp_config.json: ```json { "fsdp_min_num_params": 10000000, "xla": true, "xla_fsdp_settings": {} } ``` I also tried using `"fsdp_transformer_layer_cls_to_wrap": ["XLMRobertaXLModel","XLMRobertaXLClassificationHead"]` instead of `"fsdp_min_num_params": 10000000`. Also `full_shard` instead of `shard_grad_op` and some other variations, but they're all giving me the following error: ```bash 0%| | 1/3068000 [08:09<416756:07:35, 489.02s/it]2023-03-23 02:02:19.905715: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 
2023-03-23 02:02:22.081681: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] StackTrace: 2023-03-23 02:02:22.081762: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** Begin stack trace *** 2023-03-23 02:02:22.081770: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] tsl::CurrentStackTrace() 2023-03-23 02:02:22.081777: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::util::ReportComputationError(tsl::Status const&, absl::lts_20220623::Span<xla::XlaComputation const* const>, absl::lts_20220623::Span<xla::Shape const* const>) 2023-03-23 02:02:22.081783: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::XrtComputationClient::ExecuteComputation(xla::ComputationClient::Computation const&, absl::lts_20220623::Span<std::shared_ptr<xla::ComputationClient::Data> const>, std::string const&, xla::ComputationClient::ExecuteComputationOptions const&) 2023-03-23 02:02:22.081790: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch_xla::XlaBackendImpl::ExecuteComputation(std::shared_ptr<torch::lazy::Computation>, c10::ArrayRef<std::shared_ptr<torch::lazy::BackendData> >, torch::lazy::BackendDevice const&) const 2023-03-23 02:02:22.081809: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081818: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch::lazy::MultiWait::Complete(std::function<void ()> const&) 2023-03-23 02:02:22.081825: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081831: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081836: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081842: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] clone 2023-03-23 02:02:22.081847: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** End stack trace *** 2023-03-23 02:02:22.081854: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081862: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Status: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2023-03-23 02:02:22.081870: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2 root error(s) found. 2023-03-23 02:02:22.081878: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:22.081891: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]] 2023-03-23 02:02:22.081898: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:22.081905: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081911: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[XRTExecute_G10]] 2023-03-23 02:02:22.081920: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:22.081928: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081937: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. 
There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:22.081944: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]] 2023-03-23 02:02:22.081951: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:22.081959: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:22.081967: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 successful operations. 2023-03-23 02:02:22.081975: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 derived errors ignored. 2023-03-23 02:02:22.081983: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Recent warning and error logs: 2023-03-23 02:02:22.081989: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. /home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( Exception in device=TPU:1: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [[XRTExecute_G10]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 0 successful operations. 0 derived errors ignored. Recent warning and error logs: OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 
Traceback (most recent call last): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn fn(gindex, *args) File "/datadrive/test/run_glue.py", line 622, in _mp_fn main() File "/datadrive/test/run_glue.py", line 534, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1644, in train return inner_training_loop( File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1881, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 30, in __next__ return self.next() File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 42, in next xm.mark_step() File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/core/xla_model.py", line 949, in mark_step torch_xla._XLAC._xla_step_marker( RuntimeError: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [[XRTExecute_G10]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 0 successful operations. 0 derived errors ignored. Recent warning and error logs: OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:23.050198: W tensorflow/core/framework/op_kernel.cc:1830] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 
https://symbolize.stripped_domain/r/?trace=7f7627be9376,7f7627bee41f,0&map= *** SIGTERM received by PID 89268 (TID 89268) on cpu 51 from PID 89123; stack trace: *** PC: @ 0x7f7627be9376 (unknown) pthread_cond_wait@@GLIBC_2.3.2 @ 0x7f74d8c2aa1a 1152 (unknown) @ 0x7f7627bee420 (unknown) (unknown) @ 0x1 (unknown) (unknown) https://symbolize.stripped_domain/r/?trace=7f7627be9376,7f74d8c2aa19,7f7627bee41f,0&map=ceee8fa20ddf9c34af43f587221e91de:7f74cbd02000-7f74d8e41840 E0323 02:02:23.479201 89268 coredump_hook.cc:360] RAW: Remote crash gathering disabled for SIGTERM. E0323 02:02:24.172933 89268 process_state.cc:784] RAW: Raising signal 15 with default behavior 2023-03-23 02:02:25.056856: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] StackTrace: 2023-03-23 02:02:25.056942: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** Begin stack trace *** 2023-03-23 02:02:25.056952: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] tsl::CurrentStackTrace() 2023-03-23 02:02:25.056959: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::util::ReportComputationError(tsl::Status const&, absl::lts_20220623::Span<xla::XlaComputation const* const>, absl::lts_20220623::Span<xla::Shape const* const>) 2023-03-23 02:02:25.056967: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] xla::XrtComputationClient::ExecuteComputation(xla::ComputationClient::Computation const&, absl::lts_20220623::Span<std::shared_ptr<xla::ComputationClient::Data> const>, std::string const&, xla::ComputationClient::ExecuteComputationOptions const&) 2023-03-23 02:02:25.056976: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch_xla::XlaBackendImpl::ExecuteComputation(std::shared_ptr<torch::lazy::Computation>, c10::ArrayRef<std::shared_ptr<torch::lazy::BackendData> >, torch::lazy::BackendDevice const&) const 2023-03-23 02:02:25.056984: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.056997: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] torch::lazy::MultiWait::Complete(std::function<void ()> const&) 2023-03-23 02:02:25.057005: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057011: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057018: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057025: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] clone 2023-03-23 02:02:25.057033: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] *** End stack trace *** 2023-03-23 02:02:25.057041: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057050: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Status: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2023-03-23 02:02:25.057058: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2 root error(s) found. 2023-03-23 02:02:25.057067: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:25.057075: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]] 2023-03-23 02:02:25.057085: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 
2023-03-23 02:02:25.057094: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057102: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[XRTExecute_G12]] 2023-03-23 02:02:25.057111: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:25.057135: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057143: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:25.057151: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] [[{{node XRTExecute}}]] 2023-03-23 02:02:25.057160: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 2023-03-23 02:02:25.057168: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 2023-03-23 02:02:25.057176: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 successful operations. 2023-03-23 02:02:25.057186: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] 0 derived errors ignored. 2023-03-23 02:02:25.057194: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] Recent warning and error logs: 2023-03-23 02:02:25.057202: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 2023-03-23 02:02:25.057209: E tensorflow/compiler/xla/xla_client/xla_util.cc:90] OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. /home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning warnings.warn( Exception in device=TPU:6: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [[XRTExecute_G12]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 
This isn't available when running in Eager mode. 0 successful operations. 0 derived errors ignored. Recent warning and error logs: OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. Traceback (most recent call last): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn fn(gindex, *args) File "/datadrive/test/run_glue.py", line 622, in _mp_fn main() File "/datadrive/test/run_glue.py", line 534, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1644, in train return inner_training_loop( File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/transformers/trainer.py", line 1881, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 30, in __next__ return self.next() File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/parallel_loader.py", line 42, in next xm.mark_step() File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/core/xla_model.py", line 949, in mark_step torch_xla._XLAC._xla_step_marker( RuntimeError: RESOURCE_EXHAUSTED: From /job:localservice/replica:0/task:0: 2 root error(s) found. (0) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [[XRTExecute_G12]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. (1) RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. 0 successful operations. 0 derived errors ignored. Recent warning and error logs: OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. OP_REQUIRES failed at tpu_execute_op.cc:266 : RESOURCE_EXHAUSTED: Attempting to reserve 10.51G at the bottom of memory. That was not possible. There are 8.97G free, 0B reserved, and 8.97G reservable. 
2023-03-23 02:02:29.834867: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.834650343","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835007: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.834795697","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835038: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834893793","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835095: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834956775","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835197: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835008010","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835206: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834976683","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835408: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835235487","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835456: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834964014","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 
02:02:29.835480: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1679536949.835338354","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835540: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834899794","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835614: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.834992684","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835687: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.835345000","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC 2023-03-23 02:02:29.835752: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1679536949.835176851","description":"Error received from peer ipv4:127.0.0.1:51011","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC Traceback (most recent call last): File "xla_spawn.py", line 83, in <module> main() File "xla_spawn.py", line 79, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 397, in spawn result = torch.multiprocessing.start_processes( File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes while not context.join(): File "/home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 149, in join raise ProcessExitedException( torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with exit code 17 /home/vitor_jeronymo/miniconda3/envs/torch-xla/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' ``` ### Expected behavior From my understanding, the model was supposed to be split loaded onto the TPU cores, along with whatever `full_shard` entails, but it doesn't seem to be happening.
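For reference, a minimal sketch (not the reporter's setup) of the wrapper that the `--fsdp`/`--fsdp_config` options with `"xla": true` ask Trainer to apply on TPU. It assumes torch-xla 2.0's FSDP module, uses a toy module in place of xlm-roberta-xl, and has to run inside the per-core processes spawned by `xla_spawn.py`/`xmp.spawn`:

```python
import torch.nn as nn
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP


def _mp_fn(index):
    device = xm.xla_device()
    # Toy module standing in for xlm-roberta-xl; FSDP shards its parameters,
    # gradients and optimizer state across the 8 TPU-v3 cores.
    model = FSDP(nn.Linear(1024, 1024).to(device))
    xm.master_print(model)


if __name__ == "__main__":
    xmp.spawn(_mp_fn, nprocs=8)
```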
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22328/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22327/comments
https://api.github.com/repos/huggingface/transformers/issues/22327/events
https://github.com/huggingface/transformers/pull/22327
1,636,714,827
PR_kwDOCUB6oc5MsVCB
22,327
Added type hints to TFDeiTModel
{ "login": "Batese2001", "id": 69521504, "node_id": "MDQ6VXNlcjY5NTIxNTA0", "avatar_url": "https://avatars.githubusercontent.com/u/69521504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Batese2001", "html_url": "https://github.com/Batese2001", "followers_url": "https://api.github.com/users/Batese2001/followers", "following_url": "https://api.github.com/users/Batese2001/following{/other_user}", "gists_url": "https://api.github.com/users/Batese2001/gists{/gist_id}", "starred_url": "https://api.github.com/users/Batese2001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Batese2001/subscriptions", "organizations_url": "https://api.github.com/users/Batese2001/orgs", "repos_url": "https://api.github.com/users/Batese2001/repos", "events_url": "https://api.github.com/users/Batese2001/events{/privacy}", "received_events_url": "https://api.github.com/users/Batese2001/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? This pull request adds type hints for modeling_tf_deit.py as outlined in Issue #16059 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Rocketknight1
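Illustrative only (not the exact TFDeiTModel code): the general shape of the type hints such a PR adds to a TF model's `call()` signature; the output-class name in the return annotation refers to transformers' TF output dataclass and is left as a forward reference here:

```python
from typing import Optional, Tuple, Union

import tensorflow as tf


class TFDeiTModelSketch:
    def call(
        self,
        pixel_values: Optional[tf.Tensor] = None,
        head_mask: Optional[tf.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        training: bool = False,
    ) -> Union[Tuple, "TFBaseModelOutputWithPooling"]:
        # Type hints document tensor vs. bool inputs without changing behavior.
        ...
```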
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22327/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22327", "html_url": "https://github.com/huggingface/transformers/pull/22327", "diff_url": "https://github.com/huggingface/transformers/pull/22327.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22327.patch", "merged_at": 1679585493000 }
https://api.github.com/repos/huggingface/transformers/issues/22326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22326/comments
https://api.github.com/repos/huggingface/transformers/issues/22326/events
https://github.com/huggingface/transformers/issues/22326
1,636,710,163
I_kwDOCUB6oc5hjjcT
22,326
torch_compile fail with multi-gpus on samples
{ "login": "frank-dong-ms", "id": 123416088, "node_id": "U_kgDOB1suGA", "avatar_url": "https://avatars.githubusercontent.com/u/123416088?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frank-dong-ms", "html_url": "https://github.com/frank-dong-ms", "followers_url": "https://api.github.com/users/frank-dong-ms/followers", "following_url": "https://api.github.com/users/frank-dong-ms/following{/other_user}", "gists_url": "https://api.github.com/users/frank-dong-ms/gists{/gist_id}", "starred_url": "https://api.github.com/users/frank-dong-ms/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frank-dong-ms/subscriptions", "organizations_url": "https://api.github.com/users/frank-dong-ms/orgs", "repos_url": "https://api.github.com/users/frank-dong-ms/repos", "events_url": "https://api.github.com/users/frank-dong-ms/events{/privacy}", "received_events_url": "https://api.github.com/users/frank-dong-ms/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't think `torch.compile` supports `DataParallel`. You should launch your script in a distributed fashion using `torchrun`.", "Makes sense; in that case I think the transformers Trainer should be refined to let torch.compile work on multi-GPU", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### System Info platform: ubuntu 20.04 Pytorch version: nightly transformers version: built from source ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. ubuntu 20.04, python 3.8, install PyTorch nightly, transformers from source 2. install the necessary dependencies 3. go to the official example: transformers/examples/pytorch/text-classification 4. Run the sample with torch_compile: python run_glue.py --model_name_or_path finiteautomata/bertweet-base-sentiment-analysis --task_name mnli --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 1 --learning_rate 2e-5 --num_train_epochs 1 --overwrite_output_dir --output_dir ./outputs/ --per_device_eval_batch_size 1 --seed 1137 --fp16 True --max_train_samples 1000 --**torch_compile** 5. Got an exception from PyTorch: Exception: Please convert all Tensors to FakeTensors first or instantiate FakeTensorMode with 'allow_non_fake_inputs'. Using a single GPU instead (export CUDA_VISIBLE_DEVICES=0) and rerunning the sample command, it runs without issue. It looks like when multiple GPUs are found, the torch model is also wrapped in nn.DataParallel: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1387 which seems to cause the issue with torch.compile. ### Expected behavior The sample runs without issue with torch.compile on multiple GPUs
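Per the suggestion in the comments above, a sketch of launching the same example with one process per GPU via torchrun, so Trainer uses DDP instead of wrapping the model in nn.DataParallel; the `--nproc_per_node` value is an assumption about the machine:

```bash
torchrun --nproc_per_node=2 run_glue.py \
  --model_name_or_path finiteautomata/bertweet-base-sentiment-analysis \
  --task_name mnli --do_train --do_eval --max_seq_length 128 \
  --per_device_train_batch_size 1 --num_train_epochs 1 \
  --overwrite_output_dir --output_dir ./outputs/ \
  --fp16 True --max_train_samples 1000 --torch_compile
```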
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22326/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22325/comments
https://api.github.com/repos/huggingface/transformers/issues/22325/events
https://github.com/huggingface/transformers/pull/22325
1,636,666,663
PR_kwDOCUB6oc5MsLC5
22,325
[gptj] support older pytorch version
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you for the quick review, Sylvain.", "Thank you @stas00 @sgugger and sorry again for the breakage.", "You haven't done anything wrong, Nick. It's just very difficult to instantly test all the different variations. We have more indepth multi-version CI running on a daily basis, so usually any missed problems get detected on the next day.\r\n\r\nAnd thank you for your contribution!" ]
1,679
1,679
1,679
CONTRIBUTOR
null
Unbreak 2 issues introduced by https://github.com/huggingface/transformers/pull/22069 . I validated that this version works even with pt-1.9, which is the new lowest version supported by transformers since https://github.com/huggingface/transformers/pull/22291 Fixes: ``` E File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_clm.py", line 412, in main E model = AutoModelForCausalLM.from_pretrained( E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 470, in from_pretrained E model_class = _get_model_class(config, cls._model_mapping) E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 360, in _get_model_class E supported_models = model_mapping[type(config)] E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 602, in __getitem__ E return self._load_attr_from_module(model_type, model_name) E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 616, in _load_attr_from_module E return getattribute_from_module(self._modules[module_name], attr) E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 561, in getattribute_from_module E if hasattr(module, attr): E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/utils/import_utils.py", line 1109, in __getattr__ E module = self._get_module(self._class_to_module[name]) E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/utils/import_utils.py", line 1121, in _get_module E raise RuntimeError( E RuntimeError: Failed to import transformers.models.gptj.modeling_gptj because of the following error (look up to see its traceback): E module 'torch' has no attribute 'fx' ``` credits for the fix: @mrwyattii and: ``` E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/gptj/modeling_gptj.py", line 61, in create_sinusoidal_positions E return torch.concat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1) E AttributeError: module 'torch' has no attribute 'concat' ``` credits for the fix: @njhill
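For context, the second fix presumably replaces `torch.concat` (an alias that only exists in newer PyTorch releases) with `torch.cat`, which behaves identically and is available on the older versions transformers still supports. A minimal sketch of the equivalent call:

```python
import torch

# torch.concat is only a recent alias; torch.cat does the same thing and works
# back to PyTorch 1.9, the new minimum version supported by transformers.
sinusoid_inp = torch.randn(4, 8)
positions = torch.cat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1)
print(positions.shape)  # torch.Size([4, 16])
```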
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22325/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22325", "html_url": "https://github.com/huggingface/transformers/pull/22325", "diff_url": "https://github.com/huggingface/transformers/pull/22325.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22325.patch", "merged_at": 1679535304000 }
https://api.github.com/repos/huggingface/transformers/issues/22324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22324/comments
https://api.github.com/repos/huggingface/transformers/issues/22324/events
https://github.com/huggingface/transformers/issues/22324
1,636,438,457
I_kwDOCUB6oc5hihG5
22,324
GPT2ForSequenceClassification logits unmatched size
{ "login": "OshriAvnery", "id": 16324226, "node_id": "MDQ6VXNlcjE2MzI0MjI2", "avatar_url": "https://avatars.githubusercontent.com/u/16324226?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OshriAvnery", "html_url": "https://github.com/OshriAvnery", "followers_url": "https://api.github.com/users/OshriAvnery/followers", "following_url": "https://api.github.com/users/OshriAvnery/following{/other_user}", "gists_url": "https://api.github.com/users/OshriAvnery/gists{/gist_id}", "starred_url": "https://api.github.com/users/OshriAvnery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OshriAvnery/subscriptions", "organizations_url": "https://api.github.com/users/OshriAvnery/orgs", "repos_url": "https://api.github.com/users/OshriAvnery/repos", "events_url": "https://api.github.com/users/OshriAvnery/events{/privacy}", "received_events_url": "https://api.github.com/users/OshriAvnery/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! \r\nI am not really sure about your usage (setting the vocabulary size to the number of classes?) but as it can be found on the documentation, this is how you should be using the `GPT2ForSequenceClassification` class : \r\n```python \r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2ForSequenceClassification\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2ForSequenceClassification.from_pretrained(\"gpt2\", num_labels = 4)\r\n\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\nwith torch.no_grad():\r\n logits = model(**inputs).logits\r\n\r\npredicted_class_id = logits.argmax().item()\r\nmodel.config.id2label[predicted_class_id]\r\n```\r\n```\r\n'LABEL_3'\r\n```\r\n(and the shape of the logit is `[1,4]` as expected.\r\nIf the model was not trained on the specific task, by default it will not have the correct shape in the last output layer, so you will see the following warning: \r\n```python \r\nSome weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nNow regarding your issue, I simply suggest that there is something wrong with the shapes of the inputs that you are providing to your model. \r\nThe following works:\r\n```\r\nconfig = GPT2Config(num_labels=4)\r\nmodel = GPT2ForSequenceClassification(config)\r\nlogits = model(**inputs).logits \r\nassert logits.shape[-1] == 4\r\n``` \r\n" ]
1,679
1,679
1,679
NONE
null
### System Info huggingface-hub-0.13.3 tokenizers-0.13.2 transformers-4.27.2 python3.9 Hi :) Using GPT2ForSequenceClassification, I have num_labels > 1, but get logits with shape (batch_size, 1) instead of (batch_size, config.num_labels) as written in the docs. I verified that those values are correct: config.num_classes, model.num_labels, and model.score (a Linear with the correct out_features). I also debugged and the shape of _logits_ inside the forward method is correct, so it seems like the problem is with the line: pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths] Maybe I didn't understand what it's supposed to do, but the result is logits in a different shape than needed or expected. I would be glad for help, correction, or clarification :) Thank you! ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python config = GPT2Config(vocab_size = num_classes, n_embd = input_size, n_layer = 12, n_head = 8, num_labels = num_classes ) model = GPT2ForSequenceClassification(config).to(device) model.config.pad_token_id = model.config.eos_token_id outputs = model(inputs_embeds=inputs).logits ``` ### Expected behavior outputs.shape == (batch_size, config.num_labels)
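For reference, a minimal shape-check sketch (sizes are arbitrary, not the reporter's setup): with a 3-D `inputs_embeds` of shape `(batch, seq_len, n_embd)` and `num_labels` set on the config, the pooled logits come out as `(batch_size, num_labels)`. Note that `vocab_size` is deliberately left at its default rather than set to the number of classes; `num_labels` alone controls the classification head.

```python
import torch
from transformers import GPT2Config, GPT2ForSequenceClassification

# Small, randomly initialized model purely for checking output shapes.
config = GPT2Config(n_embd=64, n_layer=2, n_head=2, num_labels=4)
config.pad_token_id = config.eos_token_id
model = GPT2ForSequenceClassification(config)

batch_size, seq_len = 3, 10
inputs_embeds = torch.randn(batch_size, seq_len, config.n_embd)  # must be 3-D
logits = model(inputs_embeds=inputs_embeds).logits
print(logits.shape)  # torch.Size([3, 4]) == (batch_size, num_labels)
```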
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22324/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22323/comments
https://api.github.com/repos/huggingface/transformers/issues/22323/events
https://github.com/huggingface/transformers/pull/22323
1,636,344,601
PR_kwDOCUB6oc5MrF8m
22,323
Seq2seq trainer generation config arg
{ "login": "Natooz", "id": 56734983, "node_id": "MDQ6VXNlcjU2NzM0OTgz", "avatar_url": "https://avatars.githubusercontent.com/u/56734983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Natooz", "html_url": "https://github.com/Natooz", "followers_url": "https://api.github.com/users/Natooz/followers", "following_url": "https://api.github.com/users/Natooz/following{/other_user}", "gists_url": "https://api.github.com/users/Natooz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Natooz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Natooz/subscriptions", "organizations_url": "https://api.github.com/users/Natooz/orgs", "repos_url": "https://api.github.com/users/Natooz/repos", "events_url": "https://api.github.com/users/Natooz/events{/privacy}", "received_events_url": "https://api.github.com/users/Natooz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Examples are broken here is due to `Seq2SeqTrainingArguments.generation_max_length` and `Seq2SeqTrainingArguments.generation_num_beams` being removed.\r\n\r\nFrom here what do you suggest between putting them back (and send a warning ?) or / and updating the examples ?", "Hey guys, thanks for reviewing, its my pleasure to contribute considering how useful transformers have been to me ! 😃\r\n\r\n@gante \r\n1. Noted. I have put them back.\r\n2. Sounds good, you probably know better the demand / usages. One note though, I initially called `load_generation_config` from the `evaluate` and `predict` methods for in case users specify a `generation_config` `kwarg` for these methods. Should we get rid of this (then forcing users to override `trainer.generation_config` if they need to change it) ? If not, maybe we do not need a `__init__` whose purpose would solely be to create `self._gen_config`, which would also be done in `evaluate` and `predict` anyway ?", "@Natooz I think so (forcing to override). \r\n\r\nI'd rather have a simple solution now, and make it complex in the future if there is demand for it. Maintenance is a limitation on our side, our team is relatively small :)", "That's totally understandable. You guys are already managing this ecosystem really well, and make a huge impact ! 🙌\r\nI created the `__init__` method, overriding `model.generation_config`. Indeed the code is shorter and simpler.", "Hey @gante,\r\n\r\nThanks, the last changes are done.\r\nI'll take the instructions for the rebase, I just didn't do it right", "_The documentation is not available anymore as the PR was closed or merged._", "@Natooz I think everything went well, no further touches are needed in the rebase front :) Now we only need to add back the old arguments (`self.args.generation_max_length` and `self.args.generation_num_beams`) and their logic!", "Good, evaluate and predict are back as original, it should be good now", "Suggestions applied, sorry for these typos (copy / paste ...) 😅", "@Natooz no worries. Thank you for all the contributions to this PR, they will help many LLM+trainer users! 🤗 ", "Sure, here it is" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? `Seq2SeqTrainer` can load a `GenerationConfig` by calling the `from_pretrained` method. This is done with the `generation_config_from_pretrain` argument from `Seq2SeqTrainingArguments` (or in `kwargs` of the `Seq2SeqTrainer.evaluate` and `Seq2SeqTrainer.predict` methods). At first, we thought of using a `generation_config_file` argument (#22203). I thought it would be even more versatile to consider it as a "from_pretrained" approach. Hence here `generation_config_from_pretrain` can also handle model ids and urls. ### Small suggestion As `Seq2SeqTrainer` actually brings very little additional functionality or modification, would directly including these in `Trainer` be a good idea? (one trainer to rule them all 💍) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Yes: #22203 - [x] Did you make sure to update the documentation with your changes? - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @gante
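A rough sketch of the usage this PR enables, assuming the `generation_config` field of `Seq2SeqTrainingArguments` that the review converged on (the argument name changed from the one in the description above, and the output directory below is a placeholder); per the PR, a hub model id or a local path string can also be passed instead of a `GenerationConfig` object.

```python
from transformers import GenerationConfig, Seq2SeqTrainingArguments

# Build the generation settings to be used by Seq2SeqTrainer.evaluate/predict.
gen_config = GenerationConfig(max_new_tokens=64, num_beams=4)

args = Seq2SeqTrainingArguments(
    output_dir="./outputs",        # placeholder path
    predict_with_generate=True,
    generation_config=gen_config,  # a str model id or path is also accepted
)
```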
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22323/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22323", "html_url": "https://github.com/huggingface/transformers/pull/22323", "diff_url": "https://github.com/huggingface/transformers/pull/22323.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22323.patch", "merged_at": 1679928455000 }
https://api.github.com/repos/huggingface/transformers/issues/22322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22322/comments
https://api.github.com/repos/huggingface/transformers/issues/22322/events
https://github.com/huggingface/transformers/pull/22322
1,636,304,182
PR_kwDOCUB6oc5Mq9QU
22,322
Generate: add test for left-padding support
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Current failures -- including Llama and GPTNeoX, the two I had my eyes on:\r\n![Screenshot 2023-03-22 at 18 42 57](https://user-images.githubusercontent.com/12240844/227005881-868067e5-a083-4e87-9858-020694869843.png)\r\n", "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger reduced to 10 runs, merging when CI gets green :)" ]
1,679
1,679
1,679
MEMBER
null
# What does this PR do? This PR adds a test to check whether a decoder-only model supports left padding. The test was somewhat tricky to design -- the reasoning is all commented in the code, for future reference. Let me know if you agree with the decisions made in the test! Hopefully we will be able to detect whether a model supports left-padding before merging 🤞
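A rough sketch of the property such a test can check, shown here on GPT-2 rather than the actual test code in the PR: with padding masked out and position ids shifted accordingly, the logits for the last real token should match the unpadded run.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("Hello world", return_tensors="pt").input_ids
with torch.no_grad():
    ref = model(ids).logits[0, -1]

# Left-pad with 3 eos tokens, mask them out, and shift position ids accordingly.
pad = torch.full((1, 3), tok.eos_token_id, dtype=torch.long)
padded = torch.cat([pad, ids], dim=1)
mask = torch.cat([torch.zeros((1, 3), dtype=torch.long), torch.ones_like(ids)], dim=1)
position_ids = (mask.cumsum(-1) - 1).clamp(min=0)
with torch.no_grad():
    out = model(padded, attention_mask=mask, position_ids=position_ids).logits[0, -1]

print(torch.max(torch.abs(ref - out)))  # close to 0 if left padding is supported
```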
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22322/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/22322/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22322", "html_url": "https://github.com/huggingface/transformers/pull/22322", "diff_url": "https://github.com/huggingface/transformers/pull/22322.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22322.patch", "merged_at": 1679590823000 }
https://api.github.com/repos/huggingface/transformers/issues/22321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22321/comments
https://api.github.com/repos/huggingface/transformers/issues/22321/events
https://github.com/huggingface/transformers/issues/22321
1,636,270,864
I_kwDOCUB6oc5hh4MQ
22,321
line 714 bsz, seq_len = input_shape may crash in CLIPTextTransformer
{ "login": "qilei123", "id": 21085736, "node_id": "MDQ6VXNlcjIxMDg1NzM2", "avatar_url": "https://avatars.githubusercontent.com/u/21085736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qilei123", "html_url": "https://github.com/qilei123", "followers_url": "https://api.github.com/users/qilei123/followers", "following_url": "https://api.github.com/users/qilei123/following{/other_user}", "gists_url": "https://api.github.com/users/qilei123/gists{/gist_id}", "starred_url": "https://api.github.com/users/qilei123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qilei123/subscriptions", "organizations_url": "https://api.github.com/users/qilei123/orgs", "repos_url": "https://api.github.com/users/qilei123/repos", "events_url": "https://api.github.com/users/qilei123/events{/privacy}", "received_events_url": "https://api.github.com/users/qilei123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @qilei123, thanks for raising this issue.\r\n\r\nCould you share a minimal code sample which reproduces the error and information about the environment the code was run in (run `transformers-cli env` to get this info)?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### System Info Hi there, when input_ids has more than 2 dimensions, input_shape = input_ids.size() will give more than 2 numbers. Then bsz, seq_len = input_shape will crash. I don't know if my reading is right; I just hit the issue, changed the code on my machine, and it works. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When using CLIPTextTransformer, pass an input_ids with more than 2 dimensions. ### Expected behavior Crash: too many values to unpack
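A sketch of the kind of local change the reporter describes (the helper name is invented for illustration): flattening any extra leading dimensions so that the text encoder always sees a `(bsz, seq_len)` tensor, which keeps the `bsz, seq_len = input_shape` unpack happy.

```python
import torch

def flatten_input_ids(input_ids: torch.Tensor) -> torch.Tensor:
    # Collapse all leading dimensions into the batch dimension.
    if input_ids.dim() > 2:
        input_ids = input_ids.reshape(-1, input_ids.shape[-1])
    return input_ids

input_ids = torch.randint(0, 49408, (2, 4, 77))     # e.g. (batch, num_prompts, seq_len)
bsz, seq_len = flatten_input_ids(input_ids).shape   # no longer crashes: (8, 77)
print(bsz, seq_len)
```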
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22321/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22320/comments
https://api.github.com/repos/huggingface/transformers/issues/22320/events
https://github.com/huggingface/transformers/pull/22320
1,636,226,763
PR_kwDOCUB6oc5Mqsj0
22,320
Fix PipelineTests skip conditions
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? While starting to fix some pipeline tests that were previously skipped, I realized PR #21516 (removing the usage of a metaclass in pipeline testing) **accidentally skips more tests than it should**. - Previously, each combination of model/tokenizer/processor class had its own test case (generated on the fly), so we could use `self.skipTest` to skip some cases. - After #21516, each model + task has its own test case, but inside that test, it runs against all tokenizers/processors that are available, in particular the slow / fast tokenizers. - As mentioned before, we have some slow tokenizer issues in pipeline testing (never tested before #20426), and we skip some failing cases for now. - But after #21516, when we use `self.skipTest`, we **might** **skip the next combination(s)**. This is **NOT good/expected**. This PR instead uses `logger.warning` to log the skipped cases and continues to the next combination in a test. This is not very pleasant, but as we don't want to use a metaclass, this is the only way I can think of for now. It would be better to prepare a report file that clearly indicates which test cases and/or combinations are being skipped.
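A minimal sketch of the "log and continue" pattern described above; the function and predicate names here are invented for illustration, and the real skip conditions live in the pipeline test mixins.

```python
import logging

logger = logging.getLogger(__name__)

def is_known_failing(model_class, tokenizer_class):
    # Hypothetical placeholder for the real skip conditions
    # (e.g. known slow-tokenizer issues for a given model).
    return False

def run_all_combinations(combinations, run_one):
    # A self.skipTest inside this loop would abort the remaining combinations,
    # so each known-bad combination is only logged before moving on.
    for model_class, tokenizer_class in combinations:
        if is_known_failing(model_class, tokenizer_class):
            logger.warning(
                "Skipping combination %s + %s",
                model_class.__name__,
                tokenizer_class.__name__,
            )
            continue
        run_one(model_class, tokenizer_class)
```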
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22320/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22320", "html_url": "https://github.com/huggingface/transformers/pull/22320", "diff_url": "https://github.com/huggingface/transformers/pull/22320.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22320.patch", "merged_at": 1679511745000 }
https://api.github.com/repos/huggingface/transformers/issues/22319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22319/comments
https://api.github.com/repos/huggingface/transformers/issues/22319/events
https://github.com/huggingface/transformers/pull/22319
1,636,198,984
PR_kwDOCUB6oc5MqmlX
22,319
Hardware Auto-Setup for Examples
{ "login": "dongreenberg", "id": 15992114, "node_id": "MDQ6VXNlcjE1OTkyMTE0", "avatar_url": "https://avatars.githubusercontent.com/u/15992114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dongreenberg", "html_url": "https://github.com/dongreenberg", "followers_url": "https://api.github.com/users/dongreenberg/followers", "following_url": "https://api.github.com/users/dongreenberg/following{/other_user}", "gists_url": "https://api.github.com/users/dongreenberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/dongreenberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dongreenberg/subscriptions", "organizations_url": "https://api.github.com/users/dongreenberg/orgs", "repos_url": "https://api.github.com/users/dongreenberg/repos", "events_url": "https://api.github.com/users/dongreenberg/events{/privacy}", "received_events_url": "https://api.github.com/users/dongreenberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Definitely, that makes sense, and thanks for the fast review! Updated per your suggestions.", "Thanks for iterating! There is just the issue of the tests that are not running now. It seems there is an issue with your CircleCI permissions. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?", "Oh sorry! I think I've granted it access. Do I need to trigger anything on my side?", "Probably an empty commit to re-trigger the CI.", "Ok, so it looks like we just need a quick `make style` to fix the formatting on the added examples and we should be good to go.", "Oops, thought I already ran via `make fixup`. Pushed!", "Ok, last failures are fixed on main so merging. Thanks!", "Thank you, Sylvain!!" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Discussed with @sgugger and @LysandreJik. This PR introduces auto-setup functionality for tutorials and examples (we are sending a parallel PR to accelerate, and maybe Diffusers and Spaces shortly). This allows users to run transformers code, tutorials, and scripts on self-hosted hardware (either their own instances or cloud instances), including on-demand allocation of the hardware itself on AWS, GCP, Azure, or Lambda Labs, and installation of dependencies. This introduces a level of turnkey usage and reproducibility that users typically only expect in Colab, but for any type of hardware on any cloud (we've tested on Paperspace and Coreweave as well, allocating the instance in their UI and then plugging in the IP as a static cluster). Note that Runhouse OSS is facilitating the setup (via SkyPilot) and rpc, but users don't need to create a Runhouse account or anything like that; this is strictly inside their own cloud accounts with their own credentials (or using their IP and ssh creds without a cloud account). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22319/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22319", "html_url": "https://github.com/huggingface/transformers/pull/22319", "diff_url": "https://github.com/huggingface/transformers/pull/22319.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22319.patch", "merged_at": 1679936873000 }
https://api.github.com/repos/huggingface/transformers/issues/22318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22318/comments
https://api.github.com/repos/huggingface/transformers/issues/22318/events
https://github.com/huggingface/transformers/pull/22318
1,636,178,944
PR_kwDOCUB6oc5MqiS-
22,318
Hardware Auto-Setup for Tutorials and Examples
{ "login": "dongreenberg", "id": 15992114, "node_id": "MDQ6VXNlcjE1OTkyMTE0", "avatar_url": "https://avatars.githubusercontent.com/u/15992114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dongreenberg", "html_url": "https://github.com/dongreenberg", "followers_url": "https://api.github.com/users/dongreenberg/followers", "following_url": "https://api.github.com/users/dongreenberg/following{/other_user}", "gists_url": "https://api.github.com/users/dongreenberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/dongreenberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dongreenberg/subscriptions", "organizations_url": "https://api.github.com/users/dongreenberg/orgs", "repos_url": "https://api.github.com/users/dongreenberg/repos", "events_url": "https://api.github.com/users/dongreenberg/events{/privacy}", "received_events_url": "https://api.github.com/users/dongreenberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Whoops, shouldn't be on main, I'll close and reopen on a new branch.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Discussed with @sgugger and @LysandreJik. This PR introduces auto-setup functionality for tutorials and examples (we are sending a parallel PR to accelerate, and maybe Diffusers and Spaces shortly). This allows users to run transformers code, tutorials, and scripts on self-hosted hardware (either their own instances or cloud instances), including on-demand allocation of the hardware itself on AWS, GCP, Azure, or Lambda Labs, and installation of dependencies. This introduces a level of turnkey usage and reproducibility that users typically only expect in Colab, but for any type of hardware on any cloud (we've tested on Paperspace and Coreweave as well, allocating the instance in their UI and then plugging in the IP as a static cluster). Note that Runhouse OSS is facilitating the setup (via SkyPilot) and rpc, but users don't need to create a Runhouse account or anything like that; this is strictly inside their own cloud accounts with their own credentials (or using their IP and ssh creds without a cloud account). This PR is WIP and seeking feedback. A few open questions: 1. launch_auto_hardware.mdx is structured to be like a notebook, but how do I make it have the "launch in colab" etc. buttons on the top right? (I'm happy to refactor it into an ipynb if needed) 2. launch_auto_hardware.mdx shows only inference. I think it could be valuable to make it mirror the initial PyTorch parts of [the yelp fine-tuning tutorial](https://huggingface.co/docs/transformers/training), to show preprocessing, training, and inference all on remote hardware. Does that make sense? 3. We're generally using the term "remote hardware" with "auto setup". Would it be better to say "self-hosted" or something like that (to avoid confusion with hosted solutions)? 4. Should we showcase more examples? Right now we show just a few but can add more if you think that's valuable (e.g. @sgugger mentioned showing TPUs before). We can also tailor the auto-setup scripts to different examples if desired, rather than the one-size-fits-all approach. Thank you for your input on this so far!! ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22318/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22318", "html_url": "https://github.com/huggingface/transformers/pull/22318", "diff_url": "https://github.com/huggingface/transformers/pull/22318.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22318.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22317/comments
https://api.github.com/repos/huggingface/transformers/issues/22317/events
https://github.com/huggingface/transformers/pull/22317
1,636,103,388
PR_kwDOCUB6oc5MqSH7
22,317
Add `MegatronT5ForConditionalGeneration`
{ "login": "eagle705", "id": 7252598, "node_id": "MDQ6VXNlcjcyNTI1OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/7252598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eagle705", "html_url": "https://github.com/eagle705", "followers_url": "https://api.github.com/users/eagle705/followers", "following_url": "https://api.github.com/users/eagle705/following{/other_user}", "gists_url": "https://api.github.com/users/eagle705/gists{/gist_id}", "starred_url": "https://api.github.com/users/eagle705/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eagle705/subscriptions", "organizations_url": "https://api.github.com/users/eagle705/orgs", "repos_url": "https://api.github.com/users/eagle705/repos", "events_url": "https://api.github.com/users/eagle705/events{/privacy}", "received_events_url": "https://api.github.com/users/eagle705/received_events", "type": "User", "site_admin": false }
[ { "id": 2669577093, "node_id": "MDU6TGFiZWwyNjY5NTc3MDkz", "url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition", "name": "PR for Model Addition", "color": "5319e7", "default": false, "description": "" } ]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22317). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey! Thanks for contributing! In the current state I cannot really see the differences between this model and `T5`. Adding the `# Copied from` statements would help a lot. However if the model is very similar (and you still want to persue the PR!) I would recommend adding the model to the hub following [this](https://huggingface.co/docs/transformers/custom_models) tutorial! It will be simpler for you and you won't have to deal with all the red CIs! ", "@ArthurZucker \r\n\r\nYou are correct that the basic structure of the model is based on the existing T5. However, there are differences in the implementation between huggingface and MegatronLM (or NeMo) regarding the reshaping of tensors for attention computation, as well as various differences in normalization methods. Due to these differences, I decided to submit the pull request. Simply mapping the model weights wouldn't result in proper functioning, so a custom class was required. I will refer to the guide you provided and give it a try. Thank you :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey! Could you refer me to the link of the updated model if you already push it to the hub? 😉 This is in order to keep track of models on the hub!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,689
1,689
NONE
null
# What does this PR do? This PR adds the `MegatronT5ForConditionalGeneration` class, which among standard applications can be used for pretrained T5 model from NVIDIA NeMo MegatronT5 :) I also add converting script from NeMo MegatronT5 to Huggingface MegatronT5ForConditionalGeneration model <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #22315 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc @ArthurZucker and @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22317/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22317", "html_url": "https://github.com/huggingface/transformers/pull/22317", "diff_url": "https://github.com/huggingface/transformers/pull/22317.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22317.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22316/comments
https://api.github.com/repos/huggingface/transformers/issues/22316/events
https://github.com/huggingface/transformers/pull/22316
1,635,999,677
PR_kwDOCUB6oc5Mp8YW
22,316
docs: Resolve incorrect type typo in trainer methods
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
MEMBER
null
# What does this PR do? Replace incorrect `Lst[str]` with `List[str]` in docstrings in various locations in the `Trainer`. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? Documentation: @sgugger * Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22316/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22316", "html_url": "https://github.com/huggingface/transformers/pull/22316", "diff_url": "https://github.com/huggingface/transformers/pull/22316.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22316.patch", "merged_at": 1679500629000 }
https://api.github.com/repos/huggingface/transformers/issues/22315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22315/comments
https://api.github.com/repos/huggingface/transformers/issues/22315/events
https://github.com/huggingface/transformers/issues/22315
1,635,962,835
I_kwDOCUB6oc5hgs_T
22,315
Add MegatronT5
{ "login": "eagle705", "id": 7252598, "node_id": "MDQ6VXNlcjcyNTI1OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/7252598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eagle705", "html_url": "https://github.com/eagle705", "followers_url": "https://api.github.com/users/eagle705/followers", "following_url": "https://api.github.com/users/eagle705/following{/other_user}", "gists_url": "https://api.github.com/users/eagle705/gists{/gist_id}", "starred_url": "https://api.github.com/users/eagle705/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eagle705/subscriptions", "organizations_url": "https://api.github.com/users/eagle705/orgs", "repos_url": "https://api.github.com/users/eagle705/repos", "events_url": "https://api.github.com/users/eagle705/events{/privacy}", "received_events_url": "https://api.github.com/users/eagle705/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,679
1,679
null
NONE
null
### Model description In NeMo Megatron, the T5 model is available, but there is currently no MegatronT5 class in Hugging Face Transformers, analogous to MegatronBERT or MegatronGPT2. I have recently finished the porting work and have tested the model internally. I would like to share this model with the community. ### Open source status - [X] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation - [NeMo Megatron models](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html) - [NeMo](https://github.com/NVIDIA/NeMo) - [Megatron-LM T5 model](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/t5_model.py)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22315/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22315/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22314/comments
https://api.github.com/repos/huggingface/transformers/issues/22314/events
https://github.com/huggingface/transformers/pull/22314
1,635,949,187
PR_kwDOCUB6oc5Mpxbf
22,314
Beef up Llama tests
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
MEMBER
null
# What does this PR do? I was starting to work on left padding support for Llama, and I noticed it was missing the usual test mixins. This PR rectifies that before I introduce further changes on Llama.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22314/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22314", "html_url": "https://github.com/huggingface/transformers/pull/22314", "diff_url": "https://github.com/huggingface/transformers/pull/22314.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22314.patch", "merged_at": 1679498449000 }
https://api.github.com/repos/huggingface/transformers/issues/22313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22313/comments
https://api.github.com/repos/huggingface/transformers/issues/22313/events
https://github.com/huggingface/transformers/pull/22313
1,635,907,907
PR_kwDOCUB6oc5Mpoyf
22,313
🚨🚨🚨 `[NLLB Tokenizer]` Fix the prefix tokens 🚨🚨🚨
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you for the implementation, @ArthurZucker!\r\n\r\nCould you please put in the PR description the gist of the breaking change with a code sample, and how to revert to the previous behavior if users would like that?\r\n\r\nThank you", "Indeed thanks for the tip on how to enable that swiftly! ", "One test is failing with NLLB (running slow ones locally, `test_encode_decode_with_spaces`), fixing this before merging.\r\nEdit: Fast and slow have a different behaviour! `space_between_special_tokens` does not exist in rust (yet, PR coming soon)", "Cool, I like the flag :)\r\n\r\nCan the doc be shown more prominently? Maybe to replace the disclaimer mentioning to tag me? A disclaimer mentioning that we changed it to what it is now, with the code snippet?\r\n\r\n\r\n\r\n![image](https://user-images.githubusercontent.com/30755778/229569649-9c87f71d-ba2c-4db4-8531-cf441373ff03.png)", "Thanks both for proof reading! 👍🏻 " ]
1,679
1,680
1,680
COLLABORATOR
null
# What does this PR do? The NLLB tokenizer's suffix and prefix tokens were wrong w.r.t. the paper. This breaking change fixes the tokenizer. It could be made non-breaking if we add these to the configuration file, maybe? But it is a required change. I still have to update the tests, but it should be good. The big problem was the `prefix` and `suffix` tokens. The previous version adds `[self.eos_token_id, self.cur_lang_code]` at the end of the token sequence for both target and source tokenization. This is wrong, as the `NLLB` paper mentions (page 48, 6.1.1. Model Architecture): > Note that we prefix the source sequence with the source language, as opposed to the target language as previously done in several works (Arivazhagan et al., 2019; Johnson et al., 2017). This is primarily because we prioritize optimizing zero-shot performance of our model on any pair of 200 languages at a minor cost to supervised performance. Previous behaviour: ```python >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") >>> tokenizer("How was your day?").input_ids [13374, 1398, 4260, 4039, 248130, 2, 256047] >>> # 2: '</s>' >>> # 256047 : 'eng_Latn' ``` New behaviour: ```python >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") >>> tokenizer("How was your day?").input_ids [256047, 13374, 1398, 4260, 4039, 248130, 2] ``` Enabling the old behaviour: ```python >>> from transformers import NllbTokenizer >>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour = True) ``` This parameter should be part of the `tokenizer_config.json`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22313/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22313", "html_url": "https://github.com/huggingface/transformers/pull/22313", "diff_url": "https://github.com/huggingface/transformers/pull/22313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22313.patch", "merged_at": 1680612787000 }
https://api.github.com/repos/huggingface/transformers/issues/22312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22312/comments
https://api.github.com/repos/huggingface/transformers/issues/22312/events
https://github.com/huggingface/transformers/issues/22312
1,635,734,739
I_kwDOCUB6oc5hf1TT
22,312
LlamaTokenizer has no `pad` token, leading to failure during batch-tokenization
{ "login": "adivekar-utexas", "id": 71379271, "node_id": "MDQ6VXNlcjcxMzc5Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/71379271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adivekar-utexas", "html_url": "https://github.com/adivekar-utexas", "followers_url": "https://api.github.com/users/adivekar-utexas/followers", "following_url": "https://api.github.com/users/adivekar-utexas/following{/other_user}", "gists_url": "https://api.github.com/users/adivekar-utexas/gists{/gist_id}", "starred_url": "https://api.github.com/users/adivekar-utexas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adivekar-utexas/subscriptions", "organizations_url": "https://api.github.com/users/adivekar-utexas/orgs", "repos_url": "https://api.github.com/users/adivekar-utexas/repos", "events_url": "https://api.github.com/users/adivekar-utexas/events{/privacy}", "received_events_url": "https://api.github.com/users/adivekar-utexas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "- Possible root cause:\r\n\r\nI don't see padding token set anywhere: https://github.com/huggingface/transformers/blob/c07a02a4b7892edfee22cbe57d3cdd9e10ae7a4d/src/transformers/models/llama/convert_llama_weights_to_hf.py#L241\r\n\r\nA bunch of LLaMa libraries seem to be setting the IDs from the sentencepiece `tokenizer.model`: https://github.com/markasoftware/llama-cpu/blob/main/llama/tokenizer.py#L24\r\n\r\nFor me, running the following yields:\r\n\r\n```\r\n>>> print(sp_model.bos_id(), sp_model.eos_id(), sp_model.pad_id())\r\n1 2 -1\r\n```\r\n\r\n...which makes me believe the original tokenizer does not have a pad token? This is confirmed by the following:\r\n\r\n```\r\nsp_model.id_to_piece(1) ## '<s>', which is the bos token for LLaMa\r\nsp_model.id_to_piece(2) ## '</s>', which is the eos token for LLaMa\r\nsp_model.id_to_piece(-1) ## Throws: IndexError: piece id is out of range.\r\n```\r\n\r\nAdditional confirmation:\r\n\r\n```\r\nvocab: Dict[str, int] = {sp_model.id_to_piece(id): id for id in range(sp_model.get_piece_size())}\r\nprint(vocab['<s>']) ## 1\r\nprint(vocab['</s>']) ## 2\r\nprint(vocab['<unk>']) ## 0\r\nprint(vocab['<pad>']) ## KeyError: '<pad>'\r\n```\r\n\r\n", "Hey, indeed the original sentencepiece model does not have a padding token. You can probably pad using the `eos_token` like it is done for `GPT2`, need to check what is mentioned on the paper, but the llama code does not use the`pad_token` it seems. ", "Yes, I don't think the original model has a padding token. The same code with GPT-2 will fail, you need to add the pad token yourself as indicated by the error message.", "So attempting to set the PAD token as the EOS token (i.e. `''`) fails with the same error message:\r\n\r\n```\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM\r\ntokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")\r\n\r\nprint(repr(tokenizer.pad_token)) ## None\r\nprint(repr(tokenizer.bos_token)) ## ''\r\nprint(repr(tokenizer.eos_token)) ## ''\r\nprint()\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\nprint(repr(tokenizer.pad_token)) ## ''\r\nprint(repr(tokenizer.bos_token)) ## ''\r\nprint(repr(tokenizer.eos_token)) ## ''\r\n\r\n\r\nbatch = tokenizer(\r\n [\r\n \"Singer Billy Joel yesterday \",\r\n \"The primary use of LLaMA is research on large language \"\r\n ],\r\n return_tensors=\"pt\",\r\n padding=True\r\n)\r\n```\r\n\r\nError: \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[61], line 1\r\n----> 1 batch = tokenizer(\r\n 2 [\r\n 3 \"Singer Billy Joel yesterday \",\r\n 4 \"The primary use of LLaMA is research on large language \"\r\n 5 ],\r\n 6 return_tensors=\"pt\",\r\n 7 padding=True\r\n 8 )\r\n\r\nFile /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2531, in PreTrainedTokenizerBase.__call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)\r\n 2529 if not self._in_target_context_manager:\r\n 2530 self._switch_to_input_mode()\r\n-> 2531 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)\r\n 2532 if text_target is not None:\r\n 2533 self._switch_to_target_mode()\r\n\r\nFile 
/home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2617, in PreTrainedTokenizerBase._call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)\r\n 2612 raise ValueError(\r\n 2613 f\"batch length of `text`: {len(text)} does not match batch length of `text_pair`:\"\r\n 2614 f\" {len(text_pair)}.\"\r\n 2615 )\r\n 2616 batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text\r\n-> 2617 return self.batch_encode_plus(\r\n 2618 batch_text_or_text_pairs=batch_text_or_text_pairs,\r\n 2619 add_special_tokens=add_special_tokens,\r\n 2620 padding=padding,\r\n 2621 truncation=truncation,\r\n 2622 max_length=max_length,\r\n 2623 stride=stride,\r\n 2624 is_split_into_words=is_split_into_words,\r\n 2625 pad_to_multiple_of=pad_to_multiple_of,\r\n 2626 return_tensors=return_tensors,\r\n 2627 return_token_type_ids=return_token_type_ids,\r\n 2628 return_attention_mask=return_attention_mask,\r\n 2629 return_overflowing_tokens=return_overflowing_tokens,\r\n 2630 return_special_tokens_mask=return_special_tokens_mask,\r\n 2631 return_offsets_mapping=return_offsets_mapping,\r\n 2632 return_length=return_length,\r\n 2633 verbose=verbose,\r\n 2634 **kwargs,\r\n 2635 )\r\n 2636 else:\r\n 2637 return self.encode_plus(\r\n 2638 text=text,\r\n 2639 text_pair=text_pair,\r\n (...)\r\n 2655 **kwargs,\r\n 2656 )\r\n\r\nFile /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2799, in PreTrainedTokenizerBase.batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)\r\n 2782 \"\"\"\r\n 2783 Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.\r\n 2784 \r\n (...)\r\n 2795 details in `encode_plus`).\r\n 2796 \"\"\"\r\n 2798 # Backward compatibility for 'truncation_strategy', 'pad_to_max_length'\r\n-> 2799 padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(\r\n 2800 padding=padding,\r\n 2801 truncation=truncation,\r\n 2802 max_length=max_length,\r\n 2803 pad_to_multiple_of=pad_to_multiple_of,\r\n 2804 verbose=verbose,\r\n 2805 **kwargs,\r\n 2806 )\r\n 2808 return self._batch_encode_plus(\r\n 2809 batch_text_or_text_pairs=batch_text_or_text_pairs,\r\n 2810 add_special_tokens=add_special_tokens,\r\n (...)\r\n 2825 **kwargs,\r\n 2826 )\r\n\r\nFile /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2436, in PreTrainedTokenizerBase._get_padding_truncation_strategies(self, padding, truncation, max_length, pad_to_multiple_of, verbose, **kwargs)\r\n 2434 # Test if we have a padding token\r\n 2435 if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):\r\n-> 2436 raise ValueError(\r\n 2437 \"Asking to pad but the tokenizer does not have a padding token. 
\"\r\n 2438 \"Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` \"\r\n 2439 \"or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\"\r\n 2440 )\r\n 2442 # Check that we will truncate to a multiple of pad_to_multiple_of if both are provided\r\n 2443 if (\r\n 2444 truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE\r\n 2445 and padding_strategy != PaddingStrategy.DO_NOT_PAD\r\n (...)\r\n 2448 and (max_length % pad_to_multiple_of != 0)\r\n 2449 ):\r\n\r\nValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\r\n```", "Can you share a link on how GPT2 does it?", "I can confirm that the following works:\r\n\r\n```\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM\r\ntokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")\r\n\r\nprint(repr(tokenizer.pad_token)) ## None\r\nprint(repr(tokenizer.bos_token)) ## ''\r\nprint(repr(tokenizer.eos_token)) ## ''\r\nprint()\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n\r\nprint(repr(tokenizer.pad_token)) ## ''\r\nprint(repr(tokenizer.bos_token)) ## ''\r\nprint(repr(tokenizer.eos_token)) ## ''\r\n\r\nbatch = tokenizer(\r\n [\r\n \"Singer Billy Joel yesterday \",\r\n \"The primary use of LLaMA is research on large language \"\r\n ],\r\n return_tensors=\"pt\",\r\n padding=True\r\n)\r\n```", "Glad that it's now working. \r\n\r\nAs an explanation: the error arising when using `tokenizer.pad_token = tokenizer.eos_token` is because `self.pad_token` is set as an empty string which evaluates as `False` in [this check](https://github.com/huggingface/transformers/blob/5fd4e3c87c685fba2dd9615be62131748a8b5ee3/src/transformers/tokenization_utils_base.py#L2435). This seems like an expected exception as it's not possible to pad with an empty string. \r\n\r\nIn the working example, I think second print of pad token should show: \r\n`print(repr(tokenizer.pad_token)) ## '[PAD]'`\r\n", "Note that the EOS token returned by `tokenizer.eos_token` is wrong in any case (this is a known issue and @ArthurZucker should fix this). The EOS token is not `\"\"` but `\"<s>\"`. Once this issue is fixed, doing `tokenizer.pad_token = tokenizer.eos_token` will be possible.", "There is also a weird issue of increase in vocab size depending on how we add the pad token. \r\n\r\nMethod 1:\r\n\r\n`from transformers import LlamaTokenizer, LlamaForCausalLM`\r\n`tokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")`\r\n`tokenizer.pad_token='[PAD]'`\r\n`print(f\"pad_token_id={tokenizer.pad_token_id}\") #prints 0`\r\n`print(f\"vocab length={len(tokenizer.get_vocab())}\") #prints 32000`\r\n\r\n\r\nMethod 2\r\n`from transformers import LlamaTokenizer, LlamaForCausalLM`\r\n`tokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")`\r\n`num_spl_tokens_added=tokenizer.add_special_tokens({'pad_token': '[PAD]'}) #returns 1 `\r\n`print(f\"pad_token_id={tokenizer.pad_token_id}\") #prints 32000`\r\n`print(f\"vocab length={len(tokenizer.get_vocab())}\") #prints 32001`\r\n\r\nWhy is this discrepancy between `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and `tokenizer.pad_token='[PAD]'` ? 
\r\n\r\nDownstream issues:\r\nThe Stanford Alpaca model independently trained on decapoda-research/llama-7b-hf at \"chavinlo/alpaca-native\" uses `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and hence the model's vocab size is set to 32001. ", "I think https://github.com/huggingface/transformers/pull/22402 should fix this?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I am sorry if it is a wrong question, but don't we need padding token to train model with bs > 1, or are they concatenating sentences together, separated by eos token while training?", "@basujindal \r\n\r\nMy general understanding for bs > 1, we need to pad during finetuning. However, in pretraining the input text is set to max-length -- you can think of a sliding window over a large text corpora.", "Exactly! This was fixed in #22402 so keeping it closed!", "> There is also a weird issue of increase in vocab size depending on how we add the pad token.\r\n> \r\n> Method 1:\r\n> \r\n> `from transformers import LlamaTokenizer, LlamaForCausalLM` `tokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")` `tokenizer.pad_token='[PAD]'` `print(f\"pad_token_id={tokenizer.pad_token_id}\") #prints 0` `print(f\"vocab length={len(tokenizer.get_vocab())}\") #prints 32000`\r\n> \r\n> Method 2 `from transformers import LlamaTokenizer, LlamaForCausalLM` `tokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")` `num_spl_tokens_added=tokenizer.add_special_tokens({'pad_token': '[PAD]'}) #returns 1 ` `print(f\"pad_token_id={tokenizer.pad_token_id}\") #prints 32000` `print(f\"vocab length={len(tokenizer.get_vocab())}\") #prints 32001`\r\n> \r\n> Why is this discrepancy between `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and `tokenizer.pad_token='[PAD]'` ?\r\n> \r\n> Downstream issues: The Stanford Alpaca model independently trained on decapoda-research/llama-7b-hf at \"chavinlo/alpaca-native\" uses `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and hence the model's vocab size is set to 32001.\r\n\r\nSeems as if this discrepancy is done intentionally. With `tranformers==4.30.0.dev0`, \r\n```\r\nfrom transformers import (\r\n LlamaForCausalLM, \r\n LlamaTokenizer\r\n)\r\ntokenizer = LlamaTokenizer.from_pretrained(\"/root/HF_llama\")\r\nmodel = LlamaForCausalLM.from_pretrained(\"/root/HF_llama\").to(\"cuda\")\r\n\r\ntokenized_text = tokenizer([\"some text\", \"this will cause padding\"], padding = True, return_tensors='pt').to(\"cuda\")\r\nmodel.generate(tokenized_text['input_ids'])\r\n```\r\n\r\n### Output\r\n```\r\nValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` \r\n\r\n`(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': \r\n\r\n\\'[PAD]'})`.\r\n```\r\n\r\nWhat's the reasoning behind the distinction of the two methods?", "Hey @christoukmaji , this kind of question should be asked on the [forum](https://discuss.huggingface.co/). \r\nThe first method will set `pad_token_id` to `2` while the other will give a different index. 
", "> Note that the EOS token returned by `tokenizer.eos_token` is wrong in any case (this is a known issue and @ArthurZucker should fix this). The EOS token is not `\"\"` but `\"<s>\"`. Once this issue is fixed, doing `tokenizer.pad_token = tokenizer.eos_token` will be possible.\r\n\r\nI think that `bos_token = \"<s>\"` and `eos_token = \"</s>\"`, you have a mistake. ", "> There is also a weird issue of increase in vocab size depending on how we add the pad token.\r\n> \r\n> Method 1:\r\n> \r\n> `from transformers import LlamaTokenizer, LlamaForCausalLM` `tokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")` `tokenizer.pad_token='[PAD]'` `print(f\"pad_token_id={tokenizer.pad_token_id}\") #prints 0` `print(f\"vocab length={len(tokenizer.get_vocab())}\") #prints 32000`\r\n> \r\n> Method 2 `from transformers import LlamaTokenizer, LlamaForCausalLM` `tokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")` `num_spl_tokens_added=tokenizer.add_special_tokens({'pad_token': '[PAD]'}) #returns 1 ` `print(f\"pad_token_id={tokenizer.pad_token_id}\") #prints 32000` `print(f\"vocab length={len(tokenizer.get_vocab())}\") #prints 32001`\r\n> \r\n> Why is this discrepancy between `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and `tokenizer.pad_token='[PAD]'` ?\r\n> \r\n> Downstream issues: The Stanford Alpaca model independently trained on decapoda-research/llama-7b-hf at \"chavinlo/alpaca-native\" uses `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` and hence the model's vocab size is set to 32001.\r\n\r\nSo what is the difference between the two and what would be the appropriate practice between the two?", "Method 1 does not really work if you want to have a different token for padding and `<unk>`: \r\n```python \r\n>>> from transformers import LlamaTokenizer, LlamaForCausalLM\r\n>>> tokenizer = LlamaTokenizer.from_pretrained(\"decapoda-research/llama-7b-hf\")\r\n>>> tokenizer.pad_token='[PAD]' \r\n>>> tokenizer.pad_token\r\n['PAD']\r\n>>> tokenizer.pad_token_id\r\n0\r\n>>> tokenizer.unk_token_id\r\n0\r\n``` \r\nThe pad tokens was not `added` but just set, which means it is unkown and will be always encoded as 0. ", "the solution suggested here doesn't work afaik if the model doesn't have that token, right?\r\n\r\nsee: https://stackoverflow.com/questions/76633368/why-does-the-falcon-qlora-tutorial-code-use-eos-token-as-pad-token/76639568#76639568", "Given recent release of Llama2,, and in the light of the fact that resizing from 32K to 32K+1 can make inference and training slower, will support `padding_index=-1`. I'll be working on this soon! ", "Curious what does padding_index=-1 mean and how does it solve the problem?\r\n-----\r\nBrando Miranda\r\nPh.D. Student\r\nComputer Science, Stanford University\r\nEDGE Scholar, Stanford University\r\n***@***.***\r\nwebsite: https://brando90.github.io/brandomiranda/home.html\r\nmentorship opportunities: https://brando90.github.io/brandomiranda/prospective-collaborations.html\r\n\r\n\r\n\r\nOn Jul 25, 2023, at 9:48 AM, Arthur ***@***.***> wrote:\r\n\r\n\r\n\r\nGiven recent release of Llama2,, and in the light of the fact that resizing from 32K to 32K+1 can make inference and training slower, will support padding_index=-1. 
I'll be working on this soon!\r\n\r\n—\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/issues/22312#issuecomment-1650188903>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AAOE6LRKD5BFJZ6VV5KVVALXR72FVANCNFSM6AAAAAAWDZNEHI>.\r\nYou are receiving this because you commented.Message ID: ***@***.***>\r\n\r\n", "If you set the padding index of the token embedding layer to -1, you don't need to change the size of the vocab, neither for the model nor for the tokenizer. The embeding layer will send zeros when it will see padding token, as it is supposed to and as it is implemented in the original Llama codebase! ", "If you want to follow the advances: #25088", "@ArthurZucker is the padding problem solved, how we have to set pad token", "Hey! PR is not merged yet, should be by the end of the week.!", "great , thank you", "@ArthurZucker looks like it's merged now — thanks for fixing this!\r\n\r\nThe PR seems to add `pad_to_multiple_of` — it's a little unclear to me how that fixes this issue. Will llama-2's tokenizer work with batch inference out of the box with this change, or do we need to do something to configure the padding still?\r\n\r\n", "Yes! The idea is that depending on your hardware, you should choose a `pad_to_multiple_of` value. This is for people who need performance optimisation. Otherwise, just add a padding token and resize normally. Gonna add a little bit of doc today about this! ", "I guess what's unclear is how `pad_to_multiple_of` addresses the issue you highlighted in your previous comment:\r\n> in the light of the fact that resizing from 32K to 32K+1 can make inference and training slower, will support padding_index=-1\r\n\r\nI thought the problem here was that we can't add a padding token without going to 32K+1, and using an existing token such as `eos` or `unk` is sub-optimal because that was not how the model was trained.\r\n\r\n" ]
1,679
1,707
1,682
NONE
null
### System Info System info: - Code: Current `main` branch, installed via: `pip install git+https://github.com/huggingface/transformers` on 22nd March 2023 ### Who can help? @ArthurZucker @sgugger @zphang ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction - Code to reproduce: ``` from transformers import LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") print(repr(tokenizer.pad_token)) ## None print(repr(tokenizer.bos_token)) ## '' print(repr(tokenizer.eos_token)) ## '' ``` - Where this causes an issue: ``` batch = tokenizer( [ "Singer Billy Joel yesterday ", "The primary use of LLaMA is research on large language " ], return_tensors="pt", padding=True ) ``` The above statement raises an issue: ``` Using pad_token, but it is not set yet. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[53], line 1 ----> 1 batch = tokenizer( 2 [ 3 "Singer Billy Joel yesterday ", 4 "The primary use of LLaMA is research on large language " 5 ], 6 return_tensors="pt", 7 padding=True 8 ) File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2531, in PreTrainedTokenizerBase.__call__(self, text, text_pair, text_target, text_pair_target, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2529 if not self._in_target_context_manager: 2530 self._switch_to_input_mode() -> 2531 encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) 2532 if text_target is not None: 2533 self._switch_to_target_mode() File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2617, in PreTrainedTokenizerBase._call_one(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2612 raise ValueError( 2613 f"batch length of `text`: {len(text)} does not match batch length of `text_pair`:" 2614 f" {len(text_pair)}." 2615 ) 2616 batch_text_or_text_pairs = list(zip(text, text_pair)) if text_pair is not None else text -> 2617 return self.batch_encode_plus( 2618 batch_text_or_text_pairs=batch_text_or_text_pairs, 2619 add_special_tokens=add_special_tokens, 2620 padding=padding, 2621 truncation=truncation, 2622 max_length=max_length, 2623 stride=stride, 2624 is_split_into_words=is_split_into_words, 2625 pad_to_multiple_of=pad_to_multiple_of, 2626 return_tensors=return_tensors, 2627 return_token_type_ids=return_token_type_ids, 2628 return_attention_mask=return_attention_mask, 2629 return_overflowing_tokens=return_overflowing_tokens, 2630 return_special_tokens_mask=return_special_tokens_mask, 2631 return_offsets_mapping=return_offsets_mapping, 2632 return_length=return_length, 2633 verbose=verbose, 2634 **kwargs, 2635 ) 2636 else: 2637 return self.encode_plus( 2638 text=text, 2639 text_pair=text_pair, (...) 
2655 **kwargs, 2656 ) File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2799, in PreTrainedTokenizerBase.batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2782 """ 2783 Tokenize and prepare for the model a list of sequences or a list of pairs of sequences. 2784 (...) 2795 details in `encode_plus`). 2796 """ 2798 # Backward compatibility for 'truncation_strategy', 'pad_to_max_length' -> 2799 padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies( 2800 padding=padding, 2801 truncation=truncation, 2802 max_length=max_length, 2803 pad_to_multiple_of=pad_to_multiple_of, 2804 verbose=verbose, 2805 **kwargs, 2806 ) 2808 return self._batch_encode_plus( 2809 batch_text_or_text_pairs=batch_text_or_text_pairs, 2810 add_special_tokens=add_special_tokens, (...) 2825 **kwargs, 2826 ) File /home/ec2-user/anaconda3/envs/llm-gen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:2436, in PreTrainedTokenizerBase._get_padding_truncation_strategies(self, padding, truncation, max_length, pad_to_multiple_of, verbose, **kwargs) 2434 # Test if we have a padding token 2435 if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0): -> 2436 raise ValueError( 2437 "Asking to pad but the tokenizer does not have a padding token. " 2438 "Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` " 2439 "or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`." 2440 ) 2442 # Check that we will truncate to a multiple of pad_to_multiple_of if both are provided 2443 if ( 2444 truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE 2445 and padding_strategy != PaddingStrategy.DO_NOT_PAD (...) 2448 and (max_length % pad_to_multiple_of != 0) 2449 ): ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. ``` ### Expected behavior The following code should work: ``` from transformers import LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf") batch = tokenizer( [ "Singer Billy Joel yesterday ", "The primary use of LLaMA is research on large language " ], return_tensors="pt", padding=True ) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22312/reactions", "total_count": 10, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 5, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22312/timeline
completed
null
null
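For reference, a minimal sketch of the workaround the thread above converges on: register a dedicated `[PAD]` token and resize the embedding matrix so the new id (32000) has a row. The checkpoint name is the one used in the issue; the resize step is an assumption about a fine-tuning setup, not something the issue itself performs.

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

# add_special_tokens (unlike plain attribute assignment) creates a new vocab entry,
# so the pad id no longer collides with <unk> (id 0)
num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))  # 32000 -> 32001

batch = tokenizer(
    ["Singer Billy Joel yesterday", "The primary use of LLaMA is research on large language"],
    return_tensors="pt",
    padding=True,
)
```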
https://api.github.com/repos/huggingface/transformers/issues/22311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22311/comments
https://api.github.com/repos/huggingface/transformers/issues/22311/events
https://github.com/huggingface/transformers/pull/22311
1,635,717,778
PR_kwDOCUB6oc5MpAI-
22,311
Enforce `max_memory` for device_map strategies
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? In #22271, by removing the `max_memory` from the kwargs before it gets passed to `get_balanced_memory`, I effectively made the `max_memory` argument ignored when `device_map` is `"auto"`, `"balanced"` or `"balanced_low_0"` (as was caught in the multi-GPU tests). This PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22311/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22311", "html_url": "https://github.com/huggingface/transformers/pull/22311", "diff_url": "https://github.com/huggingface/transformers/pull/22311.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22311.patch", "merged_at": 1679491328000 }
https://api.github.com/repos/huggingface/transformers/issues/22310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22310/comments
https://api.github.com/repos/huggingface/transformers/issues/22310/events
https://github.com/huggingface/transformers/pull/22310
1,635,670,992
PR_kwDOCUB6oc5Mo2Pg
22,310
Generate: Export TF generate with a TF tokenizer
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Rocketknight1 no need to review, but FYI -- we can now compile the whole thing into a graph :D", "_The documentation is not available anymore as the PR was closed or merged._", "This is amazing!" ]
1,679
1,679
1,679
MEMBER
null
# What does this PR do? See #22254 As the title says, this PR adds the possibility to export TF generate with a TF-native tokenizer -- the full thing in a single TF graph 🤯 The missing piece was removing a redundant `if` before `tf.while_cond` -- `tf.while_cond` checks the condition before running the body, so the existing `if` before it was redundant. It was also the root cause behind the error in #22254, so removing it was a double win 🎉 A test was added to ensure we don't regress.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22310/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22310", "html_url": "https://github.com/huggingface/transformers/pull/22310", "diff_url": "https://github.com/huggingface/transformers/pull/22310.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22310.patch", "merged_at": 1679497221000 }
https://api.github.com/repos/huggingface/transformers/issues/22309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22309/comments
https://api.github.com/repos/huggingface/transformers/issues/22309/events
https://github.com/huggingface/transformers/pull/22309
1,635,568,976
PR_kwDOCUB6oc5MogQb
22,309
[`MBart`] Add `accelerate` support for MBart
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Partially fixes: #22305 This PR adds `accelerate` support for `MBart` models. To run the `accelerate` tests: ```bash RUN_SLOW=1 pytest -m accelerate_tests tests/models/mbart/test_modeling_mbart.py ``` A fix similar to https://github.com/huggingface/transformers/pull/19927 needs to be applied in order for the tests to pass cc @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22309/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22309", "html_url": "https://github.com/huggingface/transformers/pull/22309", "diff_url": "https://github.com/huggingface/transformers/pull/22309.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22309.patch", "merged_at": 1679564083000 }
https://api.github.com/repos/huggingface/transformers/issues/22308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22308/comments
https://api.github.com/repos/huggingface/transformers/issues/22308/events
https://github.com/huggingface/transformers/issues/22308
1,635,475,791
I_kwDOCUB6oc5he2FP
22,308
Using FNet model in Encoder Decoder Models
{ "login": "Parmida-Granfar", "id": 26651199, "node_id": "MDQ6VXNlcjI2NjUxMTk5", "avatar_url": "https://avatars.githubusercontent.com/u/26651199?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Parmida-Granfar", "html_url": "https://github.com/Parmida-Granfar", "followers_url": "https://api.github.com/users/Parmida-Granfar/followers", "following_url": "https://api.github.com/users/Parmida-Granfar/following{/other_user}", "gists_url": "https://api.github.com/users/Parmida-Granfar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Parmida-Granfar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Parmida-Granfar/subscriptions", "organizations_url": "https://api.github.com/users/Parmida-Granfar/orgs", "repos_url": "https://api.github.com/users/Parmida-Granfar/repos", "events_url": "https://api.github.com/users/Parmida-Granfar/events{/privacy}", "received_events_url": "https://api.github.com/users/Parmida-Granfar/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @Parmida-Granfar \r\n\r\nI would suggest to make the necessary changes to the `EncoderDecoderModel` and/or `FNetModel` code according your own need.\r\nAs you have already observed `FNet does not have attention` (very nice finding 💯 ), you can remove the following line\r\n\r\nhttps://github.com/huggingface/transformers/blob/5fd4e3c87c685fba2dd9615be62131748a8b5ee3/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L592\r\n\r\nand see if everything works then. If you still need any further help, I am more than happy to answer (but in a thread on [Hugging Face Forums](https://discuss.huggingface.co/) instead)\r\n\r\nWith all the modeling architectures coming out at a fast pace nowadays, it's not practical and realistic to make composite modeling like `EncoderDecoder` to handle all pairs of encoder and decoder models. But the good thing is the code is open source, and everyone can make changes to it :-). \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
Hello everyone, I want to train a model with an FNet encoder and another transformer model, like GPT, as the decoder. I searched and found EncoderDecoderModel in the Hugging Face library, which makes such combinations easier. I put the link below: https://huggingface.co/transformers/v3.5.1/model_doc/encoderdecoder.html#transformers.EncoderDecoderModel . I want to use the FNet model in Encoder Decoder Models but cannot, because I face this error: > TypeError: forward() got an unexpected keyword argument 'attention_mask' I understand that this is because FNet does not have attention, but I do not know how to resolve it. I searched the internet and found out that EncoderDecoderModel does not work for all transformer models; I wanted to know why and to suggest adding FNet support.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22308/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22307/comments
https://api.github.com/repos/huggingface/transformers/issues/22307/events
https://github.com/huggingface/transformers/pull/22307
1,635,043,529
PR_kwDOCUB6oc5Mmufy
22,307
Fix --bf16 option support for Neuron after PR #22300
{ "login": "jeffhataws", "id": 56947987, "node_id": "MDQ6VXNlcjU2OTQ3OTg3", "avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeffhataws", "html_url": "https://github.com/jeffhataws", "followers_url": "https://api.github.com/users/jeffhataws/followers", "following_url": "https://api.github.com/users/jeffhataws/following{/other_user}", "gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions", "organizations_url": "https://api.github.com/users/jeffhataws/orgs", "repos_url": "https://api.github.com/users/jeffhataws/repos", "events_url": "https://api.github.com/users/jeffhataws/events{/privacy}", "received_events_url": "https://api.github.com/users/jeffhataws/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> This means no mixed precision at all will be used during training as this variable controls the autocast context manager.\r\n\r\n@sgugger could you help point me to the autocast context manager? Is there a way to make it use [PyTorch autocast](https://pytorch.org/docs/stable/amp.html) instead of cuda.amp.autocast?", "The autocast context manager is defined [here](https://github.com/huggingface/transformers/blob/f48d3314e42bf54accc9dd8fd8dc1bf4197b34c6/src/transformers/trainer.py#L2604).\r\n\r\nAs for your question on `torch.autocast`, we can't use it as it's only in very recent versions of PyTorch and we support PyTorch >= 1.9", "> The autocast context manager is defined [here](https://github.com/huggingface/transformers/blob/f48d3314e42bf54accc9dd8fd8dc1bf4197b34c6/src/transformers/trainer.py#L2604).\r\n> \r\n> As for your question on `torch.autocast`, we can't use it as it's only in very recent versions of PyTorch and we support PyTorch >= 1.9\r\n\r\nOk. Thanks @sgugger . Please see my revised PR. It does resolve the runtime error while keeping the autocast functionality.", "Mmm we cannot patch torch like this in Transformers as it's too magical and might yield to hard-to-debug issues for the users.", "> Mmm we cannot patch torch like this in Transformers as it's too magical and might yield to hard-to-debug issues for the users.\r\n\r\nThanks. Please take a look at the new revision. I switched to cpu_amp.", "> Mmm we cannot patch torch like this in Transformers as it's too magical and might yield to hard-to-debug issues for the users.\r\n\r\n@sgugger looks like using cpu_amp did not yield expected result, as the XLA/HLO graphs generated still all have fp32 ports so effectively bf16 flag has no effect. The only way I can get it to work is to use gpu_amp with the override \"torch.cuda.is_bf16_supported = lambda: True\" which is limited to Neuron (if is_torch_neuroncore_available) and thus will be using torch_neuronx package and not using torch.cuda anyways so it is safe. Let me know if it is still acceptable, and I will resubmit a revision.", "I don't understand why it is necessary to patch torch.cuda for something you are telling me will not use torch.cuda anyway. Looks like there is some specific neuroncore tests that are necessary to fix the issue, but as I said before, patching torch.cuda is too magical to be accepted in Transformers. The only patch to other modules we accept are those done briefly inside a context manager.", "> I don't understand why it is necessary to patch torch.cuda for something you are telling me will not use torch.cuda anyway. Looks like there is some specific neuroncore tests that are necessary to fix the issue, but as I said before, patching torch.cuda is too magical to be accepted in Transformers. The only patch to other modules we accept are those done briefly inside a context manager.\r\n\r\nBy \"not using torch.cuda anyways\" I meant we use the GPU AMP feature to autocast to bfloat16, but once that's done, the rest is executed on Neuron. I will keep debugging, but the CPU AMP feature is not working well with pytorch XLA. ", "@sgugger I have posted a revert here https://github.com/huggingface/transformers/pull/22451 . Apologies for the extra work." ]
1,679
1,680
1,679
CONTRIBUTOR
null
This PR fixes the "RuntimeError: No CUDA GPUs are available" when running with --bf16 option on Neuron. Related PRs: https://github.com/huggingface/transformers/pull/20684 https://github.com/huggingface/transformers/pull/22300 # What does this PR do? While PR #22300 restores fp16 option on XLA GPU device, it causes "RuntimeError: No CUDA GPUs are available" when running with --bf16 option on Neuron. This PR fixes this error. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? (Manual test below) ``` export TASK_NAME=mrpc python3 ./run_glue.py \ --model_name_or_path bert-large-uncased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --bf16 \ --max_seq_length 128 \ --per_device_train_batch_size 8 \ --learning_rate 2e-5 \ --num_train_epochs 5 \ --overwrite_output_dir \ --output_dir /tmp/$TASK_NAME/ |& tee log_run ``` ``` ***** train metrics ***** epoch = 5.0 train_loss = 0.2675 train_runtime = 0:09:46.82 train_samples = 3668 train_samples_per_second = 31.253 train_steps_per_second = 3.911 100%|██████████| 51/51 [00:03<00:00, 14.66it/s] ***** eval metrics ***** epoch = 5.0 eval_accuracy = 0.8676 eval_combined_score = 0.8869 eval_f1 = 0.9062 eval_loss = 0.7155 eval_runtime = 0:00:14.42 eval_samples = 408 eval_samples_per_second = 28.289 eval_steps_per_second = 3.536 ``` ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @ymwangg @Lokiiiiii
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22307/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22307", "html_url": "https://github.com/huggingface/transformers/pull/22307", "diff_url": "https://github.com/huggingface/transformers/pull/22307.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22307.patch", "merged_at": 1679588833000 }
https://api.github.com/repos/huggingface/transformers/issues/22306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22306/comments
https://api.github.com/repos/huggingface/transformers/issues/22306/events
https://github.com/huggingface/transformers/issues/22306
1,635,040,912
I_kwDOCUB6oc5hdL6Q
22,306
Malfunctioning of PreTrainedTokenizer's tokenize method
{ "login": "chless", "id": 76512090, "node_id": "MDQ6VXNlcjc2NTEyMDkw", "avatar_url": "https://avatars.githubusercontent.com/u/76512090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chless", "html_url": "https://github.com/chless", "followers_url": "https://api.github.com/users/chless/followers", "following_url": "https://api.github.com/users/chless/following{/other_user}", "gists_url": "https://api.github.com/users/chless/gists{/gist_id}", "starred_url": "https://api.github.com/users/chless/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chless/subscriptions", "organizations_url": "https://api.github.com/users/chless/orgs", "repos_url": "https://api.github.com/users/chless/repos", "events_url": "https://api.github.com/users/chless/events{/privacy}", "received_events_url": "https://api.github.com/users/chless/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @chless, thanks for raising this issue. \r\n\r\nThe `Ġ` symbol is a special character which represents a space. The reason it's seen here is that the sentence is tokenized into `\"example\"` and `\" input\"` i.e. `Ġ` indicates there's a space in front of `input`. We see the symmetric if the words are reversed:\r\n\r\n```py\r\n>>> tokenizer = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\n>>> tokenizer.tokenize(\"input example\")\r\n['input', 'Ġexample']\r\n```\r\n\r\nThe reverse mapping, `tokenizer.decode(tokenizer.encode(...))`, is consistent as a space is added in front of `input` in the decoded sentence.\r\n\r\nMore discussion about the `Ġ` symbol and tokenization can be found [here in the forums](https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475/2?u=amyeroberts). ", "It looks like the behavior you're seeing is actually expected, and the Ġ symbol is a special character used by the tokenizer to represent spaces.\r\n\r\nWhen you call tokenizer.tokenize(\"example input\"), the tokenizer is splitting the input string into two tokens: \"example\" and \"Ġinput\". The Ġ symbol in front of \"input\" indicates that there is a space between \"example\" and \"input\".\r\n\r\nSimilarly, when you call tokenizer.decode(tokenizer.encode(\"example input\", add_special_tokens=False)), the tokenizer is encoding the string into two tokens: \"example\" and \"Ġinput\", and then decoding those tokens back into the original string with a space between \"example\" and \"input\".\r\n\r\nSo the behavior you're seeing is actually expected, and there's no need to fix anything. If you want to remove the Ġ symbol, you can simply join the tokens returned by tokenizer.tokenize with spaces using the .join() method. \r\nFor example:\r\ntokens = tokenizer.tokenize(\"example input\")\r\nstring = \" \".join(tokens)\r\n\r\nThis should give you the output without the Ġ symbol \r\n", "Thanks for your information.\r\nNow I understand why the Ġ symbol comes, and it is proper usage.\r\nI will close this issue." ]
1,679
1,680
1,680
NONE
null
### System Info * transformers.__version__ '4.25.1' I found that at least one tokenizer's tokenize method returns unexpected output. Below is code to reproduce it ```python transformers.__version__ '4.25.1' tokenizer = transformers.BartTokenizer.from_pretrained("facebook/bart-large") tokenizer.tokenize("example input") ['example', 'Ġinput'] tokenizer.decode(tokenizer.encode("example input", add_special_tokens=False)) 'example input' ``` As you can see, if I use the "tokenize" method, it prefixes each token with some strange character. However, the encode-decode round trip gives the correct answer. If I'm right, this should be fixed. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` transformers.__version__ '4.25.1' tokenizer = transformers.BartTokenizer.from_pretrained("facebook/bart-large") tokenizer.tokenize("example input") ['example', 'Ġinput'] tokenizer.decode(tokenizer.encode("example input", add_special_tokens=False)) 'example input' ``` ### Expected behavior ``` transformers.__version__ '4.25.1' tokenizer = transformers.BartTokenizer.from_pretrained("facebook/bart-large") tokenizer.tokenize("example input") ['example', 'input'] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22306/timeline
completed
null
null
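A small sketch of the behaviour explained in the answers above: the `Ġ` prefix is how the byte-level BPE tokenizer encodes a leading space, and `convert_tokens_to_string` maps the tokens back to the original text (the tokenizer's own helper, rather than the `" ".join(...)` suggestion from the thread, which would keep the `Ġ` characters).

```python
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")

tokens = tokenizer.tokenize("example input")
print(tokens)                                      # ['example', 'Ġinput']  (Ġ marks a leading space)
print(tokenizer.convert_tokens_to_string(tokens))  # 'example input'
```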
https://api.github.com/repos/huggingface/transformers/issues/22305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22305/comments
https://api.github.com/repos/huggingface/transformers/issues/22305/events
https://github.com/huggingface/transformers/issues/22305
1,635,020,142
I_kwDOCUB6oc5hdG1u
22,305
MarianMTModel/MBartForConditionalGeneration does not support `device_map='auto'` yet
{ "login": "TranPhu1999", "id": 43123257, "node_id": "MDQ6VXNlcjQzMTIzMjU3", "avatar_url": "https://avatars.githubusercontent.com/u/43123257?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TranPhu1999", "html_url": "https://github.com/TranPhu1999", "followers_url": "https://api.github.com/users/TranPhu1999/followers", "following_url": "https://api.github.com/users/TranPhu1999/following{/other_user}", "gists_url": "https://api.github.com/users/TranPhu1999/gists{/gist_id}", "starred_url": "https://api.github.com/users/TranPhu1999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TranPhu1999/subscriptions", "organizations_url": "https://api.github.com/users/TranPhu1999/orgs", "repos_url": "https://api.github.com/users/TranPhu1999/repos", "events_url": "https://api.github.com/users/TranPhu1999/events{/privacy}", "received_events_url": "https://api.github.com/users/TranPhu1999/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @TranPhu1999, thanks for raising this issue.\r\n\r\nYes, it seems an equivalent update to the MBart and MarianMT models would need to be added, as the one added [to XGLM](https://github.com/huggingface/transformers/pull/22207/). Would you like to open a PR to add these changes? \r\n\r\ncc @younesbelkada ", "Hi @TranPhu1999 \r\nYou should be now able to use 8bit models for MBart, you can just do:\r\n```bash\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\nI will work later on adding the same support for Marian as well", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Any updates on MarianMT models?", "cc @younesbelkada " ]
1,679
1,693
1,682
NONE
null
### System Info Hi, I'm experimenting with some Transformer models for the translation task. These models are [vinai-translate-en2vi](https://huggingface.co/vinai/vinai-translate-en2vi), [wmt19-ru-en](https://huggingface.co/facebook/wmt19-ru-en) In an attempt to optimize Transformer inference time on a single GPU, I tried to follow the instructions in this [document](https://huggingface.co/docs/transformers/perf_infer_gpu_one#running-mixedint8-models-single-gpu-setup) but stumbled on this error. I found a similar case [here](https://github.com/huggingface/transformers/issues/22188) where the solution is to add `accelerate` support for the corresponding model. Is that the solution for my problem too? Can anyone share their experience optimizing Transformer inference time? Thanks a lot. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I use this code like the [example](https://huggingface.co/docs/transformers/perf_infer_gpu_one#running-mixedint8-models-single-gpu-setup) ` from transformers import AutoModelForSeq2SeqLM model_name = "vinai-translate-en2vi" model_8bit = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)` and ` from transformers import AutoModelForSeq2SeqLM model_name = "wmt19-ru-en" model_8bit = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)` Error ValueError: MarianMTModel does not support `device_map='auto'` yet. and ValueError: MBartForConditionalGeneration does not support `device_map='auto'` yet. ### Expected behavior The code in the instructions should work
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22305/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22304/comments
https://api.github.com/repos/huggingface/transformers/issues/22304/events
https://github.com/huggingface/transformers/issues/22304
1,634,988,923
I_kwDOCUB6oc5hc_N7
22,304
Why there is no data send to data_collator?
{ "login": "Luoyang144", "id": 63402979, "node_id": "MDQ6VXNlcjYzNDAyOTc5", "avatar_url": "https://avatars.githubusercontent.com/u/63402979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Luoyang144", "html_url": "https://github.com/Luoyang144", "followers_url": "https://api.github.com/users/Luoyang144/followers", "following_url": "https://api.github.com/users/Luoyang144/following{/other_user}", "gists_url": "https://api.github.com/users/Luoyang144/gists{/gist_id}", "starred_url": "https://api.github.com/users/Luoyang144/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Luoyang144/subscriptions", "organizations_url": "https://api.github.com/users/Luoyang144/orgs", "repos_url": "https://api.github.com/users/Luoyang144/repos", "events_url": "https://api.github.com/users/Luoyang144/events{/privacy}", "received_events_url": "https://api.github.com/users/Luoyang144/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That's related to the data you are preprocessing, not the Transformers library or its examples. There is simply no `\"seq2seq2\"` in the features you prepare with your function. I suggest posting on the [forums](https://discuss.huggingface.co/) to get help from the larger community.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-centos-7.3.1611-Core - Python version: 3.7.13 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction **I'm trying to use the code below** and get the error `KeyError: 'seq2seq'`; after printing the features passed to data_collator, I get output like the following: `[{}, {}, {}, {}]` Why did this happen? I printed the train dataset and got the correct result, but when using `trainer.train()` it cannot get the data I need. [train.txt](https://github.com/huggingface/transformers/files/11035839/train.txt) ### Expected behavior How can I send data in the training process? Thanks for your help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22304/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22303/comments
https://api.github.com/repos/huggingface/transformers/issues/22303/events
https://github.com/huggingface/transformers/issues/22303
1,634,974,505
I_kwDOCUB6oc5hc7sp
22,303
"LlamaTokenizer" in transformers._import_structure["models.llama"] │ │ 9 ), "LLaMA is now in HuggingFace's main branch.\nPlease reinstall it: pip uninstall trans │ │ 10 from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
{ "login": "lucasjinreal", "id": 21303438, "node_id": "MDQ6VXNlcjIxMzAzNDM4", "avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucasjinreal", "html_url": "https://github.com/lucasjinreal", "followers_url": "https://api.github.com/users/lucasjinreal/followers", "following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}", "gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions", "organizations_url": "https://api.github.com/users/lucasjinreal/orgs", "repos_url": "https://api.github.com/users/lucasjinreal/repos", "events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}", "received_events_url": "https://api.github.com/users/lucasjinreal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Solved.", "> Solved.\r\n\r\nHow?\r\nCould you tell us more?", "> > Solved.\r\n> \r\n> How? Could you tell us more?\r\n\r\npip install sentencepiece && pip install git+https://github.com/huggingface/transformers.git", "Related: #22222 ?", "I do not understand. It was not working with the latest version, but now it seems to work. Maybe somebody updated something very recently. Still the name change is confusing: LlamaTokenizer -> LLaMATokenizer" ]
1,679
1,680
1,679
NONE
null
### System Info transformers main does have LlamaTokenizer ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction pip uninstall transformers && pip install git+https://github.com/huggingface/transformers.git ### Expected behavior Have it
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22303/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22302/comments
https://api.github.com/repos/huggingface/transformers/issues/22302/events
https://github.com/huggingface/transformers/pull/22302
1,634,606,345
PR_kwDOCUB6oc5MlSET
22,302
Fixed bug to calculate correct xpath_sub_list in MarkupLMTokenizer
{ "login": "silentghoul-spec", "id": 58596410, "node_id": "MDQ6VXNlcjU4NTk2NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/58596410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/silentghoul-spec", "html_url": "https://github.com/silentghoul-spec", "followers_url": "https://api.github.com/users/silentghoul-spec/followers", "following_url": "https://api.github.com/users/silentghoul-spec/following{/other_user}", "gists_url": "https://api.github.com/users/silentghoul-spec/gists{/gist_id}", "starred_url": "https://api.github.com/users/silentghoul-spec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silentghoul-spec/subscriptions", "organizations_url": "https://api.github.com/users/silentghoul-spec/orgs", "repos_url": "https://api.github.com/users/silentghoul-spec/repos", "events_url": "https://api.github.com/users/silentghoul-spec/events{/privacy}", "received_events_url": "https://api.github.com/users/silentghoul-spec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for fixing!" ]
1,679
1,680
1,679
CONTRIBUTOR
null
Fixed the problem where xpath_sub_list was computed incorrectly due to a bug in the MarkupLM tokenizers. # What does this PR do? This PR fixes a bug in the get_xpath_seq method of the MarkupLMTokenizer class inside the src.transformers.models.markuplm.tokenization_markuplm module. Previously, at line 304, xpath_sub_list was assigned the same instance as xpath_tag_list, which made the embeddings of tags with different subscripts identical, e.g. li[0] was treated the same as li[1]. It also fixes the same bug in the src.transformers.models.markuplm.tokenization_markuplm_fast module. Fixes # (issue) Fixes the incorrect xpath_sub_list computation in the get_xpath_seq method of the MarkupLMTokenizer class inside the src.transformers.models.markuplm.tokenization_markuplm module, and also fixes the same issue in the get_xpath_seq method of the MarkupLMTokenizerFast class inside the src.transformers.models.markuplm.tokenization_markuplm_fast module. ## Before submitting - [No] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [Yes] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [No] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [Yes] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [No] Did you write any new necessary tests? ## Who can review? @NielsRogge
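For illustration only (not the actual patch): a hedged sketch of the tag/subscript split that `get_xpath_seq` is meant to perform, showing why reusing the tag list as the subscript list makes li[0] and li[1] indistinguishable. The helper name and the default subscript value are assumptions.

```python
def split_xpath(xpath: str):
    """Split an xpath such as '/html/body/ul/li[1]' into parallel tag and subscript lists."""
    tags, subs = [], []
    for unit in xpath.split("/"):
        if not unit:
            continue
        if "[" in unit:
            tag, sub = unit.rstrip("]").split("[")
            subs.append(int(sub))
        else:
            tag = unit
            subs.append(0)  # assumed default subscript when no index is present
        tags.append(tag)
    return tags, subs

tags, subs = split_xpath("/html/body/ul/li[1]")
print(tags)  # ['html', 'body', 'ul', 'li']
print(subs)  # [0, 0, 0, 1]
# The reported bug effectively did `xpath_subs_list = xpath_tags_list`,
# so li[0] and li[1] ended up with identical subscript embeddings.
```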
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22302/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22302", "html_url": "https://github.com/huggingface/transformers/pull/22302", "diff_url": "https://github.com/huggingface/transformers/pull/22302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22302.patch", "merged_at": 1679486869000 }
https://api.github.com/repos/huggingface/transformers/issues/22301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22301/comments
https://api.github.com/repos/huggingface/transformers/issues/22301/events
https://github.com/huggingface/transformers/issues/22301
1,634,569,900
I_kwDOCUB6oc5hbY6s
22,301
BlenderbotSmall incorrect usage of start and end tokens
{ "login": "xenova", "id": 26504141, "node_id": "MDQ6VXNlcjI2NTA0MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xenova", "html_url": "https://github.com/xenova", "followers_url": "https://api.github.com/users/xenova/followers", "following_url": "https://api.github.com/users/xenova/following{/other_user}", "gists_url": "https://api.github.com/users/xenova/gists{/gist_id}", "starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xenova/subscriptions", "organizations_url": "https://api.github.com/users/xenova/orgs", "repos_url": "https://api.github.com/users/xenova/repos", "events_url": "https://api.github.com/users/xenova/events{/privacy}", "received_events_url": "https://api.github.com/users/xenova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey! Thanks for reporting, will investigate! ", "Hey! When I use the Conversational pipeline I get the same outputs as you. \r\nRegarding the content of the special tokens, it does not really matter as long as the mapping is correct. If the model's bos_id is 1, then as long as `<s>` maps to `1` then the generation will make sense. \r\nAnd indeed we have:\r\n```python \r\nIn [35]: tokenizer.encode(\"<s>\")\r\nOut[35]: [3, 330, 1360]\r\n\r\nIn [36]: tokenizer.encode(\"__start__\")\r\nOut[36]: [1]\r\n``` \r\nThe doc example should be updated, or the tokenizer only should be updated. \r\nNice catch (however, this does not seem to really change the output for this example).\r\nAlso I am not entirely sure of how these `eos` and `bos` should be used in the context of BlenderBot. They should mark the start and end of a conversation when training the model on different converstations, while `\\n` is used to sperate different prompts (so from the user and the bot). \r\nI could not find anything online, gonna take a while to check with the messy original codebase\r\n", "Just bumping this again (in response to being marked as stale)", "When I checked the original PR that added BlenderBot (could not really find anyting on the original repo ... ) seems like the doc example should be updated to use `__end__` and `__start__`. See #4803. ", "Closed in #24092" ]
1,679
1,686
1,686
CONTRIBUTOR
null
### System Info - `transformers` version: 4.27.2 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.3 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction As stated in the documentation: https://huggingface.co/docs/transformers/model_doc/blenderbot-small#transformers.BlenderbotSmallForConditionalGeneration.forward.example the model should use `</s>` and `<s>` for separating the user input and response: ```python from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration mname = "facebook/blenderbot_small-90M" model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname) tokenizer = AutoTokenizer.from_pretrained(mname) UTTERANCE = "My friends are cool but they eat too many carbs." print("Human: ", UTTERANCE) inputs = tokenizer([UTTERANCE], return_tensors="pt") reply_ids = model.generate(**inputs) print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]) REPLY = "I'm not sure" print("Human: ", REPLY) NEXT_UTTERANCE = ( "My friends are cool but they eat too many carbs.</s> <s>what kind of carbs do they eat? " "i don't know much about carbs</s> " "<s> I'm not sure." ) inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt") next_reply_ids = model.generate(**inputs) print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0]) ``` However, these tokens are not present in the [vocabulary](https://huggingface.co/facebook/blenderbot_small-90M/blob/main/vocab.json) or [special tokens](https://huggingface.co/facebook/blenderbot_small-90M/blob/main/special_tokens_map.json) I assume they should be replaced with `__start__` and `__end__`? --- I have also tried to use the [ConversationPipeline](https://huggingface.co/docs/transformers/v4.27.2/en/main_classes/pipelines#transformers.ConversationalPipeline), and follow steps outlined [here](https://huggingface.co/tasks/conversational#inference), but I always get nonsensical results. Even when trying the hosted inference API for the model (https://huggingface.co/facebook/blenderbot_small-90M), it either repeats itself, or doesn't follow in conversation. ### Expected behavior The tokens should be correct, and the chatbot should engage in more realistic conversation
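A hedged variant of the doc snippet using the tokens that actually appear in the checkpoint's vocabulary (`__start__` / `__end__`), as the issue suggests — whether this is the intended conversation format is exactly what the issue asks, so treat it as a guess rather than the confirmed fix.

```python
from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration

mname = "facebook/blenderbot_small-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = AutoTokenizer.from_pretrained(mname)

# __start__ / __end__ are in the checkpoint vocabulary, unlike <s> / </s>
NEXT_UTTERANCE = (
    "My friends are cool but they eat too many carbs.__end__ "
    "__start__ what kind of carbs do they eat? i don't know much about carbs__end__ "
    "__start__ I'm not sure."
)
inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt")
next_reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])
```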
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22301/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22300/comments
https://api.github.com/repos/huggingface/transformers/issues/22300/events
https://github.com/huggingface/transformers/pull/22300
1,634,546,426
PR_kwDOCUB6oc5MlFPi
22,300
Restore fp16 support on xla gpu device
{ "login": "ymwangg", "id": 19481308, "node_id": "MDQ6VXNlcjE5NDgxMzA4", "avatar_url": "https://avatars.githubusercontent.com/u/19481308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ymwangg", "html_url": "https://github.com/ymwangg", "followers_url": "https://api.github.com/users/ymwangg/followers", "following_url": "https://api.github.com/users/ymwangg/following{/other_user}", "gists_url": "https://api.github.com/users/ymwangg/gists{/gist_id}", "starred_url": "https://api.github.com/users/ymwangg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ymwangg/subscriptions", "organizations_url": "https://api.github.com/users/ymwangg/orgs", "repos_url": "https://api.github.com/users/ymwangg/repos", "events_url": "https://api.github.com/users/ymwangg/events{/privacy}", "received_events_url": "https://api.github.com/users/ymwangg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Failure is unrelated and due to the branch being old, it's fixed on main, so merging." ]
1,679
1,679
1,679
CONTRIBUTOR
null
https://github.com/huggingface/transformers/pull/20684 accidentally disabled fp16 support on xla gpu device, which leads to significant performance regression. This PR restores this feature. cc @jeffhataws @sgugger @Lokiiiiii Tested with ``` GPU_NUM_DEVICES=1 python run_mlm.py \ --model_name_or_path bert-base-uncased \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --overwrite_output_dir true \ --output_dir /tmp/test-mlm \ --per_gpu_train_batch_size 24 \ --do_eval \ --fp16 true \ --do_train \ --num_train_epochs 3 \ --optim adamw_torch_xla ``` ``` ***** train metrics ***** epoch = 3.0 train_loss = 1.7725 train_runtime = 0:04:58.00 train_samples = 4627 train_samples_per_second = 46.58 train_steps_per_second = 1.943 INFO:__main__:*** Evaluate *** [INFO|trainer.py:739] 2023-03-21 19:05:53,483 >> The following columns in the evaluation set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `BertForMaskedLM.forward`, you can safely ignore this message. [INFO|trainer.py:3072] 2023-03-21 19:05:53,487 >> ***** Running Evaluation ***** [INFO|trainer.py:3074] 2023-03-21 19:05:53,487 >> Num examples = 479 [INFO|trainer.py:3077] 2023-03-21 19:05:53,487 >> Batch size = 8 100%|██████████| 60/60 [00:07<00:00, 8.38it/s] ***** eval metrics ***** epoch = 3.0 eval_loss = 1.5811 eval_runtime = 0:00:29.83 eval_samples = 479 eval_samples_per_second = 16.055 eval_steps_per_second = 2.011 perplexity = 4.8601 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22300/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22300", "html_url": "https://github.com/huggingface/transformers/pull/22300", "diff_url": "https://github.com/huggingface/transformers/pull/22300.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22300.patch", "merged_at": 1679430763000 }
https://api.github.com/repos/huggingface/transformers/issues/22299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22299/comments
https://api.github.com/repos/huggingface/transformers/issues/22299/events
https://github.com/huggingface/transformers/pull/22299
1,634,527,968
PR_kwDOCUB6oc5MlBRv
22,299
Final update of doctest
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There is a style issue for `src/transformers/models/bertweet/tokenization_bertweet.py`. Will check it tomorrow instead. Sorry.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
COLLABORATOR
null
# What does this PR do? Fix the remaining doc examples. The doc example in `feature_extraction_markuplm` is unfixable (unless we remove the example), so it is not added to the list for testing. Follow-up PR of #22268 and #22292
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22299/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22299", "html_url": "https://github.com/huggingface/transformers/pull/22299", "diff_url": "https://github.com/huggingface/transformers/pull/22299.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22299.patch", "merged_at": 1679443234000 }
https://api.github.com/repos/huggingface/transformers/issues/22298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22298/comments
https://api.github.com/repos/huggingface/transformers/issues/22298/events
https://github.com/huggingface/transformers/pull/22298
1,634,515,815
PR_kwDOCUB6oc5Mk-sn
22,298
Correct NATTEN function signatures and force new version
{ "login": "alihassanijr", "id": 68103095, "node_id": "MDQ6VXNlcjY4MTAzMDk1", "avatar_url": "https://avatars.githubusercontent.com/u/68103095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alihassanijr", "html_url": "https://github.com/alihassanijr", "followers_url": "https://api.github.com/users/alihassanijr/followers", "following_url": "https://api.github.com/users/alihassanijr/following{/other_user}", "gists_url": "https://api.github.com/users/alihassanijr/gists{/gist_id}", "starred_url": "https://api.github.com/users/alihassanijr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alihassanijr/subscriptions", "organizations_url": "https://api.github.com/users/alihassanijr/orgs", "repos_url": "https://api.github.com/users/alihassanijr/repos", "events_url": "https://api.github.com/users/alihassanijr/events{/privacy}", "received_events_url": "https://api.github.com/users/alihassanijr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Unsure why a Flax test is failing. I'm assuming I'll have to wait for another PR to merge and then rebase?", "Hi, Thank you for the fix and PR. Regarding failing flax test, you can ignore it🤗", "This is needed to fix the CI on the main branch (broken by the new release of `natten`) so merging." ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? This complements #22229. (Sorry for breaking it into two; I was traveling when I realized the issue so I only started a quick PR to fix circleci builds.) We're releasing a new [NATTEN](https://github.com/SHI-Labs/NATTEN/pull/24) build that corrects the signature inconsistency between the function calls (see [my comment in the previous PR for more](https://github.com/huggingface/transformers/pull/22229#issuecomment-1474015157).) Rather than wait for a future build, we decided to do it right now because we could end up forgetting to open a PR to transformers. We're finishing up testing the new build, but I figured I'd open this PR before I push to PyPI. If circleci tries to get the latest version from PyPI it would fail the unit tests associated with models depending on this package. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ydshieh @amyeroberts @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22298/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22298", "html_url": "https://github.com/huggingface/transformers/pull/22298", "diff_url": "https://github.com/huggingface/transformers/pull/22298.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22298.patch", "merged_at": 1679433694000 }
https://api.github.com/repos/huggingface/transformers/issues/22297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22297/comments
https://api.github.com/repos/huggingface/transformers/issues/22297/events
https://github.com/huggingface/transformers/issues/22297
1,634,441,294
I_kwDOCUB6oc5ha5hO
22,297
Training wav2vec2 requires a lot of compute power
{ "login": "ngawang88", "id": 62231990, "node_id": "MDQ6VXNlcjYyMjMxOTkw", "avatar_url": "https://avatars.githubusercontent.com/u/62231990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ngawang88", "html_url": "https://github.com/ngawang88", "followers_url": "https://api.github.com/users/ngawang88/followers", "following_url": "https://api.github.com/users/ngawang88/following{/other_user}", "gists_url": "https://api.github.com/users/ngawang88/gists{/gist_id}", "starred_url": "https://api.github.com/users/ngawang88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ngawang88/subscriptions", "organizations_url": "https://api.github.com/users/ngawang88/orgs", "repos_url": "https://api.github.com/users/ngawang88/repos", "events_url": "https://api.github.com/users/ngawang88/events{/privacy}", "received_events_url": "https://api.github.com/users/ngawang88/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ngawang88, thanks for opening an issue! \r\n\r\nThis is a question that is better suited to the [forums](https://discuss.huggingface.co/). We try and keep github issues reserved for bugs and feature requests. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,679
1,682
1,682
NONE
null
I am trying to fine-tune the wav2vec2 model for my national language. I have 15k data points, but during training my system could only handle 1k data points; if I increase the number of data points, my system either crashes or I get a CUDA out-of-memory error. So I am wondering whether there are any alternatives. Secondly, can I first train on 1k data points, save the model locally, then load the model again and train on another 1k new data points to improve my model? Will that actually work?
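Not a fix, but for context: a hedged sketch of the usual levers for fitting wav2vec2 fine-tuning into limited GPU memory — a smaller per-device batch with gradient accumulation, fp16, and gradient checkpointing. The values below are placeholders, not tuned recommendations. On the second question, reloading a previously fine-tuned checkpoint with `from_pretrained` and continuing training on new data is possible, though sequentially fine-tuning on small slices can degrade performance on the earlier data.

```python
from transformers import TrainingArguments

# Illustrative settings only; effective batch size = 2 * 8 = 16
training_args = TrainingArguments(
    output_dir="wav2vec2-finetuned",
    per_device_train_batch_size=2,   # keep the per-step memory footprint small
    gradient_accumulation_steps=8,   # recover a larger effective batch size
    gradient_checkpointing=True,     # trade extra compute for activation memory
    fp16=True,                       # halve most activation memory on GPU
    num_train_epochs=5,
    save_steps=500,
)
```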
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22297/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22296/comments
https://api.github.com/repos/huggingface/transformers/issues/22296/events
https://github.com/huggingface/transformers/pull/22296
1,634,277,972
PR_kwDOCUB6oc5MkMDM
22,296
Add translation perf_infer_gpu_one for it
{ "login": "davidegazze", "id": 1748729, "node_id": "MDQ6VXNlcjE3NDg3Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/1748729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidegazze", "html_url": "https://github.com/davidegazze", "followers_url": "https://api.github.com/users/davidegazze/followers", "following_url": "https://api.github.com/users/davidegazze/following{/other_user}", "gists_url": "https://api.github.com/users/davidegazze/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidegazze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidegazze/subscriptions", "organizations_url": "https://api.github.com/users/davidegazze/orgs", "repos_url": "https://api.github.com/users/davidegazze/repos", "events_url": "https://api.github.com/users/davidegazze/events{/privacy}", "received_events_url": "https://api.github.com/users/davidegazze/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @nickprock The PR is good to merge with your seal of approval :) ", "@amyeroberts LGTM\r\nThanks @davidegazze " ]
1,679
1,679
1,679
CONTRIBUTOR
null
See issue https://github.com/huggingface/transformers/issues/17459. Add Italian translation of perf_infer_gpu_one.mdx and update _toctree.yml. It's my first pull request, so I hope it's OK. The GitHub-related issue is https://github.com/huggingface/transformers/issues/22294. @omarespejel @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22296/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22296", "html_url": "https://github.com/huggingface/transformers/pull/22296", "diff_url": "https://github.com/huggingface/transformers/pull/22296.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22296.patch", "merged_at": 1679418451000 }
https://api.github.com/repos/huggingface/transformers/issues/22295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22295/comments
https://api.github.com/repos/huggingface/transformers/issues/22295/events
https://github.com/huggingface/transformers/pull/22295
1,634,162,644
PR_kwDOCUB6oc5Mjy7b
22,295
Translate perf_infer_gpu one
{ "login": "davidegazze", "id": 1748729, "node_id": "MDQ6VXNlcjE3NDg3Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/1748729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidegazze", "html_url": "https://github.com/davidegazze", "followers_url": "https://api.github.com/users/davidegazze/followers", "following_url": "https://api.github.com/users/davidegazze/following{/other_user}", "gists_url": "https://api.github.com/users/davidegazze/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidegazze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidegazze/subscriptions", "organizations_url": "https://api.github.com/users/davidegazze/orgs", "repos_url": "https://api.github.com/users/davidegazze/repos", "events_url": "https://api.github.com/users/davidegazze/events{/privacy}", "received_events_url": "https://api.github.com/users/davidegazze/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,679
1,679
1,679
CONTRIBUTOR
null
See issue https://github.com/huggingface/transformers/issues/17459. Add Italian translation of perf_infer_gpu_one.mdx and update _toctree.yml. It's my first pull request, so I hope it's OK. The GitHub-related issue is [here](https://github.com/huggingface/transformers/issues/22294). @omarespejel @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22295/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22295", "html_url": "https://github.com/huggingface/transformers/pull/22295", "diff_url": "https://github.com/huggingface/transformers/pull/22295.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22295.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22294/comments
https://api.github.com/repos/huggingface/transformers/issues/22294/events
https://github.com/huggingface/transformers/issues/22294
1,634,156,178
I_kwDOCUB6oc5hZz6S
22,294
Add perf_infer_gpu_one.mdx Italian translation
{ "login": "davidegazze", "id": 1748729, "node_id": "MDQ6VXNlcjE3NDg3Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/1748729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidegazze", "html_url": "https://github.com/davidegazze", "followers_url": "https://api.github.com/users/davidegazze/followers", "following_url": "https://api.github.com/users/davidegazze/following{/other_user}", "gists_url": "https://api.github.com/users/davidegazze/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidegazze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidegazze/subscriptions", "organizations_url": "https://api.github.com/users/davidegazze/orgs", "repos_url": "https://api.github.com/users/davidegazze/repos", "events_url": "https://api.github.com/users/davidegazze/events{/privacy}", "received_events_url": "https://api.github.com/users/davidegazze/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "Thanks again for the contribution and congrats on your first PR @davidegazze 🔥 ! Feel free to close the issue if all of the relevant pieces of work have been merged in. " ]
1,679
1,679
1,679
CONTRIBUTOR
null
See issue https://github.com/huggingface/transformers/issues/17459. Add Italian translation of perf_infer_gpu_one.mdx and update _toctree.yml. It's my first pull request, so I hope it's OK.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22294/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22293/comments
https://api.github.com/repos/huggingface/transformers/issues/22293/events
https://github.com/huggingface/transformers/pull/22293
1,634,103,805
PR_kwDOCUB6oc5MjmPz
22,293
fix: Allow only test_file in pytorch and flax summarization
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,679
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? Fixes #22276 for `flax` and `pytorch` run_summarization. Is this wanted for `tensorflow`'s? I see an option in the tensorflow file to provide a test_file but not for do_predict. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @sgugger
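A hedged sketch (assumed field names mirroring the script's data arguments, not the actual diff) of the kind of validation relaxation the PR describes: a run that supplies only a `test_file` for `--do_predict` should no longer be rejected.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataTrainingArguments:
    dataset_name: Optional[str] = None
    train_file: Optional[str] = None
    validation_file: Optional[str] = None
    test_file: Optional[str] = None

    def __post_init__(self):
        # Relaxed check: a test_file on its own is now accepted (e.g. prediction-only runs)
        if (
            self.dataset_name is None
            and self.train_file is None
            and self.validation_file is None
            and self.test_file is None
        ):
            raise ValueError("Need either a dataset name or a training, validation, or test file.")

# A prediction-only configuration no longer raises:
DataTrainingArguments(test_file="articles.json")
```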
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22293/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22293", "html_url": "https://github.com/huggingface/transformers/pull/22293", "diff_url": "https://github.com/huggingface/transformers/pull/22293.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22293.patch", "merged_at": 1679482017000 }