| column | dtype | range / classes |
|---|---|---|
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/20384
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20384/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20384/comments
https://api.github.com/repos/huggingface/transformers/issues/20384/events
https://github.com/huggingface/transformers/pull/20384
1,459,854,636
PR_kwDOCUB6oc5DdZd6
20,384
More TF int dtype fixes
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Update: This PR now updates all serving signatures and dummy inputs to `tf.int64`. All tests like `test_saved_model_creation_extended` now pass.\r\n\r\nHowever, I couldn't find any way to cleanly save both a `tf.int32` and a `tf.int64` signature. The reason is that our serving methods currently get their signatures from their tf.function() decorator. This decorator makes it very difficult to extract the underlying function, and we can't call `get_concrete_function` on it with a signature that doesn't match its existing signature. This meant it was quite hard even to insert a shim in `save()` or `save_pretrained()` to save both signatures!\r\n\r\nI think we might just have to encourage people onto `tf.int64` as the standard int dtype when exporting models, and require them to pass their own custom signature/function if they want something else. Cc @gante @ydshieh @amyeroberts . If anyone has any good ideas, I'm open!", "@gante that's a really good point - models serialized this way might throw errors if users pass tokens straight from our tokenizers, which doesn't seem very sensible for us to do. Let me keep digging - maybe I can find some hack to export an int32 and int64 serving signature, because that's probably too big of a problem to just leave in the codebase.\r\n\r\n(Though right now the situation is that models serve either only int32 or only int64 based on what's in the serving signature, which differs between models, and that might be even worse)", "Update (with a bit of a deep dive into the TF internals):\r\n\r\n\r\nWhen you save a subclassed `Model` as `SavedModel`, the default signature is the forward pass of the model, with the input shape and dtypes that were used the *first* time the model was called. In the TF internals, what happens is that `model._set_save_spec()` is called when the model is built with those first inputs.
This records the 'spec' of those inputs, and that spec is used to create the model trace for the SavedModel.\r\n\r\nWe used to have a huge number of problems because we build our models by passing tiny dummy inputs to them, which locked in that tiny dummy shape as the `save_spec` of the model. We get around that now by calling `_set_save_spec()` in the `__init__()` of the model, and passing flexible shapes (with `None` dimensions). This mostly works well!\r\n\r\nThe one downside is that even though we can save a spec with flexible shapes by doing this, there's no way to save a spec with flexible dtypes, and multiple specs aren't supported.\r\n\r\nTo save multiple traces with a `SavedModel`, you can use the `signatures` argument to `model.save()`. However, note that these can be a bit hard for users to find - if the user calls the model directly, they will only get the trace from the `save_spec`, which is locked to one dtype. In other words, you get behaviour like this (assuming the save_spec was `tf.int64`):\r\n\r\n```python\r\n# This works\r\nloaded_model(int64_inputs)\r\n\r\n# This complains that it can't find a matching signature\r\nloaded_model(int32_inputs)\r\n```\r\n\r\nIf you explicitly pass a `tf.int32` signature as well as a `tf.int64` signature to the `signatures` argument of `model.save()`, this doesn't fix the issue above. However, you will be able to do this:\r\n\r\n```python\r\n# This works\r\nloaded_model.signatures[\"serving_int32\"](int32_inputs)\r\n```\r\n\r\nIdeally, I'd like the `SavedModel` to be directly callable with `int32` or `int64`, but I haven't been able to find any way to make this possible, so unfortunately I think we have to pick a 'preferred' int dtype and support the other one only via an awkward call to `model.signatures`. 
I've added a commit to support this, but I still wish I could find a better approach.", "I believe you re-run the target test `test_saved_model_creation_extended` and maybe a few ones after this change, right?\r\nIf so, still good for me 💯 !", "Yes, I checked those tests again!" ]
1,669
1,669
1,669
MEMBER
null
This PR fixes (hopefully) the last remaining TF int dtype issues. - [x] Ensure all integer dummy inputs are int32 and add test - [x] Ensure all serving signatures support all-int32 - [x] Check that this fixes `test_saved_model_creation_extended` - [x] Add a second serving signature when saving with `save_pretrained` to support both int dtypes
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20384/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20384", "html_url": "https://github.com/huggingface/transformers/pull/20384", "diff_url": "https://github.com/huggingface/transformers/pull/20384.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20384.patch", "merged_at": 1669641884000 }
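The record above (PR 20384) describes SavedModel behaviour where the default call path only accepts the dtype locked in by the save spec, while extra dtypes are reachable only through explicitly named signatures. As a library-agnostic illustration of that lookup behaviour, here is a plain-Python sketch; `SavedModelStub` and the signature names are hypothetical stand-ins, not the actual TensorFlow implementation:

```python
# Hypothetical sketch of the signature-lookup behaviour described in the PR:
# the direct call only matches the traced (save_spec) dtype, while additional
# signatures registered at save time must be fetched via `.signatures[...]`.
class SavedModelStub:
    def __init__(self, default_dtype="int64"):
        self.default_dtype = default_dtype
        # Named signatures, as registered via the `signatures` argument to save()
        self.signatures = {
            "serving_default": lambda x: ("int64", x),
            "serving_int32": lambda x: ("int32", x),
        }

    def __call__(self, inputs, dtype="int64"):
        # Direct calls only succeed for the dtype baked into the save spec,
        # mirroring "This complains that it can't find a matching signature".
        if dtype != self.default_dtype:
            raise ValueError("no matching signature for dtype " + dtype)
        return self.signatures["serving_default"](inputs)


model = SavedModelStub()
model([1, 2], dtype="int64")               # works: matches the save_spec dtype
model.signatures["serving_int32"]([1, 2])  # works: explicit named signature
```

This mirrors the trade-off the comments settle on: one preferred int dtype on the direct call path, with the other dtype supported only through the more awkward `model.signatures` access.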
https://api.github.com/repos/huggingface/transformers/issues/20383
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20383/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20383/comments
https://api.github.com/repos/huggingface/transformers/issues/20383/events
https://github.com/huggingface/transformers/issues/20383
1,459,843,403
I_kwDOCUB6oc5XA3FL
20,383
[ViT] `'str' object cannot be interpreted as an integer`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "False alarm! \r\nthe snippet works when using `google/vit-base-patch16-224`\r\n```\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import AutoProcessor\r\n\r\nmodel_id = \"google/vit-base-patch16-224\"\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\n# Use the same feature extractor for everyone\r\nfeature_extractor = AutoProcessor.from_pretrained(model_id)\r\ninputs = feature_extractor(images=image, return_tensors=\"pt\")\r\n```\r\nso I expect the fix to be on `hf-internal-testing/tiny-random-ViTModel`, updating `size` attribute by an integer seems to fix the issue.\r\n\r\nOpened a PR in: https://huggingface.co/hf-internal-testing/tiny-random-ViTModel/discussions/2", "I don't think the model should be updated. It does require a source install of Transformers but it was also added since the last release, so it's fair IMO.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
CONTRIBUTOR
null
### System Info using `transformers==4.24.0`, the snippet: ``` import requests from PIL import Image from transformers import AutoProcessor url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # Use the same feature extractor for everyone feature_extractor = AutoProcessor.from_pretrained("hf-internal-testing/tiny-random-ViTModel") inputs = feature_extractor(images=image, return_tensors="pt") ``` fails with the error: ``` 2022-11-22 13:06:49.965720: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-11-22 13:06:50.154832: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2022-11-22 13:06:50.732918: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64 2022-11-22 13:06:50.733007: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64 2022-11-22 13:06:50.733017: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly. 
Traceback (most recent call last): File "/home/younes_huggingface_co/scratch/test_processor.py", line 10, in <module> inputs = feature_extractor(images=image, return_tensors="pt") File "/home/younes_huggingface_co/miniconda3/envs/fix-bnb-test/lib/python3.10/site-packages/transformers/models/vit/feature_extraction_vit.py", line 144, in __call__ images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images] File "/home/younes_huggingface_co/miniconda3/envs/fix-bnb-test/lib/python3.10/site-packages/transformers/models/vit/feature_extraction_vit.py", line 144, in <listcomp> images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images] File "/home/younes_huggingface_co/miniconda3/envs/fix-bnb-test/lib/python3.10/site-packages/transformers/image_utils.py", line 418, in resize return image.resize(size, resample=resample) File "/home/younes_huggingface_co/miniconda3/envs/fix-bnb-test/lib/python3.10/site-packages/PIL/Image.py", line 2082, in resize return self._new(self.im.resize(size, resample, box)) TypeError: 'str' object cannot be interpreted as an integer ``` However, using the main branch fixes the issue, so just flagging it! cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20383/timeline
completed
null
null
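The traceback in the record above (issue 20383) ends in `TypeError: 'str' object cannot be interpreted as an integer` because a string `size` from the checkpoint's config reached `PIL.Image.resize`, which needs integers. A minimal stdlib-only sketch reproduces that error class and shows a defensive coercion of the kind the fix implies; `normalize_size` is a hypothetical helper, not a transformers API:

```python
# Reproduce the error class from the traceback without PIL: passing a string
# where an integer is required raises the same TypeError.
def normalize_size(size):
    """Hypothetical helper: coerce a config 'size' value to an (int, int) tuple."""
    if isinstance(size, str):
        size = int(size)  # the tiny-random-ViTModel config stored size as a string
    if isinstance(size, int):
        return (size, size)
    return tuple(int(s) for s in size)

try:
    range("224")  # str where an int is required
except TypeError as e:
    print(e)  # 'str' object cannot be interpreted as an integer

print(normalize_size("224"))       # (224, 224)
print(normalize_size([224, 160]))  # (224, 160)
```

As the thread notes, the actual resolution was to leave the library code alone and update the `size` attribute in the test checkpoint's config to an integer.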
https://api.github.com/repos/huggingface/transformers/issues/20382
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20382/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20382/comments
https://api.github.com/repos/huggingface/transformers/issues/20382/events
https://github.com/huggingface/transformers/issues/20382
1,459,775,679
I_kwDOCUB6oc5XAmi_
20,382
Finetuning of wav2vec2-xls-r-300m outputs invalid words for Bengali data
{ "login": "manjuke", "id": 6142443, "node_id": "MDQ6VXNlcjYxNDI0NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/6142443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manjuke", "html_url": "https://github.com/manjuke", "followers_url": "https://api.github.com/users/manjuke/followers", "following_url": "https://api.github.com/users/manjuke/following{/other_user}", "gists_url": "https://api.github.com/users/manjuke/gists{/gist_id}", "starred_url": "https://api.github.com/users/manjuke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manjuke/subscriptions", "organizations_url": "https://api.github.com/users/manjuke/orgs", "repos_url": "https://api.github.com/users/manjuke/repos", "events_url": "https://api.github.com/users/manjuke/events{/privacy}", "received_events_url": "https://api.github.com/users/manjuke/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @manjuke! Cool to see you're using XLS-R for fine-tuning on Bengali. The \"issues\" page on Transformers is reserved for issues related to the Transformers modelling code. For questions related to fine-tuning experiments, the forum is the best place to ask: https://discuss.huggingface.co\r\n\r\nCould you copy your question over there? If you tag me (@sanchit-gandhi) I'll aim to answer quickly! Could you also provide a script / colab / command to reproduce this behaviour? From your eval/loss curve, it looks like you're overfitting quite drastically on your training set. This shouldn't happen for a dataset with 1k training hours, so something is definitely up!", "Sure I have created a new ticket @ https://discuss.huggingface.co/t/finetunig-of-wav2vec2-xls-r-300m-outputs-invalid-words-for-bengali-data/26507", "@sanchit-gandhi Could you please reply to my issue raised on hugging face. Awaiting response. Thanks", "Replied on the forum!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### System Info I have used wav2vec2 pretrained model of wav2vec2-xls-r-300m, and finetuned it to 1000hrs Bengali dataset. Training took 4 full days with 20 epochs. But, there is issue in decoding. It is decoding in some arbitrary fashion, basically outputs random combination of Bengali letters (which does not have any meaning as confirmed by Bengali natives). It is showing a WER of 100% for all the sentences. My code is based on the notebook at https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb @patrickvonplaten, @anton-l @sanchit-gandhi Pls suggest on what could have gone wrong. Should I use fairseq & redo the experiments? Thanks Eval loss graph on tensorboard looks like below. ![image](https://user-images.githubusercontent.com/6142443/203315605-d0453531-b0f6-47ad-8e6c-f04c94409938.png) ### Who can help? @patrickvonplaten, @anton-l, @sanchit-gandhi ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Finetuning on XLSR 300m models using Bengali dataset resulted this behaviour ### Expected behavior WER should have been less than 100%, and should have outputted reasonably readable hypothesis words
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20382/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20381
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20381/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20381/comments
https://api.github.com/repos/huggingface/transformers/issues/20381/events
https://github.com/huggingface/transformers/pull/20381
1,459,646,652
PR_kwDOCUB6oc5DcrTU
20,381
Revert `keys_to_ignore` for M2M100
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20381). All of your documentation changes will be reflected on that endpoint." ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? This PR reverts a change that has been made on `M2M100Model` , to reproduce you can run: ``` import torch from transformers import AutoModelForSeq2SeqLM, AutoTokenizer src_lang = "eng_Latn" tgt_lang = "spa_Latn" model_id = "facebook/nllb-200-3.3B" tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang=src_lang) model = AutoModelForSeq2SeqLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto") ``` on `main`, users get: ``` │ /home/younes_huggingface_co/debug_issues/code/transformers/src/transformers/modeling_utils.py:24 │ │ 59 in _load_pretrained_model │ │ │ │ 2456 │ │ │ for key in missing_keys: │ │ 2457 │ │ │ │ if key.startswith(prefix): │ │ 2458 │ │ │ │ │ key = ".".join(key.split(".")[1:]) │ │ ❱ 2459 │ │ │ │ param = model_state_dict[key] │ │ 2460 │ │ │ │ if param.device == torch.device("meta"): │ │ 2461 │ │ │ │ │ if not load_in_8bit: │ │ 2462 │ │ │ │ │ │ set_module_tensor_to_device(model, key, "cpu", torch.empty(*para │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ KeyError: 'encoder.embed_positions.weights' ``` cc @Narsil
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20381/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20381", "html_url": "https://github.com/huggingface/transformers/pull/20381", "diff_url": "https://github.com/huggingface/transformers/pull/20381.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20381.patch", "merged_at": 1669121784000 }
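The `KeyError: 'encoder.embed_positions.weights'` in the record above (PR 20381) comes from a loop that strips the base-model prefix from each missing key and then indexes the model's state dict, which fails when a key listed in `keys_to_ignore` is absent from the state dict entirely. A stdlib-only sketch of that prefix-stripping step with a guarded lookup; the function name and sample dict are hypothetical:

```python
# Hypothetical sketch of the prefix-stripping lookup from the traceback:
# key = ".".join(key.split(".")[1:]); param = model_state_dict[key]
def resolve_missing_param(model_state_dict, key, prefix):
    # Strip the base-model prefix (e.g. "model.") exactly as the loop does.
    stripped = ".".join(key.split(".")[1:]) if key.startswith(prefix) else key
    # Guarded lookup: returns None instead of raising KeyError when the
    # stripped key (e.g. an ignored buffer) is not in the state dict.
    return model_state_dict.get(stripped)

state_dict = {"encoder.layer.0.weight": 1}
print(resolve_missing_param(state_dict, "model.encoder.layer.0.weight", "model"))        # 1
print(resolve_missing_param(state_dict, "model.encoder.embed_positions.weights", "model"))  # None
```

The PR itself sidesteps the problem by reverting the `keys_to_ignore` change for M2M100 so the key is present again, rather than guarding the lookup.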
https://api.github.com/repos/huggingface/transformers/issues/20380
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20380/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20380/comments
https://api.github.com/repos/huggingface/transformers/issues/20380/events
https://github.com/huggingface/transformers/pull/20380
1,459,605,199
PR_kwDOCUB6oc5DciDc
20,380
[ResNet] Improve backbone
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? This PR improves the ResNetBackbone, by not assuming stages are always 4, and improving tests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20380/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20380/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20380", "html_url": "https://github.com/huggingface/transformers/pull/20380", "diff_url": "https://github.com/huggingface/transformers/pull/20380.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20380.patch", "merged_at": 1669132808000 }
https://api.github.com/repos/huggingface/transformers/issues/20379
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20379/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20379/comments
https://api.github.com/repos/huggingface/transformers/issues/20379/events
https://github.com/huggingface/transformers/pull/20379
1,459,591,830
PR_kwDOCUB6oc5DcfF_
20,379
Add `accelerate` support for `ESM`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? This PR adds `accelerate` support for `ESM` models, so that the largest `ESM` models (15B params) can be loaded in 8-bit, therefore easing accessibility for large protein models. This also introduces the first protein model that can be loaded in 8bit. ``` # pip install accelerate bitsandbytes from transformers import AutoModel model = AutoModel.from_pretrained("facebook/esm2_t48_15B_UR50D", device_map="auto", load_in_8bit=True) ``` cc @sgugger @Rocketknight1 slow tests pass
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20379/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20379", "html_url": "https://github.com/huggingface/transformers/pull/20379", "diff_url": "https://github.com/huggingface/transformers/pull/20379.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20379.patch", "merged_at": 1669122360000 }
https://api.github.com/repos/huggingface/transformers/issues/20378
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20378/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20378/comments
https://api.github.com/repos/huggingface/transformers/issues/20378/events
https://github.com/huggingface/transformers/pull/20378
1,459,569,843
PR_kwDOCUB6oc5DcaQ9
20,378
Bump pillow from 9.0.1 to 9.3.0 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
Bumps [pillow](https://github.com/python-pillow/Pillow) from 9.0.1 to 9.3.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/releases">pillow's releases</a>.</em></p> <blockquote> <h2>9.3.0</h2> <p><a href="https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html">https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html</a></p> <h2>Changes</h2> <ul> <li>Initialize libtiff buffer when saving <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6699">#6699</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Limit SAMPLESPERPIXEL to avoid runtime DOS <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6700">#6700</a> [<a href="https://github.com/wiredfool"><code>@​wiredfool</code></a>]</li> <li>Inline fname2char to fix memory leak <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6329">#6329</a> [<a href="https://github.com/nulano"><code>@​nulano</code></a>]</li> <li>Fix memory leaks related to text features <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6330">#6330</a> [<a href="https://github.com/nulano"><code>@​nulano</code></a>]</li> <li>Use double quotes for version check on old CPython on Windows <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6695">#6695</a> [<a href="https://github.com/hugovk"><code>@​hugovk</code></a>]</li> <li>GHA: replace deprecated set-output command with GITHUB_OUTPUT file <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6697">#6697</a> [<a href="https://github.com/nulano"><code>@​nulano</code></a>]</li> <li>Remove backup implementation of Round for Windows platforms <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6693">#6693</a> [<a href="https://github.com/cgohlke"><code>@​cgohlke</code></a>]</li> <li>Upload fribidi.dll to GitHub Actions <a 
href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6532">#6532</a> [<a href="https://github.com/nulano"><code>@​nulano</code></a>]</li> <li>Fixed set_variation_by_name offset <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6445">#6445</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Windows build improvements <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6562">#6562</a> [<a href="https://github.com/nulano"><code>@​nulano</code></a>]</li> <li>Fix malloc in _imagingft.c:font_setvaraxes <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6690">#6690</a> [<a href="https://github.com/cgohlke"><code>@​cgohlke</code></a>]</li> <li>Only use ASCII characters in C source file <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6691">#6691</a> [<a href="https://github.com/cgohlke"><code>@​cgohlke</code></a>]</li> <li>Release Python GIL when converting images using matrix operations <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6418">#6418</a> [<a href="https://github.com/hmaarrfk"><code>@​hmaarrfk</code></a>]</li> <li>Added ExifTags enums <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6630">#6630</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Do not modify previous frame when calculating delta in PNG <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6683">#6683</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Added support for reading BMP images with RLE4 compression <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6674">#6674</a> [<a href="https://github.com/npjg"><code>@​npjg</code></a>]</li> <li>Decode JPEG compressed BLP1 data in original mode <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6678">#6678</a> 
[<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>pylint warnings <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6659">#6659</a> [<a href="https://github.com/marksmayo"><code>@​marksmayo</code></a>]</li> <li>Added GPS TIFF tag info <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6661">#6661</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Added conversion between RGB/RGBA/RGBX and LAB <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6647">#6647</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Do not attempt normalization if mode is already normal <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6644">#6644</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Fixed seeking to an L frame in a GIF <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6576">#6576</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Consider all frames when selecting mode for PNG save_all <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6610">#6610</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Don't reassign crc on ChunkStream close <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6627">#6627</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Raise a warning if NumPy failed to raise an error during conversion <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6594">#6594</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Only read a maximum of 100 bytes at a time in IMT header <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6623">#6623</a> [<a 
href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Show all frames in ImageShow <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6611">#6611</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Allow FLI palette chunk to not be first <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6626">#6626</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>If first GIF frame has transparency for RGB_ALWAYS loading strategy, use RGBA mode <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6592">#6592</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Round box position to integer when pasting embedded color <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6517">#6517</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Removed EXIF prefix when saving WebP <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6582">#6582</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Pad IM palette to 768 bytes when saving <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6579">#6579</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Added DDS BC6H reading <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6449">#6449</a> [<a href="https://github.com/ShadelessFox"><code>@​ShadelessFox</code></a>]</li> <li>Added support for opening WhiteIsZero 16-bit integer TIFF images <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6642">#6642</a> [<a href="https://github.com/JayWiz"><code>@​JayWiz</code></a>]</li> <li>Raise an error when allocating translucent color to RGB palette <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6654">#6654</a> [<a 
href="https://github.com/jsbueno"><code>@​jsbueno</code></a>]</li> <li>Moved mode check outside of loops <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6650">#6650</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Added reading of TIFF child images <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6569">#6569</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Improved ImageOps palette handling <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6596">#6596</a> [<a href="https://github.com/PososikTeam"><code>@​PososikTeam</code></a>]</li> <li>Defer parsing of palette into colors <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6567">#6567</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Apply transparency to P images in ImageTk.PhotoImage <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6559">#6559</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Use rounding in ImageOps contain() and pad() <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6522">#6522</a> [<a href="https://github.com/bibinhashley"><code>@​bibinhashley</code></a>]</li> <li>Fixed GIF remapping to palette with duplicate entries <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6548">#6548</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Allow remap_palette() to return an image with less than 256 palette entries <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6543">#6543</a> [<a href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> <li>Corrected BMP and TGA palette size when saving <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6500">#6500</a> [<a 
href="https://github.com/radarhere"><code>@​radarhere</code></a>]</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst">pillow's changelog</a>.</em></p> <blockquote> <h2>9.3.0 (2022-10-29)</h2> <ul> <li> <p>Limit SAMPLESPERPIXEL to avoid runtime DOS <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6700">#6700</a> [wiredfool]</p> </li> <li> <p>Initialize libtiff buffer when saving <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6699">#6699</a> [radarhere]</p> </li> <li> <p>Inline fname2char to fix memory leak <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6329">#6329</a> [nulano]</p> </li> <li> <p>Fix memory leaks related to text features <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6330">#6330</a> [nulano]</p> </li> <li> <p>Use double quotes for version check on old CPython on Windows <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6695">#6695</a> [hugovk]</p> </li> <li> <p>Remove backup implementation of Round for Windows platforms <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6693">#6693</a> [cgohlke]</p> </li> <li> <p>Fixed set_variation_by_name offset <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6445">#6445</a> [radarhere]</p> </li> <li> <p>Fix malloc in _imagingft.c:font_setvaraxes <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6690">#6690</a> [cgohlke]</p> </li> <li> <p>Release Python GIL when converting images using matrix operations <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6418">#6418</a> [hmaarrfk]</p> </li> <li> <p>Added ExifTags enums <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6630">#6630</a> 
[radarhere]</p> </li> <li> <p>Do not modify previous frame when calculating delta in PNG <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6683">#6683</a> [radarhere]</p> </li> <li> <p>Added support for reading BMP images with RLE4 compression <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6674">#6674</a> [npjg, radarhere]</p> </li> <li> <p>Decode JPEG compressed BLP1 data in original mode <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6678">#6678</a> [radarhere]</p> </li> <li> <p>Added GPS TIFF tag info <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6661">#6661</a> [radarhere]</p> </li> <li> <p>Added conversion between RGB/RGBA/RGBX and LAB <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6647">#6647</a> [radarhere]</p> </li> <li> <p>Do not attempt normalization if mode is already normal <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6644">#6644</a> [radarhere]</p> </li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/python-pillow/Pillow/commit/d594f4cb8dc47fb0c69ae58d9fff86faae4515bd"><code>d594f4c</code></a> Update CHANGES.rst [ci skip]</li> <li><a href="https://github.com/python-pillow/Pillow/commit/909dc64ed5f676169aa3d9b0c26f132a06321b83"><code>909dc64</code></a> 9.3.0 version bump</li> <li><a href="https://github.com/python-pillow/Pillow/commit/1a51ce7b955c65c8f2c6bc7772735b197b8a6aa3"><code>1a51ce7</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6699">#6699</a> from hugovk/security-libtiff_buffer</li> <li><a href="https://github.com/python-pillow/Pillow/commit/2444cddab2f83f28687c7c20871574acbb6dbcf3"><code>2444cdd</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6700">#6700</a> from hugovk/security-samples_per_pixel-sec</li> <li><a href="https://github.com/python-pillow/Pillow/commit/744f455830871d61a8de0a5e629d4c2e33817cbb"><code>744f455</code></a> Added release notes</li> <li><a href="https://github.com/python-pillow/Pillow/commit/0846bfae48513f2f51ca8547ed3b8954fa501fda"><code>0846bfa</code></a> Add to release notes</li> <li><a href="https://github.com/python-pillow/Pillow/commit/799a6a01052cea3f417a571d7c64cd14acc18c64"><code>799a6a0</code></a> Fix linting</li> <li><a href="https://github.com/python-pillow/Pillow/commit/00b25fd3ac3648bc28eff5d4c4d816e605e3f05f"><code>00b25fd</code></a> Hide UserWarning in logs</li> <li><a href="https://github.com/python-pillow/Pillow/commit/05b175ef88c22f5c416bc9b8d5b897dea1abbf2c"><code>05b175e</code></a> Tighter test case</li> <li><a href="https://github.com/python-pillow/Pillow/commit/13f2c5ae14901c89c38f898496102afd9daeaf6d"><code>13f2c5a</code></a> Prevent DOS with large SAMPLESPERPIXEL in Tiff IFD</li> <li>Additional commits viewable in <a href="https://github.com/python-pillow/Pillow/compare/9.0.1...9.3.0">compare 
view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pillow&package-manager=pip&previous-version=9.0.1&new-version=9.3.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20378/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20378", "html_url": "https://github.com/huggingface/transformers/pull/20378", "diff_url": "https://github.com/huggingface/transformers/pull/20378.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20378.patch", "merged_at": 1669122942000 }
https://api.github.com/repos/huggingface/transformers/issues/20377
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20377/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20377/comments
https://api.github.com/repos/huggingface/transformers/issues/20377/events
https://github.com/huggingface/transformers/issues/20377
1,459,543,046
I_kwDOCUB6oc5W_twG
20,377
PreTrainedModel: provide a more intuitive way of getting the current size of embeddings
{ "login": "damian0815", "id": 144366, "node_id": "MDQ6VXNlcjE0NDM2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/144366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/damian0815", "html_url": "https://github.com/damian0815", "followers_url": "https://api.github.com/users/damian0815/followers", "following_url": "https://api.github.com/users/damian0815/following{/other_user}", "gists_url": "https://api.github.com/users/damian0815/gists{/gist_id}", "starred_url": "https://api.github.com/users/damian0815/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/damian0815/subscriptions", "organizations_url": "https://api.github.com/users/damian0815/orgs", "repos_url": "https://api.github.com/users/damian0815/repos", "events_url": "https://api.github.com/users/damian0815/events{/privacy}", "received_events_url": "https://api.github.com/users/damian0815/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for flagging, I had kind of the same problem recently when implementing #20043. I used\r\n```\r\nmodel.get_input_embeddings().weight.shape[0]\r\n```\r\nto get the embedding size, not sure if there is anything easier but having a method that does this would certainly be more helpful if you want to contribute it!\r\n", "PR made", "I don't think you actually opened it, you just created a new branch :-)", "ok PR actually made :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### Feature request Please add an intuitive and obvious method to get the size of the current token embeddings on `PreTrainedModel`. For example, a method called `get_token_embeddings_size()` that can be used as a complement to `resize_token_embeddings()`. ### Motivation Current API design around the `resize_token_embeddings()` method requires doing the following to increase the size of the token embeddings by 1: ```python # add 1 new embedding current_embeddings = my_xformer.resize_token_embeddings(None) new_embeddings_size = current_embeddings.num_embeddings + 1 my_xformer.resize_token_embeddings(new_embeddings_size) ``` This is counterintuitive and bleeds implementation details to the call site. It requires me to know 1. that calling a "resize" method with the argument `None` returns the object to be resized (which is not intuitive), and 2. that "size" means the property `num_embeddings` on the returned object (admittedly it's not difficult to guess, but it is still a *guess*, and is in fact an implementation detail that I shouldn't need to know). It would be better if I could do something like this: ```python # add 1 new embedding current_embeddings_size = my_xformer.get_token_embeddings_size() new_embeddings_size = current_embeddings_size + 1 my_xformer.resize_token_embeddings(new_embeddings_size) ``` This provides an intuitively-named method to determine the current "size", and appropriately hides the implementation detail that "size" means the `num_embeddings` property on the object to be resized. ### Your contribution Here's a proposed implementation: ```python def get_token_embeddings_size(self) -> int: return self.get_input_embeddings().num_embeddings ``` I can make a PR if it would help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20377/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20376
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20376/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20376/comments
https://api.github.com/repos/huggingface/transformers/issues/20376/events
https://github.com/huggingface/transformers/pull/20376
1,459,529,527
PR_kwDOCUB6oc5DcRTd
20,376
support `t5` for `text-generation` pipeline
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? An attempt to support `T5` for the `text-generation` pipeline. I don't really expect this PR to get merged as it is very hacky and IMO not a good idea to support `T5` for `text-generation`, but I would love some insights on what we can potentially do to support the `text-generation` pipeline for `T5`. A fix might also be to implement `T5ForCausalLM`, but I'm not sure if this makes sense!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20376/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20376", "html_url": "https://github.com/huggingface/transformers/pull/20376", "diff_url": "https://github.com/huggingface/transformers/pull/20376.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20376.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20375
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20375/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20375/comments
https://api.github.com/repos/huggingface/transformers/issues/20375/events
https://github.com/huggingface/transformers/pull/20375
1,459,502,398
PR_kwDOCUB6oc5DcLU4
20,375
Raised Exceptions under conditions that are contrary to specified conditions that assert statements
{ "login": "JuheonChu", "id": 35699839, "node_id": "MDQ6VXNlcjM1Njk5ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JuheonChu", "html_url": "https://github.com/JuheonChu", "followers_url": "https://api.github.com/users/JuheonChu/followers", "following_url": "https://api.github.com/users/JuheonChu/following{/other_user}", "gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions", "organizations_url": "https://api.github.com/users/JuheonChu/orgs", "repos_url": "https://api.github.com/users/JuheonChu/repos", "events_url": "https://api.github.com/users/JuheonChu/events{/privacy}", "received_events_url": "https://api.github.com/users/JuheonChu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your PR. As you can see, it shows a diff of 191 files. Could you open a fresh PR with just your proposed changes?", "Yes, I will do that. Thanks for the feedback! ", "I will make a new PR after improving the validity checks on ci/circleci: check_code_quality." ]
1,669
1,669
1,669
CONTRIBUTOR
null
Co-author: @Batese2001 Test file: src/transformers/models/distilbert/modeling_distilbert.py Local testing: ![testDistilbert](https://user-images.githubusercontent.com/35699839/203280712-a8a9b823-0f4f-4d69-8913-d444d1cd9988.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20375/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20375/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20375", "html_url": "https://github.com/huggingface/transformers/pull/20375", "diff_url": "https://github.com/huggingface/transformers/pull/20375.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20375.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20374
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20374/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20374/comments
https://api.github.com/repos/huggingface/transformers/issues/20374/events
https://github.com/huggingface/transformers/issues/20374
1,459,499,929
I_kwDOCUB6oc5W_jOZ
20,374
[layoutlmv3] SER and RE task combined into one model
{ "login": "githublsk", "id": 77612906, "node_id": "MDQ6VXNlcjc3NjEyOTA2", "avatar_url": "https://avatars.githubusercontent.com/u/77612906?v=4", "gravatar_id": "", "url": "https://api.github.com/users/githublsk", "html_url": "https://github.com/githublsk", "followers_url": "https://api.github.com/users/githublsk/followers", "following_url": "https://api.github.com/users/githublsk/following{/other_user}", "gists_url": "https://api.github.com/users/githublsk/gists{/gist_id}", "starred_url": "https://api.github.com/users/githublsk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/githublsk/subscriptions", "organizations_url": "https://api.github.com/users/githublsk/orgs", "repos_url": "https://api.github.com/users/githublsk/repos", "events_url": "https://api.github.com/users/githublsk/events{/privacy}", "received_events_url": "https://api.github.com/users/githublsk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I think it's totally doable to share the Transformer encoder and use 2 separate heads. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing this issue as it seems resolved. Feel free to reopen." ]
1,669
1,673
1,673
NONE
null
### System Info datasets==1.15.1 transformers==4.12.5 seqeval==1.2.2 deepspeed==0.5.7 tensorboard==2.7.0 seqeval==1.2.2 sentencepiece timm==0.4.12 Pillow einops textdistance shapely ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction combine SER and RE task into one model using layoutlmv3 ### Expected behavior Hi @NielsRogge, we want to train the SER and RE tasks using layoutlmv3, but deploying these two separate models with TensorRT is quite heavy. To solve this, if we combine the two tasks into one model, will the performance on SER and RE decrease much compared to the two separate models? We have no experience with this, so could you give us some advice? Thank you very much.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20374/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20373
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20373/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20373/comments
https://api.github.com/repos/huggingface/transformers/issues/20373/events
https://github.com/huggingface/transformers/pull/20373
1,459,473,837
PR_kwDOCUB6oc5DcFEM
20,373
change the way sentinel tokens can be retrieved
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@SaulLu @sgugger Done", "As pointed above, you're still missing the stronger test to detect the sentinel tokens.", "Thanks a lot for working on this. Maybe I'm just nitpicking, but wouldn't it be great if the returned sentinel tokens were sorted?\r\n\r\n```\r\nself.tokenizer.get_sentinel_tokens()\r\n['<extra_id_1>', '<extra_id_26>', '<extra_id_55>', '<extra_id_87>', '<extra_id_50>', '<extra_id_49>', '<extra_id_74>', '<extra_id_66>', '<extra_id_83>', '<extra_id_30>', '<extra_id_3>', '<extra_id_0>', '<extra_id_90>', '<extra_id_14>', '<extra_id_71>', '<extra_id_6>', '<extra_id_18>', '<extra_id_4>', '<extra_id_75>', '<extra_id_99>', '<extra_id_63>', '<extra_id_58>', '<extra_id_48>', '<extra_id_62>', '<extra_id_73>', '<extra_id_20>', '<extra_id_70>', '<extra_id_21>', '<extra_id_38>', '<extra_id_34>', '<extra_id_88>', '<extra_id_28>', '<extra_id_97>', '<extra_id_91>', '<extra_id_65>', '<extra_id_81>', '<extra_id_98>', '<extra_id_23>', '<extra_id_96>', '<extra_id_12>', '<extra_id_19>', '<extra_id_79>', '<extra_id_78>', '<extra_id_68>', '<extra_id_95>', '<extra_id_35>', '<extra_id_42>', '<extra_id_27>', '<extra_id_85>', '<extra_id_67>', '<extra_id_17>', '<extra_id_36>', '<extra_id_93>', '<extra_id_37>', '<extra_id_60>', '<extra_id_77>', '<extra_id_32>', '<extra_id_92>', '<extra_id_33>', '<extra_id_40>', '<extra_id_86>', '<extra_id_53>', '<extra_id_10>', '<extra_id_31>', '<extra_id_72>', '<extra_id_24>', '<extra_id_80>', '<extra_id_13>', '<extra_id_45>', '<extra_id_61>', '<extra_id_52>', '<extra_id_8>', '<extra_id_44>', '<extra_id_57>', '<extra_id_16>', '<extra_id_84>', '<extra_id_51>', '<extra_id_56>', '<extra_id_41>', '<extra_id_64>', '<extra_id_39>', '<extra_id_9>', '<extra_id_25>', '<extra_id_59>', '<extra_id_46>', '<extra_id_7>', '<extra_id_54>', '<extra_id_29>', '<extra_id_89>', '<extra_id_22>', '<extra_id_15>', '<extra_id_43>', '<extra_id_47>', '<extra_id_76>', '<extra_id_5>', 
'<extra_id_94>', '<extra_id_11>', '<extra_id_82>', '<extra_id_2>', '<extra_id_69>']\r\n```" ]
1,669
1,671
1,669
CONTRIBUTOR
null
# What does this PR do? fixes the issue #19298 @SaulLu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20373/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20373", "html_url": "https://github.com/huggingface/transformers/pull/20373", "diff_url": "https://github.com/huggingface/transformers/pull/20373.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20373.patch", "merged_at": 1669214144000 }
https://api.github.com/repos/huggingface/transformers/issues/20372
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20372/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20372/comments
https://api.github.com/repos/huggingface/transformers/issues/20372/events
https://github.com/huggingface/transformers/issues/20372
1,459,422,372
I_kwDOCUB6oc5W_QSk
20,372
Community contribution - `BetterTransformer` integration for more models!
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "> NotImplementedError: The Better Transformers implementation for the model DebertaV2Model has not beenimplemented yet. Please open an issue requesting the addition of this model with its `BetterTransformer`implementation.\r\n\r\nIt's not on your list, but would you complain if I did this for DebertaV2Model?", "It is not in the list because `DebertaV2` does not have a regular attention mechanism, so it is not possible to use it with BetterTransformer.", "Yes I second what @michaelbenayoun said, please see related: https://github.com/huggingface/optimum/issues/487", "makes a lot of sense - sorry I should have thought about that a bit harder before posting!", "I noticed that Better Transformers for the T5 model has not been implemented yet. Will it be implemented in the future (if possible)? Thanks.", "Hi @GenVr \r\nThanks a lot for your reply! Unfortunately `T5` cannot be supported because of the nature of its attention mechanism. In fact `T5` uses attention bias and this is not supported for `BetterTransformer`\r\nThanks!", "Hi :) I would like to work on the implementation for [RemBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/rembert/modeling_rembert.py#L415).\r\n\r\nWhat are the next steps in getting started?\r\n\r\nThank you!", "Hey @RJZauner !\r\nThanks so much for your interest in helping us integrating more models for `BetterTransformer` ! \r\nRemBert seems to use the same attention mechanism as BERT, the only difference should be on the Embedding layer, which is fine for us! So I would say you can move ahead and start forking [optimum](https://github.com/huggingface/optimum) library, create a new branch and open a draft PR. 
Feel free to have some inspiration from what has been done by https://github.com/huggingface/optimum/pull/494 and https://github.com/huggingface/optimum/pull/508 to see what exactly needs to be done ;) Ping us (myself, @michaelbenayoun & @fxmarty) whenever you feel that you need help!", "Hi @younesbelkada, I would like to work on the easiest of the models mentioned above. Which one do you recommend? What I said might sound a bit weird but I want to tackle a simple one since I'm not very familiar with these models 🙏 ", "Hello, I would like to tackle the implementation for [TapasLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/tapas/modeling_tapas.py#L524).\r\n\r\nMay I ask you how I can start the further steps?\r\n\r\nThank you for your time.", "Hi @shogohida and @JuheonChu ,\r\n\r\nYou can read this [page](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute) for learning how to contribute. You can then open a PR with your code, and ask questions there, we will be glad to help!\r\n\r\nAlso @shogohida, I think they are all similar in terms of difficulty, so do not block on that, maybe choose a model with the modality the most familiar to you.", "Seconding what @michaelbenayoun said, feel free to check some example PRs https://github.com/huggingface/optimum/pull/508 or https://github.com/huggingface/optimum/pull/494 for reference! \r\n@shogohida , you can take RocBERT, actually it copies from Bert so the conversion will be very easy :) ", "Thanks guys for your replies! I will take RocBERT then!", "Thanks @michaelbenayoun ! I will take TapasLayer !", "Hi! Thank you so much for opening this issue. \r\n\r\n1. I was implementing the RemBERT and had some questions. But then I noticed that @RJZauner had already been working on that. I am going to hold my work on that and I am looking forward to see RJZauner's implementations!\r\n2. I will work on the mBART.\r\n3. 
I also found some dead links and some points unclear on this [page](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute). How should I report and help to solve the problems I found? ", "Hello @younesbelkada,\r\n\r\nI would like to take [DetrLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/detr/modeling_detr.py#L610). Nice tutorial btw 😀", "Hi @blakechi !\nSure you can take it ;) let me know if you need help opening a PR!", "Hi @ravenouse !\nThanks for your help! Yes you can take MBART ;) \nRegarding the dead link could you open an issue at optimum? \nThanks!", "\r\n\r\n\r\n\r\n> Hey @RJZauner !\r\n> Thanks so much for your interest in helping us integrating more models for `BetterTransformer` !\r\n> RemBert seems to use the same attention mechanism as BERT, the only difference should be on the Embedding layer, which is fine for us! So I would say you can move ahead and start forking [optimum](https://github.com/huggingface/optimum) library, create a new branch and open a draft PR. Feel free to have some inspiration from what has been done by [huggingface/optimum#494](https://github.com/huggingface/optimum/pull/494) and [huggingface/optimum#508](https://github.com/huggingface/optimum/pull/508) to see what exactly needs to be done ;) Ping us (myself, @michaelbenayoun & @fxmarty) whenever you feel that you need help!\r\n\r\nThank you for the info!", "Hello @michaelbenayoun and @younesbelkada ! \r\n\r\nFirst time contributing for me :) \r\n\r\nI would like to handle the implementation for [Speech2Text](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350)\r\n\r\nWhat are the first steps ? 
Create a PR ?\r\n\r\nThanks in advance.", "> Hello @michaelbenayoun and @younesbelkada !\r\n> \r\n> First time contributing for me :)\r\n> \r\n> I would like to handle the implementation for [Speech2Text](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350)\r\n> \r\n> What are the first steps ? Create a PR ?\r\n> \r\n> Thanks in advance.\r\n\r\nHello, I am absolutely sure that they will give you a better suggestion than what I have.\r\nI would like to share that it is good to read `CONTRIBUTING.md` in the transformer repository.\r\nI read through every content very carefully and made my first contribution!", "> > Hello @michaelbenayoun and @younesbelkada !\r\n> > First time contributing for me :)\r\n> > I would like to handle the implementation for [Speech2Text](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350)\r\n> > What are the first steps ? Create a PR ?\r\n> > Thanks in advance.\r\n> \r\n> Hello, I am absolutely sure that they will give you a better suggestion than what I have. I would like to share that it is good to read `CONTRIBUTING.md` in the transformer repository. I read through every content very carefully and made my first contribution!\r\n\r\nHello @JuheonChu :) \r\n\r\nI am definitely have a look at it ! thanks", "Hi @lucaspct,\r\n\r\nYes the first steps would be to read [the guide explaining how to contribute to `optimum.bettertransformer`](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute), and then opening a [PR on Optimum](https://github.com/huggingface/optimum/pulls), we will support you there!", "Hi @younesbelkada @michaelbenayoun I'd love to take on the RoFormer model if it isn't claimed yet. 
Will open a PR after I read through the guide!", "I would like to take a crack at the ProphetNet encoder if it has not been claimed yet ", "Thank you very much @miyu386 & @adit299 !\r\nOf course yes you can give a try on that ;) feel free to start to open a PR on `optimum` and we'll guide you from there 💪 ", "I would like to work on the `ASTLayer` if no one has taken it!", "Hi @younesbelkada I'd like to tackle the `FlavaLayer` if it has not been taken!", "Hi @katiele47 \nSure no problem! Feel free to open a PR and tag us there! I will update the table above once the PRs are open ;) ", "Hi, @younesbelkada I'd like to take `GLPNLayer` if no one has claimed it. will open the PR soon for this :)" ]
1,669
1,706
1,696
CONTRIBUTOR
null
## `BetterTransformer` integration for more models! `BetterTransformer` API provides faster inference on CPU & GPU through a simple interface! Models can benefit from very interesting speedups using a one liner and by making sure to install the latest version of PyTorch. A complete guideline on how to convert a new model has been created on the [BetterTransformer documentation](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute)! Here is a list of models that could be potentially supported, pick one of the architecture below and let's discuss about the conversion! Text models 🖊️ : - [x] FSMT - [FSMTEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/fsmt/modeling_fsmt.py#L397) / @Sumanth077 https://github.com/huggingface/optimum/pull/494 - [ ] MobileBERT - [MobileBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/mobilebert/modeling_mobilebert.py#L498) / @raghavanone https://github.com/huggingface/optimum/pull/506 - [x] MBart - [MBartEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/mbart/modeling_mbart.py#L296) + [M2M100EncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/m2m_100/modeling_m2m_100.py#L345) / https://github.com/huggingface/optimum/pull/516 @ravenouse - [ ] ProphetNet - [ProphetNetEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/prophetnet/modeling_prophetnet.py#L1130) - [x] RemBert - [RemBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/rembert/modeling_rembert.py#L415) / @hchings https://github.com/huggingface/optimum/pull/545 - [ ] RocBert - 
[RocBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/roc_bert/modeling_roc_bert.py#LL519C7-L519C19) / @shogohida https://github.com/huggingface/optimum/pull/542 - [ ] RoFormer - [RoFormerLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/roformer/modeling_roformer.py#L448) - [x] Tapas - [TapasLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/tapas/modeling_tapas.py#L524) / https://github.com/huggingface/optimum/pull/520 Vision models 📷 : - [ ] Detr - [DetrLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/detr/modeling_detr.py#L610) - [ ] Flava - [FlavaLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/flava/modeling_flava.py#L597) / https://github.com/huggingface/optimum/pull/538 - [x] GLPN - [GLPNLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/glpn/modeling_glpn.py#L292) (cannot be supported) - [x] ViLT - [ViLTLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/vilt/modeling_vilt.py#L472) / https://github.com/huggingface/optimum/pull/508 Audio models 🔉 : - [ ] Speech2Text - [Speech2TextLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350) - [ ] NEW: Audio Speech Transformer - [ASTLayer](https://github.com/huggingface/transformers/blob/f2e7d270ec795be09e6187dd2459edb43bd861c1/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py#L274) / @ravenouse https://github.com/huggingface/optimum/pull/548 Let us also know if you 
think that some architectures can be supported that we missed. Note that for encoder-decoder based models below, we expect to convert the encoder only. **Support for decoder-based models coming soon!** cc @michaelbenayoun @fxmarty https://github.com/huggingface/optimum/issues/488
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20372/reactions", "total_count": 9, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 6, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20372/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20371
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20371/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20371/comments
https://api.github.com/repos/huggingface/transformers/issues/20371/events
https://github.com/huggingface/transformers/pull/20371
1,459,412,219
PR_kwDOCUB6oc5Db3kt
20,371
Raised specific types of exceptions on distilbert model
{ "login": "JuheonChu", "id": 35699839, "node_id": "MDQ6VXNlcjM1Njk5ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/35699839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JuheonChu", "html_url": "https://github.com/JuheonChu", "followers_url": "https://api.github.com/users/JuheonChu/followers", "following_url": "https://api.github.com/users/JuheonChu/following{/other_user}", "gists_url": "https://api.github.com/users/JuheonChu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JuheonChu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuheonChu/subscriptions", "organizations_url": "https://api.github.com/users/JuheonChu/orgs", "repos_url": "https://api.github.com/users/JuheonChu/repos", "events_url": "https://api.github.com/users/JuheonChu/events{/privacy}", "received_events_url": "https://api.github.com/users/JuheonChu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I made a different PR with the same request here: https://github.com/huggingface/transformers/pull/20375\r\n" ]
1,669
1,669
1,669
CONTRIBUTOR
null
Co-author: Batese @batese2001 Local test file: ![testDistilbert](https://user-images.githubusercontent.com/35699839/203268440-1b6a01e1-bc25-4a71-a59b-3af64175fe53.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20371/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20371", "html_url": "https://github.com/huggingface/transformers/pull/20371", "diff_url": "https://github.com/huggingface/transformers/pull/20371.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20371.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20370
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20370/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20370/comments
https://api.github.com/repos/huggingface/transformers/issues/20370/events
https://github.com/huggingface/transformers/issues/20370
1,459,389,596
I_kwDOCUB6oc5W_ISc
20,370
Hugging face community on fb group
{ "login": "Mohammed20201991", "id": 59222637, "node_id": "MDQ6VXNlcjU5MjIyNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mohammed20201991", "html_url": "https://github.com/Mohammed20201991", "followers_url": "https://api.github.com/users/Mohammed20201991/followers", "following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}", "gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions", "organizations_url": "https://api.github.com/users/Mohammed20201991/orgs", "repos_url": "https://api.github.com/users/Mohammed20201991/repos", "events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}", "received_events_url": "https://api.github.com/users/Mohammed20201991/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey there! Thanks a lot for creating and sharing the group :fire: \r\nFYI, there is also a community Discord server with over 16000 members. You can join in [hf.co/join/discord](http://hf.co/join/discord) :hugs: ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
Hello everyone you can join our group to discuss new state-of-the-art methods in deep learning, machine learning, and natural language processing through the below : [link](https://fb.me/g/p_Qe5xCLF7zxgu4sYn/NRWMjdeM)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20370/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20369
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20369/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20369/comments
https://api.github.com/repos/huggingface/transformers/issues/20369/events
https://github.com/huggingface/transformers/pull/20369
1,459,345,939
PR_kwDOCUB6oc5DbpDg
20,369
typo
{ "login": "WrRan", "id": 7569098, "node_id": "MDQ6VXNlcjc1NjkwOTg=", "avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WrRan", "html_url": "https://github.com/WrRan", "followers_url": "https://api.github.com/users/WrRan/followers", "following_url": "https://api.github.com/users/WrRan/following{/other_user}", "gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}", "starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WrRan/subscriptions", "organizations_url": "https://api.github.com/users/WrRan/orgs", "repos_url": "https://api.github.com/users/WrRan/repos", "events_url": "https://api.github.com/users/WrRan/events{/privacy}", "received_events_url": "https://api.github.com/users/WrRan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20369). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20369/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20369", "html_url": "https://github.com/huggingface/transformers/pull/20369", "diff_url": "https://github.com/huggingface/transformers/pull/20369.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20369.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20368
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20368/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20368/comments
https://api.github.com/repos/huggingface/transformers/issues/20368/events
https://github.com/huggingface/transformers/pull/20368
1,459,247,074
PR_kwDOCUB6oc5DbS_l
20,368
Add Chinese-CLIP implementation
{ "login": "yangapku", "id": 17445544, "node_id": "MDQ6VXNlcjE3NDQ1NTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/17445544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangapku", "html_url": "https://github.com/yangapku", "followers_url": "https://api.github.com/users/yangapku/followers", "following_url": "https://api.github.com/users/yangapku/following{/other_user}", "gists_url": "https://api.github.com/users/yangapku/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangapku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangapku/subscriptions", "organizations_url": "https://api.github.com/users/yangapku/orgs", "repos_url": "https://api.github.com/users/yangapku/repos", "events_url": "https://api.github.com/users/yangapku/events{/privacy}", "received_events_url": "https://api.github.com/users/yangapku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Regarding modeling, it somehow breaks our philosophy `one model - one file philosophy`. We usually copy the modeling code and add `# copied from`. cc @sgugger \r\n\r\n", "> Regarding modeling, it somehow breaks our philosophy `one model - one file philosophy`. We usually copy the modeling code and add `# copied from`. cc @sgugger\r\n\r\nSo in conclusion, all I need to is to copy all the modules imported from other models and add `# Copied from` statements. The use of feature extractor is still acceptable at this point. Am I right? @ydshieh ", "> > Regarding modeling, it somehow breaks our philosophy `one model - one file philosophy`. We usually copy the modeling code and add `# copied from`. cc @sgugger\r\n> \r\n> So in conclusion, all I need to is to copy all the modules imported from other models and add `# Copied from` statements. The use of feature extractor is still acceptable at this point. Am I right? @ydshieh\r\n\r\nYes :-)", "Hi, @ydshieh @sgugger I have updated my code to remove the import of the configuration and modeling files from other models. Is that able to be merged now? Meanwhile, happy Thanksgiving and best wishes ❤️ !", "> Hi @yangapku Thank you for adding this model! 非常感謝您!\r\n> \r\n> I left a few comments, but there might be more comments next week, as I haven't review all the current files.\r\n\r\nExcuse me, apart from the proposed comments on model implementation codes and documents, are there any more comments on the other code changes? Let me fix them together 😄. @ydshieh ", "Hi @yangapku \r\n\r\nI haven't continued the review process yet after back from the weekend. I will review what you have addressed according the our comments, as well as what I haven't reviewed last time.\r\n\r\nHope you can understand it's not super easy to have a full review in one-go for large PRs like this 🙏. 
Even for small PR, it's also normal for the review being a iterative process :-)\r\n\r\n (and we sometime get distracted to other tasks that pop up 😅).", "> Hi @yangapku\r\n> \r\n> I haven't continued the review process yet after back from the weekend. I will review what you have addressed according the our comments, as well as what I haven't reviewed last time.\r\n> \r\n> Hope you can understand it's not super easy to have a full review in one-go for large PRs like this 🙏. Even for small PR, it's also normal for the review being a iterative process :-)\r\n> \r\n> (and we sometime get distracted to other tasks that pop up 😅).\r\n\r\nI fully understand 👍 . Okay I will manage to address the existing comments and try to work out them all before the tomorrow's review.", "@ydshieh All the comments mentioned above have been addressed. ", "@ydshieh Hi, is the PR able to be merged now? Thank you very much! ❤️ ", "@sgugger I feel there must be very good reason we want to have `CLIPTextTransformer` and `CLIPVisionTransformer`, and use these components in `CLIPTextModel`, `CLIPVisionModel` and `CLIPModel`. \r\n\r\n(Potentially to avoid `CLIPPreTrainedModel` in another `CLIPPreTrainedModel` which might cause some issues - at least if we ever want to have a TF port).\r\n\r\nDo you think here we need to avoid this line\r\n\r\n```\r\nself.text_model = ChineseCLIPTextModel(text_config, add_pooling_layer=False)\r\n```\r\nand to create `ChineseCLIPTextTransformer` and use it? \r\n", "Hi @yangapku , other than the above comment, LGTM! But let's wait @sgugger 's response.\r\n\r\nThere are a few places where I believe we can still use `# copied from` (probably need some tweaks) - I can help on this before merge.", "@ydshieh I think this was done this way just to be more aligned with the original checkpoints in the CLIP case. 
Here it works fine with the checkpoint, so I wouldn't over-complexify things", "@yangapku Before we merge, could you run\r\n\r\n```python\r\nRUN_SLOW=1 python -m pytest -v tests/models/chinese_clip/\r\n```\r\n\r\nI got 5 failures. You can focus on the first/last ones at this moment. Let me know if you need help on fixing them 🙏 \r\n\r\n```\r\nFAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPTextModelTest::test_model_from_pretrained - AttributeError: 'ChineseCLIPConfig' object has no attribute 'vocab_size'\r\nFAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_attentions - AssertionError: Items in the second set but not the first:\r\nFAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_hidden_state - AssertionError: Items in the second set but not the first:\r\nFAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_simple - AssertionError: Items in the second set but not the first:\r\nFAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelIntegrationTest::test_inference - OSError: Can't load tokenizer for 'OFA-Sys/chinese-clip-vit-base-patch16'. If you were trying to load it\r\n```\r\n", "> @yangapku Before we merge, could you run\r\n> \r\n> ```python\r\n> RUN_SLOW=1 python -m pytest -v tests/models/chinese_clip/\r\n> ```\r\n> \r\n> I got 5 failures. You can focus on the first/last ones at this moment. 
Let me know if you need help on fixing them 🙏\r\n> \r\n> ```\r\n> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPTextModelTest::test_model_from_pretrained - AttributeError: 'ChineseCLIPConfig' object has no attribute 'vocab_size'\r\n> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_attentions - AssertionError: Items in the second set but not the first:\r\n> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_hidden_state - AssertionError: Items in the second set but not the first:\r\n> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_simple - AssertionError: Items in the second set but not the first:\r\n> FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelIntegrationTest::test_inference - OSError: Can't load tokenizer for 'OFA-Sys/chinese-clip-vit-base-patch16'. If you were trying to load it\r\n> ```\r\n\r\nOkay I will try to fix them today.", "> > @yangapku Before we merge, could you run\r\n> > ```python\r\n> > RUN_SLOW=1 python -m pytest -v tests/models/chinese_clip/\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > I got 5 failures. You can focus on the first/last ones at this moment. 
Let me know if you need help on fixing them 🙏\r\n> > ```\r\n> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPTextModelTest::test_model_from_pretrained - AttributeError: 'ChineseCLIPConfig' object has no attribute 'vocab_size'\r\n> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_attentions - AssertionError: Items in the second set but not the first:\r\n> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_output_hidden_state - AssertionError: Items in the second set but not the first:\r\n> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelTest::test_torchscript_simple - AssertionError: Items in the second set but not the first:\r\n> > FAILED tests/models/chinese_clip/test_modeling_chinese_clip.py::ChineseCLIPModelIntegrationTest::test_inference - OSError: Can't load tokenizer for 'OFA-Sys/chinese-clip-vit-base-patch16'. If you were trying to load it\r\n> > ```\r\n> \r\n> Okay I will try to fix them today.\r\n\r\n@ydshieh The first and last failed cases have been fixed. Now only the failed test cases with Torchscript still remain. Meanwhile, to fix the first failed case, I have to remove the copied from comment for `ChineseCLIPTextModel`, since it has diverged from `BertModel` with our customed config_class `ChineseCLIPTextConfig`.", "@ydshieh Hi, is the PR able to be merged now? Do I have to fix the test cases related with Torchscript? If so, more help is needed since I am not so familiar with it 😢 .", "I will take a look on those 3 tests @yangapku . ", "@yangapku I pushed the remaining fix. Will merge once the final CI is green 🚀 🚀 🚀 \r\n\r\n非常感謝您的工作! 💯 ", "Thank you very much for your brilliant support! @ydshieh @sgugger ", "Hi @yangapku Just a follow up. From your branch , I see the file\r\n```\r\nconvert_chinese_clip_original_pytorch_to_hf.py\r\n```\r\nis last modified on Nov 22. 
(The change on Nov 29 doesn't count). However, the modeling file changed quite a lot since then due to our review comments. I just want to make sure the conversion script still works correctly, and the original checkpoints and the converted HF checkpoints still have the same outputs on some test examples.\r\n\r\nIt would be super nice if you can double check, but it's your call, it's just a suggestion.\r\n(It's always good to make sure the users get the right checkpoints to use :-))\r\n\r\n", "@ydshieh Hi, I have ensured that this conversion script works correctly 😄 . In fact, today we have also updated the other 3 model scales (ViT-L/14, ViT-L/14@336px, ViT-H/14) on our HF model hub, during which I have used this script to convert our original model to HF format. After the conversion, I have tested all the converted HF checkpoints (the `pytorch_model.bin`), and all of them work as expected.", "Thank you @yangapku !" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds Chinese-CLIP model into Transformers repo. The Chinese-CLIP model was introduced in [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335). Chinese CLIP is an implementation and adaptation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs. It is capable of performing Chinese-based cross-modal retrieval and also playing as a vision backbone for vision tasks like zero-shot image classification, open-domain object detection, etc. This model was contributed by [OFA-Sys](https://huggingface.co/OFA-Sys). The original Github repo of Chinese-CLIP can be found [at this link](https://github.com/OFA-Sys/Chinese-CLIP). Currently we have released our model weights on the [Huggingface ModelHub](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16). Compared with original OpenAI CLIP, we changed the text encoder to Chinese Roberta encoder, thus we reimplemented the config, modeling and preprocessor modules of Chinese-CLIP. Necessary unit tests and documents have been added. 
## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20368/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20368", "html_url": "https://github.com/huggingface/transformers/pull/20368", "diff_url": "https://github.com/huggingface/transformers/pull/20368.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20368.patch", "merged_at": 1669832543000 }
https://api.github.com/repos/huggingface/transformers/issues/20367
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20367/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20367/comments
https://api.github.com/repos/huggingface/transformers/issues/20367/events
https://github.com/huggingface/transformers/issues/20367
1,459,195,306
I_kwDOCUB6oc5W-Y2q
20,367
Can't assign custom vocab_files_names in Wav2Vec2Tokenizers
{ "login": "jp1924", "id": 93233241, "node_id": "U_kgDOBY6gWQ", "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jp1924", "html_url": "https://github.com/jp1924", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "organizations_url": "https://api.github.com/users/jp1924/orgs", "repos_url": "https://api.github.com/users/jp1924/repos", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "received_events_url": "https://api.github.com/users/jp1924/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Might be of interest to @sanchit-gandhi ", "Hey @jp1924 \r\n\r\nI think there's some confusion between `vocab_file`, `vocab_files_names` and `VOCAB_FILES_NAMES`:\r\n* `vocab_file`: an **argument** that denotes the path to the file containing the tokeniser's vocabulary\r\n* `vocab_files_names`: an **attribute** of the class `Wav2Vec2CTCTokenizer` (not an input argument). Note that this is a required attribute of the base class `PreTrainedTokenizerBase`:\r\nhttps://github.com/huggingface/transformers/blob/afce73bd9d891b55dcb8d4d875d17718ffa01ff0/src/transformers/tokenization_utils_base.py#L1388-L1390\r\nWe set this attribute purely to correctly initialise the `Wav2Vec2CTCTokenizer`.\r\n* `VOCAB_FILES_NAMES`: a **dictionary** mapping. Maps the `vocab_file` to the correct name for saving (`vocab.json`)\r\n\r\nOf these three, the only one we should ever have to change is `vocab_file` (specifying the right path to our tokeniser vocabulary). The other two are used internally to correctly initialise the PreTrained tokeniser.\r\n\r\nIf you want to save two distinct vocabulary files, you have two options:\r\n1. Save them in different directories (`output_dir_1` and `output_dir_2`)\r\n2. Pass the argument `filename_prefix` when you save the vocabulary\r\n\r\nOption 1:\r\n```python\r\nfrom transformers import Wav2Vec2Tokenizer, Wav2Vec2CTCTokenizer\r\n\r\ndef main() -> None:\r\n output_dir_1 = r\"./output_dir_1\"\r\n output_dir_2 = r\"./output_dir_2\"\r\n\r\n encoder1_tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n encoder2_tokenizer = Wav2Vec2Tokenizer.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n\r\n encoder1_tokenizer.save_vocabulary(save_directory=output_dir_1)\r\n encoder2_tokenizer.save_vocabulary(save_directory=output_dir_2)\r\n\r\nif \"__main__\" in __name__:\r\n main()\r\n```\r\n-> this will save encoder1's vocab under `output_dir_1` and encoder2's vocab under `output_dir_2`. 
\r\n\r\nOption 2:\r\n```python\r\nfrom transformers import Wav2Vec2Tokenizer, Wav2Vec2CTCTokenizer\r\n\r\ndef main() -> None:\r\n output_dir = r\"./output_dir\"\r\n\r\n encoder1_tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n encoder2_tokenizer = Wav2Vec2Tokenizer.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n\r\n encoder1_tokenizer.save_vocabulary(save_directory=output_dir, filename_prefix=\"encoder1\")\r\n encoder2_tokenizer.save_vocabulary(save_directory=output_dir, filename_prefix=\"encoder2\")\r\n\r\nif \"__main__\" in __name__:\r\n main()\r\n```\r\n-> this will save encoder1's vocab as `encoder1-vocab.json` and encoder2's vocab as `encoder2-vocab.json` (both under `output_dir`)", "I didn't think I could save it using the prefix. Thank you for letting me know!" ]
1,669
1,669
1,669
NONE
null
### System Info python: 3.8 transformers: 4.23.1 ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import Wav2Vec2Tokenizer, Wav2Vec2CTCTokenizer def main() -> None: # [NOTE]: plsace insert save_dir path output_dir = r"./output_dir" encoder1_tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h") encoder2_tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h") encoder1_tokenizer.vocab_files_name = { "vocab_file": "encoder1_vocab.json", "tokenizer_config_file": "encoder1_tokenizer_config.json", } encoder2_tokenizer.vocab_files_name = { "vocab_file": "encoder2_vocab.json", "tokenizer_config_file": "encoder2_tokenizer_config.json", } encoder1_tokenizer.save_vocabulary(save_diretory=output_dir) encoder2_tokenizer.save_vocabulary(save_diretory=output_dir) if "__main__" in __name__: main() ``` ### Expected behavior This is the problem that occurred when the bi-encoder had to be placed on Wav2Vec2. When i saving the model To prevent overwriting due to duplication when saving in tokenizer, i may have named files that are saved differently. I overridden vocab_files_name as shown in Reproduction. But later I found out that the file hasn't been renamed So I looked up the problem and found the following problem tokenization_wav2vec2.py > [L127-161](https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L127-L161) ```python class Wav2Vec2CTCTokenizer(PreTrainedTokenizer): """ Constructs a Wav2Vec2CTC tokenizer. This tokenizer inherits from [`PreTrainedTokenizer`] which contains some of the main methods. Users should refer to the superclass for more information regarding such methods. 
Args: vocab_file (`str`): File containing the vocabulary. bos_token (`str`, *optional*, defaults to `"<s>"`): The beginning of sentence token. eos_token (`str`, *optional*, defaults to `"</s>"`): The end of sentence token. unk_token (`str`, *optional*, defaults to `"<unk>"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (`str`, *optional*, defaults to `"<pad>"`): The token used for padding, for example when batching sequences of different lengths. word_delimiter_token (`str`, *optional*, defaults to `"|"`): The token used for defining the end of a word. do_lower_case (`bool`, *optional*, defaults to `False`): Whether or not to accept lowercase input and lowercase the output when decoding. **kwargs Additional keyword arguments passed along to [`PreTrainedTokenizer`] """ vocab_files_names = VOCAB_FILES_NAMES # [NOTE]: here pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES model_input_names = ["input_ids", "attention_mask"] def __init__( ``` VOCAB_FILES_NAMES sets the default value for vocab_files_names tokenization_wav2vec2.py > [L595-606](https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L595-L606) ```python def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: if not os.path.isdir(save_directory): logger.error(f"Vocabulary path ({save_directory}) should be a directory") return vocab_file = os.path.join( save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] # [NOTE]: here ) with open(vocab_file, "w", encoding="utf-8") as f: f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n") return (vocab_file,) ``` However, if you look at the save_vocabulary function, "vocab_files_names" is not included in the checked part, but 
"VOCAB_FILES_NAMES" is included, so the file name does not change even if you override "vocab_files_names". So I want to change that "VOCAB_FILES_NAMES" part to "vocab_files_names". This applies to both Wav2Vec2CTCTokenizer and Wav2Vec2Tokenizer
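For reference, the `filename_prefix` path through the quoted `save_vocabulary` logic can be exercised in isolation. The following is a self-contained sketch — plain functions and a toy vocabulary standing in for the tokenizer class, not the real library code — showing how a prefix yields distinct file names even though the base name is fixed by `VOCAB_FILES_NAMES`:

```python
import json
import os
import tempfile
from typing import Optional, Tuple

# Fixed mapping, as in tokenization_wav2vec2.py.
VOCAB_FILES_NAMES = {"vocab_file": "vocab.json"}

def save_vocabulary(encoder: dict, save_directory: str,
                    filename_prefix: Optional[str] = None) -> Tuple[str]:
    # Mirrors the quoted Wav2Vec2 save_vocabulary: the base file name always
    # comes from VOCAB_FILES_NAMES, and only filename_prefix can vary it.
    vocab_file = os.path.join(
        save_directory,
        (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"],
    )
    with open(vocab_file, "w", encoding="utf-8") as f:
        f.write(json.dumps(encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n")
    return (vocab_file,)

output_dir = tempfile.mkdtemp()
(path1,) = save_vocabulary({"<pad>": 0, "|": 1}, output_dir, filename_prefix="encoder1")
(path2,) = save_vocabulary({"<pad>": 0, "|": 1}, output_dir, filename_prefix="encoder2")
print(os.path.basename(path1))  # encoder1-vocab.json
print(os.path.basename(path2))  # encoder2-vocab.json
```

Because the base name is taken from the module-level constant, passing distinct prefixes (or distinct directories) is the supported way to keep two vocabularies from overwriting each other.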
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20367/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20366
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20366/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20366/comments
https://api.github.com/repos/huggingface/transformers/issues/20366/events
https://github.com/huggingface/transformers/issues/20366
1,458,884,514
I_kwDOCUB6oc5W9M-i
20,366
trainer.evaluate infinite loop problem
{ "login": "jp1924", "id": 93233241, "node_id": "U_kgDOBY6gWQ", "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jp1924", "html_url": "https://github.com/jp1924", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "organizations_url": "https://api.github.com/users/jp1924/orgs", "repos_url": "https://api.github.com/users/jp1924/repos", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "received_events_url": "https://api.github.com/users/jp1924/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The evaluation loop in the `Trainer` does not support un-padded outputs indeed, as it doesn't occur with any model of the library in our examples. Fixing it would be quite involved so I'd recommend using the `Accelerate` library which provides a method to pad across processes to evaluate such models." ]
1,669
1,669
1,669
NONE
null
### System Info system info OS: Ubuntu 18.04.6 LTS GPUS: RTX 3090 * 2 CUDA: 11.1 python: 3.8 transformers: 4.23.1 pytorch: 1.10.1+cu111 NCCL: 2.10.3+cuda11.1 ### Who can help? @sgugger @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import TrainingArguments, Trainer, BertTokenizerFast, HfArgumentParser from transformers.utils import ModelOutput from transformers.trainer_utils import is_main_process from datasets import load_dataset, Dataset import torch import torch.nn as nn import torch.distributed as dist class DummyModeloutput(ModelOutput): loss: torch.FloatTensor = None logits: torch.FloatTensor = None class DummyModel(nn.Module): def __init__(self) -> None: super(DummyModel, self).__init__() self.dummy_layer = nn.Linear(10, 10) self.count = 0 def forward(self, input_ids, labels, *args, **kwargs): rank = dist.get_rank() device = torch.device(rank) if is_main_process(rank): logits = torch.zeros((2, 512, 42 + self.count, 111), device=device) else: logits = torch.ones((2, 231, 70 + self.count, 111), device=device) loss = torch.tensor([0.5], device=device) self.count += 1 return DummyModeloutput(loss=loss, logits=logits) def main(parser: HfArgumentParser) -> None: args, _ = parser.parse_args_into_dataclasses(return_remaining_strings=True) def imdb_preprocesser(dataset: Dataset) -> dict: text = dataset["text"] label = dataset["label"] encoded_data = tokenizer(text, return_attention_mask=False) encoded_data["label"] = label return encoded_data tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") model = DummyModel() imdb_data = load_dataset("imdb") train_data = imdb_data["train"].train_test_split(0.02)["test"] valid_data = imdb_data["test"] train_data = train_data.map(imdb_preprocesser, num_proc=3) valid_data = 
valid_data.map(imdb_preprocesser, num_proc=3) trainer = Trainer( model=model, tokenizer=tokenizer, train_dataset=train_data, eval_dataset=valid_data, args=args, compute_metrics=lambda x: x, ) trainer.evaluate(eval_dataset=valid_data) if "__main__" in __name__: parser = HfArgumentParser([TrainingArguments]) main(parser) """ for vscode user launch.json { "name": "Python: infinite_loop", "type": "python", "request": "launch", "module": "torch.distributed.launch", "console": "integratedTerminal", "justMyCode": false, "env": { "CUDA_VISIBLE_DEVICES": "0, 2", "WANDB_DISABLED": "true", "TORCH_CPP_LOG_LEVEL": "DEBUG", "NCCL_DEBUG": "INFO", "NCCL_DEBUG_SUBSYS": "COLL", // "TORCH_DISTRIBUTED_DEBUG": "DETAIL", }, "args": [ "--standalone", "--nnodes=1", "--nproc_per_node=2", "", "--output_dir=", "--do_train=true", "--do_eval=true", "--do_eval=true", "--per_device_train_batch_size=2", "--learning_rate=1e-5", "--evaluation_strategy=steps", "--eval_steps=2", "--save_strategy=no" ] }, """ ``` ### Expected behavior --- This issue occurred during the implementation of the Streaming model called [Transformer-Transducer](https://arxiv.org/abs/2002.02562) as HuggingFace. Before explaining this issue, it is first necessary to know the loss used by this model. this model uses a loss function called [RNN-T loss](https://pytorch.org/audio/stable/generated/torchaudio.functional.rnnt_loss.html#torchaudio.functional.rnnt_loss) provided by torchaudio. Unlike CTC-loss, RNN-T loss uses logits in 4 dimensions tensors like this ``` >>> logits.shape (batch, max seq length, max target length + 1, class) ``` Depending on the data entered here, mel_seq and max_target_length will vary ex) [cuda:0]output_logits shape: (4, 512, 42, 111) [cuda:1]output_logits shape: (4, 286, 32, 111) and this model uses LogMel-Spectrogram as train_data --- This issue occurs in evaluation_loop when training using single-node DDP in the Trainer. 
When i evaluating this model, issue occurred like below ```python Detected mismatch between collectives on ranks. Rank 1 is running inconsistent collective: CollectiveFingerPrint(     OpType=ALLGATHER,     TensorShape=[1, 279, 44, 72],     TensorDtypes=Float,     TensorDeviceTypes=TensorOptions(         dtype=float (default),         device=cuda,         layout=Strided (default),         requires_grad=false (default),         pinned_memory=false (default),         memory_format=(nullopt)     ) )   File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 2003, in all_gather     work = default_pg.allgather([tensor_list], [tensor])   File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 212, in distributed_concat     dist.all_gather(output_tensors, tensor)   File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 3101, in _nested_gather     tensors = distributed_concat(tensors)   File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 2987, in evaluation_loop     logits = self._nested_gather(logits)   File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 2774, in evaluate     output = eval_loop(   File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 2052, in _maybe_log_save_evaluate     metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)   File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 1819, in _inner_training_loop     self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)   File "[My_folder_path]/venv_for_transducer/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train     return inner_training_loop(   File "[My_folder_path]/transformer-transducer/main.py", line 
115, in train     outputs = trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)   File "[My_folder_path]/transformer-transducer/main.py", line 96, in main     train(trainer, train_args)   File "[My_folder_path]/transformer-transducer/main.py", line 160, in <module>     main(parser) ``` This is a issue that arises from the [all_gather](https://pytorch.org/docs/stable/distributed.html#torch.distributed.all_gather) feature of DDP. The all_gather has the function of receiving a tensors from all devices belonging to the group However, this issue occurs in the process of importing the tensors ```python from transformers.trainer_utils import is_main_process import torch.distributed as dist import torch import os def main() -> None:     dist.init_process_group("nccl")     rank = dist.get_rank()     device = torch.device(rank)     if is_main_process(rank):         tensor = torch.zeros((2, 100, 100), device=device)     else:         tensor = torch.ones((2, 100, 70), device=device)     output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]     dist.all_gather(output_tensors, tensor) if "__main__" in __name__:     os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"     os.enviton["CUDA_VISIBLE_DEVICES"] = "0,1"     os.environ['TORCH_CPP_LOG_LEVEL']="DEBUG"     main() ``` the size of the "output_tensors" is smaller than the size of the "tensors", the same "mismatch between collectives" problem occurs as above. In above code, "TORCH_DISTRIBUTED_DEBUG" is set to "DETAIL", but if it isn't done, an error will not be printed. all_gather just returns "output_tensors" to None. But evaluation_loop all_gather returns "output_tensor" and then does "torch.concat" with the existing tensor In particular, in the process of "torch.concat " "output_tensors" in the None state with an existing tensor, i found a problem that does not output errors and takes on infinite loop. 
--- In fact, I know that Transformer-Transducer is a model that is not supported by Hugging Face, and this problem occurs because the model is not well suited to the Hugging Face Trainer. Still, I think it would be great to add a streaming ASR model such as Transformer-Transducer to Hugging Face, so this is an issue I found during my experiments. If there is any way or idea to solve this problem, please let me know
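As background on the hang described in this issue: collectives such as `all_gather` assume every rank contributes a tensor of identical shape, so the mismatched logits shapes either error out or deadlock. The usual workaround — and the idea behind the padding-across-processes utility in the Accelerate library that the maintainer suggests — is to pad each rank's output to a common shape (with an ignore value such as -100) before gathering. A framework-free sketch of that idea, with nested Python lists standing in for tensors:

```python
def pad_to_common_shape(batches, pad_value=-100):
    """Pad a list of 2-D 'tensors' (nested lists) so that every batch ends up
    with the same dimensions, making a subsequent gather/concat safe."""
    max_rows = max(len(b) for b in batches)
    max_cols = max(len(row) for b in batches for row in b)
    padded = []
    for b in batches:
        # Pad each row to max_cols, then pad with whole rows up to max_rows.
        rows = [row + [pad_value] * (max_cols - len(row)) for row in b]
        rows += [[pad_value] * max_cols for _ in range(max_rows - len(rows))]
        padded.append(rows)
    return padded

# Two "ranks" producing logits with different seq/target lengths:
rank0 = [[1, 2, 3], [4, 5, 6]]       # shape (2, 3)
rank1 = [[7, 8], [9, 10], [11, 12]]  # shape (3, 2)
gathered = pad_to_common_shape([rank0, rank1])
# Every padded batch now has shape (3, 3).
```

With real tensors the same preprocessing (pad to the per-step maximum on every rank, then `all_gather`) avoids the collective mismatch; the metric computation then masks out the pad value.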
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20366/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20365
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20365/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20365/comments
https://api.github.com/repos/huggingface/transformers/issues/20365/events
https://github.com/huggingface/transformers/pull/20365
1,458,852,363
PR_kwDOCUB6oc5DZ6E4
20,365
Replace assertion with ValueError exceptions in run_image_captioning_flax.py
{ "login": "katiele47", "id": 54815905, "node_id": "MDQ6VXNlcjU0ODE1OTA1", "avatar_url": "https://avatars.githubusercontent.com/u/54815905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/katiele47", "html_url": "https://github.com/katiele47", "followers_url": "https://api.github.com/users/katiele47/followers", "following_url": "https://api.github.com/users/katiele47/following{/other_user}", "gists_url": "https://api.github.com/users/katiele47/gists{/gist_id}", "starred_url": "https://api.github.com/users/katiele47/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/katiele47/subscriptions", "organizations_url": "https://api.github.com/users/katiele47/orgs", "repos_url": "https://api.github.com/users/katiele47/repos", "events_url": "https://api.github.com/users/katiele47/events{/privacy}", "received_events_url": "https://api.github.com/users/katiele47/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks, but the issue was intended for library code, not examples. Let's see if @sanchit-gandhi doesn't mind!", "Thank you for your suggestions @sanchit-gandhi! @sgugger ahh I should have known that. I'll update the library code next time.\r\n\r\n[Updated] @sanchit-gandhi After incorporating the suggestions I ran into the failed CI tests below, which I'm not sure how to fix. I assume it has to do with missing some code reformatting. I tried running `make style` on the target folder and received the following output:\r\n\r\n<img width=\"470\" alt=\"Screen Shot 2022-11-22 at 4 05 56 PM\" src=\"https://user-images.githubusercontent.com/54815905/203421123-e0a20022-f62e-4e78-a6dc-903a826cd6cf.png\">\r\n\r\nI'm also new to open source, but I'll look into this issue in the meantime. Would appreciate any of your inputs! Thanks.", "Hey @katiele47, sorry for the late reply! \r\n\r\nCould you check that you've installed Transformers form source? https://huggingface.co/docs/transformers/pr_checks#checks-on-a-pull-request\r\n\r\nYou can make sure you've installed the dev version of transformers by uninstalling transformers then pip installing from within the transformers repo:\r\n```\r\npip uninstall transformers\r\npip install -e .[dev]\r\n```\r\n\r\nYou should then be able to run `make style` for code quality fixes!", "Hello @katiele47, in my case, \r\n\r\nI used \r\n- `pip install black`\r\n- `black [your filepath]`\r\n\r\n\r\nand this enabled CircleCi to pass the code quality tests.\r\n\r\n**Output**\r\n\r\n![reformat output](https://user-images.githubusercontent.com/35699839/204171957-d8659cb6-963b-4f10-ae94-aec44a715817.png)\r\n", "Thank you so much for reviewing my change @sanchit-gandhi!" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Replaces 4 `assert` statements with `ValueError` exceptions in run_image_captioning_flax.py. Co-author: @AdiaWu Related to [#12789](https://github.com/huggingface/transformers/issues/12789). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
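The transformation this PR applies is mechanical: replace `assert cond, msg` with an explicit `if not cond: raise ValueError(msg)`, which survives `python -O` (asserts are stripped under optimization) and gives callers a catchable, descriptive error. A minimal sketch with a hypothetical length check — the actual conditions in run_image_captioning_flax.py differ:

```python
# Before: an assert, silently stripped when Python runs with -O and raising
# an uninformative AssertionError otherwise.
def check_lengths_assert(inputs, labels):
    assert len(inputs) == len(labels), "length mismatch"

# After: an explicit ValueError with an actionable message.
def check_lengths(inputs, labels):
    if len(inputs) != len(labels):
        raise ValueError(
            f"Got {len(inputs)} inputs but {len(labels)} labels; "
            "they must have the same length."
        )

try:
    check_lengths([1, 2, 3], [1, 2])
except ValueError as e:
    print(e)  # Got 3 inputs but 2 labels; they must have the same length.
```

The explicit exception also makes the failure mode part of the function's documented contract rather than a debugging aid.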
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20365/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20365", "html_url": "https://github.com/huggingface/transformers/pull/20365", "diff_url": "https://github.com/huggingface/transformers/pull/20365.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20365.patch", "merged_at": 1669647986000 }
https://api.github.com/repos/huggingface/transformers/issues/20364
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20364/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20364/comments
https://api.github.com/repos/huggingface/transformers/issues/20364/events
https://github.com/huggingface/transformers/issues/20364
1,458,785,626
I_kwDOCUB6oc5W801a
20,364
Link to Google Colab notebook not working
{ "login": "jaimebw", "id": 22820466, "node_id": "MDQ6VXNlcjIyODIwNDY2", "avatar_url": "https://avatars.githubusercontent.com/u/22820466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaimebw", "html_url": "https://github.com/jaimebw", "followers_url": "https://api.github.com/users/jaimebw/followers", "following_url": "https://api.github.com/users/jaimebw/following{/other_user}", "gists_url": "https://api.github.com/users/jaimebw/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaimebw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaimebw/subscriptions", "organizations_url": "https://api.github.com/users/jaimebw/orgs", "repos_url": "https://api.github.com/users/jaimebw/repos", "events_url": "https://api.github.com/users/jaimebw/events{/privacy}", "received_events_url": "https://api.github.com/users/jaimebw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ohmeow who wrote the notebook.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
Hi, it seems that the link to ```A notebook on how to [finetune BART for summarization with fastai using blurr](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb).``` is broken. @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20364/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20363
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20363/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20363/comments
https://api.github.com/repos/huggingface/transformers/issues/20363/events
https://github.com/huggingface/transformers/pull/20363
1,458,763,958
PR_kwDOCUB6oc5DZmmb
20,363
Bump tensorflow from 2.8.1 to 2.9.3 in /examples/research_projects/decision_transformer
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it." ]
1,669
1,669
1,669
CONTRIBUTOR
null
Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.8.1 to 2.9.3. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/releases">tensorflow's releases</a>.</em></p> <blockquote> <h2>TensorFlow 2.9.3</h2> <h1>Release 2.9.3</h1> <p>This release introduces several vulnerability fixes:</p> <ul> <li>Fixes an overflow in <code>tf.keras.losses.poisson</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41887">CVE-2022-41887</a>)</li> <li>Fixes a heap OOB failure in <code>ThreadUnsafeUnigramCandidateSampler</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880">CVE-2022-41880</a>)</li> <li>Fixes a segfault in <code>ndarray_tensor_bridge</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884">CVE-2022-41884</a>)</li> <li>Fixes an overflow in <code>FusedResizeAndPadConv2D</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885">CVE-2022-41885</a>)</li> <li>Fixes a overflow in <code>ImageProjectiveTransformV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886">CVE-2022-41886</a>)</li> <li>Fixes an FPE in <code>tf.image.generate_bounding_box_proposals</code> on GPU (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888">CVE-2022-41888</a>)</li> <li>Fixes a segfault in <code>pywrap_tfe_src</code> caused by invalid attributes (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889">CVE-2022-41889</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>BCast</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890">CVE-2022-41890</a>)</li> <li>Fixes a segfault in <code>TensorListConcat</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891">CVE-2022-41891</a>)</li> <li>Fixes a <code>CHECK_EQ</code> fail in <code>TensorListResize</code> (<a 
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893">CVE-2022-41893</a>)</li> <li>Fixes an overflow in <code>CONV_3D_TRANSPOSE</code> on TFLite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894">CVE-2022-41894</a>)</li> <li>Fixes a heap OOB in <code>MirrorPadGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895">CVE-2022-41895</a>)</li> <li>Fixes a crash in <code>Mfcc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896">CVE-2022-41896</a>)</li> <li>Fixes a heap OOB in <code>FractionalMaxPoolGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897">CVE-2022-41897</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>SparseFillEmptyRowsGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898">CVE-2022-41898</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>SdcaOptimizer</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899">CVE-2022-41899</a>)</li> <li>Fixes a heap OOB in <code>FractionalAvgPool</code> and <code>FractionalMaxPool</code>(<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41900">CVE-2022-41900</a>)</li> <li>Fixes a <code>CHECK_EQ</code> in <code>SparseMatrixNNZ</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41901">CVE-2022-41901</a>)</li> <li>Fixes an OOB write in grappler (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41902">CVE-2022-41902</a>)</li> <li>Fixes a overflow in <code>ResizeNearestNeighborGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41907">CVE-2022-41907</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>PyFunc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41908">CVE-2022-41908</a>)</li> <li>Fixes a segfault in <code>CompositeTensorVariantToComponents</code> (<a 
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41909">CVE-2022-41909</a>)</li> <li>Fixes a invalid char to bool conversion in printing a tensor (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41911">CVE-2022-41911</a>)</li> <li>Fixes a heap overflow in <code>QuantizeAndDequantizeV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41910">CVE-2022-41910</a>)</li> <li>Fixes a <code>CHECK</code> failure in <code>SobolSample</code> via missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>TensorListScatter</code> and <code>TensorListScatterV2</code> in eager mode (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li> </ul> <h2>TensorFlow 2.9.2</h2> <h1>Release 2.9.2</h1> <p>This releases introduces several vulnerability fixes:</p> <ul> <li>Fixes a <code>CHECK</code> failure in tf.reshape caused by overflows (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35934">CVE-2022-35934</a>)</li> <li>Fixes a <code>CHECK</code> failure in <code>SobolSample</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li> <li>Fixes an OOB read in <code>Gather_nd</code> op in TF Lite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35937">CVE-2022-35937</a>)</li> <li>Fixes a <code>CHECK</code> failure in <code>TensorListReserve</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35960">CVE-2022-35960</a>)</li> <li>Fixes an OOB write in <code>Scatter_nd</code> op in TF Lite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35939">CVE-2022-35939</a>)</li> <li>Fixes an integer overflow in <code>RaggedRangeOp</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35940">CVE-2022-35940</a>)</li> 
<li>Fixes a <code>CHECK</code> failure in <code>AvgPoolOp</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35941">CVE-2022-35941</a>)</li> <li>Fixes a <code>CHECK</code> failures in <code>UnbatchGradOp</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35952">CVE-2022-35952</a>)</li> <li>Fixes a segfault TFLite converter on per-channel quantized transposed convolutions (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-36027">CVE-2022-36027</a>)</li> <li>Fixes a <code>CHECK</code> failures in <code>AvgPool3DGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35959">CVE-2022-35959</a>)</li> <li>Fixes a <code>CHECK</code> failures in <code>FractionalAvgPoolGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35963">CVE-2022-35963</a>)</li> <li>Fixes a segfault in <code>BlockLSTMGradV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35964">CVE-2022-35964</a>)</li> <li>Fixes a segfault in <code>LowerBound</code> and <code>UpperBound</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35965">CVE-2022-35965</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md">tensorflow's changelog</a>.</em></p> <blockquote> <h1>Release 2.9.3</h1> <p>This release introduces several vulnerability fixes:</p> <ul> <li>Fixes an overflow in <code>tf.keras.losses.poisson</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41887">CVE-2022-41887</a>)</li> <li>Fixes a heap OOB failure in <code>ThreadUnsafeUnigramCandidateSampler</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880">CVE-2022-41880</a>)</li> <li>Fixes a segfault in <code>ndarray_tensor_bridge</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884">CVE-2022-41884</a>)</li> <li>Fixes an overflow in <code>FusedResizeAndPadConv2D</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885">CVE-2022-41885</a>)</li> <li>Fixes a overflow in <code>ImageProjectiveTransformV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886">CVE-2022-41886</a>)</li> <li>Fixes an FPE in <code>tf.image.generate_bounding_box_proposals</code> on GPU (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888">CVE-2022-41888</a>)</li> <li>Fixes a segfault in <code>pywrap_tfe_src</code> caused by invalid attributes (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889">CVE-2022-41889</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>BCast</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890">CVE-2022-41890</a>)</li> <li>Fixes a segfault in <code>TensorListConcat</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891">CVE-2022-41891</a>)</li> <li>Fixes a <code>CHECK_EQ</code> fail in <code>TensorListResize</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893">CVE-2022-41893</a>)</li> <li>Fixes an 
overflow in <code>CONV_3D_TRANSPOSE</code> on TFLite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894">CVE-2022-41894</a>)</li> <li>Fixes a heap OOB in <code>MirrorPadGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895">CVE-2022-41895</a>)</li> <li>Fixes a crash in <code>Mfcc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896">CVE-2022-41896</a>)</li> <li>Fixes a heap OOB in <code>FractionalMaxPoolGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897">CVE-2022-41897</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>SparseFillEmptyRowsGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898">CVE-2022-41898</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>SdcaOptimizer</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899">CVE-2022-41899</a>)</li> <li>Fixes a heap OOB in <code>FractionalAvgPool</code> and <code>FractionalMaxPool</code>(<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41900">CVE-2022-41900</a>)</li> <li>Fixes a <code>CHECK_EQ</code> in <code>SparseMatrixNNZ</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41901">CVE-2022-41901</a>)</li> <li>Fixes an OOB write in grappler (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41902">CVE-2022-41902</a>)</li> <li>Fixes a overflow in <code>ResizeNearestNeighborGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41907">CVE-2022-41907</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>PyFunc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41908">CVE-2022-41908</a>)</li> <li>Fixes a segfault in <code>CompositeTensorVariantToComponents</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41909">CVE-2022-41909</a>)</li> <li>Fixes a invalid char to bool conversion in printing a tensor (<a 
href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41911">CVE-2022-41911</a>)</li> <li>Fixes a heap overflow in <code>QuantizeAndDequantizeV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41910">CVE-2022-41910</a>)</li> <li>Fixes a <code>CHECK</code> failure in <code>SobolSample</code> via missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>TensorListScatter</code> and <code>TensorListScatterV2</code> in eager mode (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-35935">CVE-2022-35935</a>)</li> </ul> <h1>Release 2.8.4</h1> <p>This release introduces several vulnerability fixes:</p> <ul> <li>Fixes a heap OOB failure in <code>ThreadUnsafeUnigramCandidateSampler</code> caused by missing validation (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41880">CVE-2022-41880</a>)</li> <li>Fixes a segfault in <code>ndarray_tensor_bridge</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41884">CVE-2022-41884</a>)</li> <li>Fixes an overflow in <code>FusedResizeAndPadConv2D</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41885">CVE-2022-41885</a>)</li> <li>Fixes a overflow in <code>ImageProjectiveTransformV2</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41886">CVE-2022-41886</a>)</li> <li>Fixes an FPE in <code>tf.image.generate_bounding_box_proposals</code> on GPU (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41888">CVE-2022-41888</a>)</li> <li>Fixes a segfault in <code>pywrap_tfe_src</code> caused by invalid attributes (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41889">CVE-2022-41889</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>BCast</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41890">CVE-2022-41890</a>)</li> <li>Fixes a segfault in 
<code>TensorListConcat</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41891">CVE-2022-41891</a>)</li> <li>Fixes a <code>CHECK_EQ</code> fail in <code>TensorListResize</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41893">CVE-2022-41893</a>)</li> <li>Fixes an overflow in <code>CONV_3D_TRANSPOSE</code> on TFLite (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41894">CVE-2022-41894</a>)</li> <li>Fixes a heap OOB in <code>MirrorPadGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41895">CVE-2022-41895</a>)</li> <li>Fixes a crash in <code>Mfcc</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41896">CVE-2022-41896</a>)</li> <li>Fixes a heap OOB in <code>FractionalMaxPoolGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41897">CVE-2022-41897</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>SparseFillEmptyRowsGrad</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41898">CVE-2022-41898</a>)</li> <li>Fixes a <code>CHECK</code> fail in <code>SdcaOptimizer</code> (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41899">CVE-2022-41899</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/tensorflow/tensorflow/commit/a5ed5f39b675a1c6f315e0caf3ad4b38478fa571"><code>a5ed5f3</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58584">#58584</a> from tensorflow/vinila21-patch-2</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/258f9a1251346d93e129c53f82d21732df6067f5"><code>258f9a1</code></a> Update py_func.cc</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/cd27cfb438b78a019ff8a215a9d6c58d10c062c3"><code>cd27cfb</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58580">#58580</a> from tensorflow-jenkins/version-numbers-2.9.3-24474</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/3e75385ee6c9ef8f06d6848244e1421c603dd4a1"><code>3e75385</code></a> Update version numbers to 2.9.3</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/bc72c39774b0a0cb38ed03e5ee09fa78103ed749"><code>bc72c39</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58482">#58482</a> from tensorflow-jenkins/relnotes-2.9.3-25695</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/3506c90f5ac0f471a6b1d60d4055b14ca3da170b"><code>3506c90</code></a> Update RELEASE.md</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/8dcb48e384cd3914458f3c494f1da878ae8dc6d5"><code>8dcb48e</code></a> Update RELEASE.md</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/4f34ec84994e63cf47c1d13748a404edd3d5a0d3"><code>4f34ec8</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58576">#58576</a> from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/6fc67e408f239384d26acabc34d287911af92dc8"><code>6fc67e4</code></a> Replace 
CHECK with returning an InternalError on failing to create python tuple</li> <li><a href="https://github.com/tensorflow/tensorflow/commit/5dbe90ad21068007cbc31a56e8ed514ec27e0b26"><code>5dbe90a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/tensorflow/tensorflow/issues/58570">#58570</a> from tensorflow/r2.9-7b174a0f2e4</li> <li>Additional commits viewable in <a href="https://github.com/tensorflow/tensorflow/compare/v2.8.1...v2.9.3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tensorflow&package-manager=pip&previous-version=2.8.1&new-version=2.9.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20363/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20363", "html_url": "https://github.com/huggingface/transformers/pull/20363", "diff_url": "https://github.com/huggingface/transformers/pull/20363.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20363.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20362
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20362/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20362/comments
https://api.github.com/repos/huggingface/transformers/issues/20362/events
https://github.com/huggingface/transformers/issues/20362
1,458,578,372
I_kwDOCUB6oc5W8CPE
20,362
Add FlexiBERT
{ "login": "shikhartuli", "id": 40000988, "node_id": "MDQ6VXNlcjQwMDAwOTg4", "avatar_url": "https://avatars.githubusercontent.com/u/40000988?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shikhartuli", "html_url": "https://github.com/shikhartuli", "followers_url": "https://api.github.com/users/shikhartuli/followers", "following_url": "https://api.github.com/users/shikhartuli/following{/other_user}", "gists_url": "https://api.github.com/users/shikhartuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/shikhartuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shikhartuli/subscriptions", "organizations_url": "https://api.github.com/users/shikhartuli/orgs", "repos_url": "https://api.github.com/users/shikhartuli/repos", "events_url": "https://api.github.com/users/shikhartuli/events{/privacy}", "received_events_url": "https://api.github.com/users/shikhartuli/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "The PR was closed by bot. Please reopen the PR and let me know if anything needs to be done to merge the PR. The PR would add the FlexiBERT suite of models to 🤗 Transformers." ]
1,669
1,672
null
CONTRIBUTOR
null
### Model description FlexiBERT is a suite of *flexible* and *heterogeneous* models. The design space was proposed in this [paper](https://arxiv.org/abs/2205.11656) accepted for publication in the Journal of Artificial Intelligence Research. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The weights are available [here](https://huggingface.co/shikhartuli/flexibert-mini).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20362/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20362/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/20361
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20361/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20361/comments
https://api.github.com/repos/huggingface/transformers/issues/20361/events
https://github.com/huggingface/transformers/issues/20361
1,458,542,806
I_kwDOCUB6oc5W75jW
20,361
error loading facebook/opt-30b with text generation pipeline using 8bit mixed precision
{ "login": "morrisalp", "id": 8263996, "node_id": "MDQ6VXNlcjgyNjM5OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/morrisalp", "html_url": "https://github.com/morrisalp", "followers_url": "https://api.github.com/users/morrisalp/followers", "following_url": "https://api.github.com/users/morrisalp/following{/other_user}", "gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}", "starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions", "organizations_url": "https://api.github.com/users/morrisalp/orgs", "repos_url": "https://api.github.com/users/morrisalp/repos", "events_url": "https://api.github.com/users/morrisalp/events{/privacy}", "received_events_url": "https://api.github.com/users/morrisalp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "If you can create your tokenizer and model, just send them to the pipeline function as a quick workaround :-)", "@sgugger This actually leads to another error:\r\n\r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nmodel = \"facebook/opt-30b\"\r\nmodel_kwargs = {\"device_map\": \"auto\", \"load_in_8bit\": True}\r\nm = AutoModelForCausalLM.from_pretrained(model, device_map=\"auto\")\r\ntokenizer = AutoTokenizer.from_pretrained(model, use_fast=False)\r\ngenerator = pipeline(task=\"text-generation\", model=m, tokenizer=tokenizer, device=0, model_kwargs=model_kwargs)\r\n```\r\n\r\nYields `NotImplementedError`:\r\n\r\n```\r\nFile /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:980, in Module.to.<locals>.convert(t)\r\n 977 if convert_to_format is not None and t.dim() in (4, 5):\r\n 978 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,\r\n 979 non_blocking, memory_format=convert_to_format)\r\n--> 980 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\r\n\r\nNotImplementedError: Cannot copy out of meta tensor; no data!\r\n```", "Please provide the full traceback, as we can't see what's happening otherwise especially since I can't reproduce locally on my side. cc @younesbelkada who might have better luck reproducing the bug!", "hi @morrisalp \r\nThanks for the heads up and for flagging the issue! \r\nCan you please try:\r\n\r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\r\nmodel = \"facebook/opt-30b\"\r\n\r\nmodel_kwargs = {\"device_map\": \"auto\", \"load_in_8bit\": True}\r\nm = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model, use_fast=False)\r\ngenerator = pipeline(task=\"text-generation\", model=m, tokenizer=tokenizer)\r\n```\r\nNo need to add `model_kwargs` and `device=0` in addition to what you have added ;) this should work! 
let us know here", "Full traceback for first error (using pipeline only):\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In [7], line 1\r\n----> 1 generator = pipeline(task=\"text-generation\", model=model, device=0, model_kwargs=model_kwargs)\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/pipelines/__init__.py:727, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)\r\n 723 # Infer the framework from the model\r\n 724 # Forced if framework already defined, inferred if it's None\r\n 725 # Will load the correct model if possible\r\n 726 model_classes = {\"tf\": targeted_task[\"tf\"], \"pt\": targeted_task[\"pt\"]}\r\n--> 727 framework, model = infer_framework_load_model(\r\n 728 model,\r\n 729 model_classes=model_classes,\r\n 730 config=config,\r\n 731 framework=framework,\r\n 732 task=task,\r\n 733 **hub_kwargs,\r\n 734 **model_kwargs,\r\n 735 )\r\n 737 model_config = model.config\r\n 738 hub_kwargs[\"_commit_hash\"] = model.config._commit_hash\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py:266, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)\r\n 263 continue\r\n 265 if isinstance(model, str):\r\n--> 266 raise ValueError(f\"Could not load model {model} with any of the following classes: {class_tuple}.\")\r\n 268 framework = \"tf\" if model.__class__.__name__.startswith(\"TF\") else \"pt\"\r\n 269 return framework, model\r\n\r\nValueError: Could not load model facebook/opt-30b with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>, <class 'transformers.models.opt.modeling_opt.OPTForCausalLM'>).\r\n```\r\n", "Full traceback for second error (creating tokenizer and model & 
passing them to pipeline):\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\nFile <timed exec>:1\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/pipelines/__init__.py:873, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)\r\n 870 if device is not None:\r\n 871 kwargs[\"device\"] = device\r\n--> 873 return pipeline_class(model=model, framework=framework, task=task, **kwargs)\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/pipelines/text_generation.py:49, in TextGenerationPipeline.__init__(self, *args, **kwargs)\r\n 48 def __init__(self, *args, **kwargs):\r\n---> 49 super().__init__(*args, **kwargs)\r\n 50 self.check_model_type(\r\n 51 TF_MODEL_FOR_CAUSAL_LM_MAPPING if self.framework == \"tf\" else MODEL_FOR_CAUSAL_LM_MAPPING\r\n 52 )\r\n 53 if \"prefix\" not in self._preprocess_params:\r\n 54 # This is very specific. 
The logic is quite complex and needs to be done\r\n 55 # as a \"default\".\r\n 56 # It also defines both some preprocess_kwargs and generate_kwargs\r\n 57 # which is why we cannot put them in their respective methods.\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/pipelines/base.py:778, in Pipeline.__init__(self, model, tokenizer, feature_extractor, modelcard, framework, task, args_parser, device, binary_output, **kwargs)\r\n 776 # Special handling\r\n 777 if self.framework == \"pt\" and self.device.type != \"cpu\":\r\n--> 778 self.model = self.model.to(self.device)\r\n 780 # Update config with task specific parameters\r\n 781 task_specific_params = self.model.config.task_specific_params\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:982, in Module.to(self, *args, **kwargs)\r\n 978 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,\r\n 979 non_blocking, memory_format=convert_to_format)\r\n 980 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\r\n--> 982 return self._apply(convert)\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:635, in Module._apply(self, fn)\r\n 633 def _apply(self, fn):\r\n 634 for module in self.children():\r\n--> 635 module._apply(fn)\r\n 637 def compute_should_use_set_data(tensor, tensor_applied):\r\n 638 if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):\r\n 639 # If the new tensor has compatible tensor type as the existing tensor,\r\n 640 # the current behavior is to change the tensor in-place using `.data =`,\r\n (...)\r\n 645 # global flag to let the user control whether they want the future\r\n 646 # behavior of overwriting the existing tensor or not.\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:635, in Module._apply(self, fn)\r\n 633 def _apply(self, fn):\r\n 634 for module in self.children():\r\n--> 635 module._apply(fn)\r\n 637 def 
compute_should_use_set_data(tensor, tensor_applied):\r\n 638 if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):\r\n 639 # If the new tensor has compatible tensor type as the existing tensor,\r\n 640 # the current behavior is to change the tensor in-place using `.data =`,\r\n (...)\r\n 645 # global flag to let the user control whether they want the future\r\n 646 # behavior of overwriting the existing tensor or not.\r\n\r\n [... skipping similar frames: Module._apply at line 635 (3 times)]\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:635, in Module._apply(self, fn)\r\n 633 def _apply(self, fn):\r\n 634 for module in self.children():\r\n--> 635 module._apply(fn)\r\n 637 def compute_should_use_set_data(tensor, tensor_applied):\r\n 638 if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):\r\n 639 # If the new tensor has compatible tensor type as the existing tensor,\r\n 640 # the current behavior is to change the tensor in-place using `.data =`,\r\n (...)\r\n 645 # global flag to let the user control whether they want the future\r\n 646 # behavior of overwriting the existing tensor or not.\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:658, in Module._apply(self, fn)\r\n 654 # Tensors stored in modules are graph leaves, and we don't want to\r\n 655 # track autograd history of `param_applied`, so we have to use\r\n 656 # `with torch.no_grad():`\r\n 657 with torch.no_grad():\r\n--> 658 param_applied = fn(param)\r\n 659 should_use_set_data = compute_should_use_set_data(param, param_applied)\r\n 660 if should_use_set_data:\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:980, in Module.to.<locals>.convert(t)\r\n 977 if convert_to_format is not None and t.dim() in (4, 5):\r\n 978 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,\r\n 979 non_blocking, memory_format=convert_to_format)\r\n--> 980 return t.to(device, dtype if 
t.is_floating_point() or t.is_complex() else None, non_blocking)\r\n\r\nNotImplementedError: Cannot copy out of meta tensor; no data!\r\n```", "@younesbelkada That code gives me the following error, but if this is a GPU OOM error then that is progress :)\r\n\r\n```\r\nm = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In [5], line 1\r\n----> 1 m = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:463, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 461 elif type(config) in cls._model_mapping.keys():\r\n 462 model_class = _get_model_class(config, cls._model_mapping)\r\n--> 463 return model_class.from_pretrained(\r\n 464 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs\r\n 465 )\r\n 466 raise ValueError(\r\n 467 f\"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\\n\"\r\n 468 f\"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}.\"\r\n 469 )\r\n\r\nFile /opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py:2280, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 2276 device_map_without_lm_head = {\r\n 2277 key: device_map[key] for key in device_map.keys() if key not in modules_to_not_convert\r\n 2278 }\r\n 2279 if \"cpu\" in device_map_without_lm_head.values() or \"disk\" in device_map_without_lm_head.values():\r\n-> 2280 raise ValueError(\r\n 2281 \"\"\"\r\n 2282 Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit\r\n 2283 the quantized model. If you have set a value for `max_memory` you should increase that. 
To have\r\n 2284 an idea of the modules that are set on the CPU or RAM you can print model.hf_device_map.\r\n 2285 \"\"\"\r\n 2286 )\r\n 2287 del device_map_without_lm_head\r\n 2289 if from_tf:\r\n\r\nValueError: \r\n Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit\r\n the quantized model. If you have set a value for `max_memory` you should increase that. To have\r\n an idea of the modules that are set on the CPU or RAM you can print model.hf_device_map.\r\n```", "Nice! Yes this error is indeed due to the fact that your int8 model does not fit your available GPU memory. Could you share here what is the hardware you are using (with the avail GPU RAM)? thanks!\r\n\r\nYou can do something like `nvidia-smi` and post the output here", "Sure, here are the specs (with nothing running currently):\r\n\r\n```\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 470.103.01 Driver Version: 470.103.01 CUDA Version: 11.8 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 NVIDIA RTX A5000 On | 00000000:C3:00.0 Off | Off |\r\n| 30% 21C P5 29W / 230W | 2MiB / 24256MiB | 0% Default |\r\n| | | N/A |\r\n+-------------------------------+----------------------+----------------------+\r\n```", "I see, the 30B parameters model needs at least around `30GB` to be loaded in 8-bit. So you'll need more GPU RAM to fit the model here sadly.\r\nHowever, if you absolutely want to run it, there is hacky solution. 
You can do:\r\n`pip install --upgrade git+https://github.com/younesbelkada/transformers@bnb_add_custom_map`\r\nand run:\r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\r\nmodel = \"facebook/opt-30b\"\r\n\r\ndevice_map = {\r\n \"model.decoder.embed_tokens\": 0,\r\n \"model.decoder.embed_positions\": 0,\r\n \"model.decoder.final_layer_norm\": 0,\r\n \"lm_head\": 0,\r\n \"model.decoder.layers.0\": 0,\r\n \"model.decoder.layers.1\": 0,\r\n \"model.decoder.layers.2\": 0,\r\n \"model.decoder.layers.3\": 0,\r\n \"model.decoder.layers.4\": 0,\r\n \"model.decoder.layers.5\": 0,\r\n \"model.decoder.layers.6\": 0,\r\n \"model.decoder.layers.7\": 0,\r\n \"model.decoder.layers.8\": 0,\r\n \"model.decoder.layers.9\": 0,\r\n \"model.decoder.layers.10\": 0,\r\n \"model.decoder.layers.11\": 0,\r\n \"model.decoder.layers.12\": 0,\r\n \"model.decoder.layers.13\": 0,\r\n \"model.decoder.layers.14\": 0,\r\n \"model.decoder.layers.15\": 0,\r\n \"model.decoder.layers.16\": 0,\r\n \"model.decoder.layers.17\": 0,\r\n \"model.decoder.layers.18\": 0,\r\n \"model.decoder.layers.19\": 0,\r\n \"model.decoder.layers.20\": 0,\r\n \"model.decoder.layers.21\": 0,\r\n \"model.decoder.layers.22\": 0,\r\n \"model.decoder.layers.23\": 0,\r\n \"model.decoder.layers.24\": 0,\r\n \"model.decoder.layers.25\": 0,\r\n \"model.decoder.layers.26\": 0,\r\n \"model.decoder.layers.27\": 0,\r\n \"model.decoder.layers.28\": 0,\r\n \"model.decoder.layers.29\": 0,\r\n \"model.decoder.layers.30\": 0,\r\n \"model.decoder.layers.31\": 0,\r\n \"model.decoder.layers.32\": 0,\r\n \"model.decoder.layers.33\": 0,\r\n \"model.decoder.layers.34\": 0,\r\n \"model.decoder.layers.35\": 0,\r\n \"model.decoder.layers.36\": 0,\r\n \"model.decoder.layers.37\": 0,\r\n \"model.decoder.layers.38\": 0,\r\n \"model.decoder.layers.39\": 0,\r\n \"model.decoder.layers.40\": 0,\r\n \"model.decoder.layers.41\": 0,\r\n \"model.decoder.layers.42\": \"cpu\",\r\n \"model.decoder.layers.43\": \"cpu\",\r\n 
\"model.decoder.layers.44\": \"cpu\",\r\n \"model.decoder.layers.45\": \"cpu\",\r\n \"model.decoder.layers.46\": \"cpu\",\r\n \"model.decoder.layers.47\": \"cpu\",\r\n}\r\n\r\nmodel_kwargs = {\"device_map\": device_map, \"load_in_8bit\": True}\r\nm = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model, use_fast=False)\r\ngenerator = pipeline(task=\"text-generation\", model=m, tokenizer=tokenizer) \r\n```\r\nBut keep in mind that the layers that will be set on `cpu` will be kept in their native `dtype` and not converted in `int8`. Also this feature is not supported yet as the integration should done on `bitsandbytes` side, so you may encounter unexpected behaviours but you can always give it a try! \r\n\r\nRelated: #19090", "Thanks! I mainly wanted to see what the largest LLM I could fit on one of my GPUs would be using mixed precision, and I couldn't tell previously if the 30B model would be OOM due to the other errors...", "I see now thanks a lot!\nClosing this issue as I consider to be completed, don't hesitate to reopen it if you have more questions !" ]
1,669
1,669
1,669
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0a0+d0d6b1f (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten, @Narsil, @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Running the following on a system with one (NVIDIA A5000) GPU: ``` from transformers import pipeline model = "facebook/opt-30b" model_kwargs = {"device_map": "auto", "load_in_8bit": True} generator = pipeline(task="text-generation", model=model, device=0, model_kwargs=model_kwargs) ``` yields error: `ValueError: Could not load model facebook/opt-30b with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>, <class 'transformers.models.opt.modeling_opt.OPTForCausalLM'>).` ### Expected behavior Should be able to create generator with no problem and generate text with `generator.__call__`. The code works with no error when using smaller opt model checkpoints: "facebook/opt-2.7b", "facebook/opt-6.7b". Can create model, tokenizer, and generate without pipeline using `AutoModelForCausalLM.from_pretrained(model, device_map="auto")` with `model="facebook/opt-30b"` despite the error message.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20361/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20360
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20360/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20360/comments
https://api.github.com/repos/huggingface/transformers/issues/20360/events
https://github.com/huggingface/transformers/pull/20360
1,458,537,757
PR_kwDOCUB6oc5DY0Pq
20,360
Fix toctree for Section 3 in Spanish Documentation
{ "login": "donelianc", "id": 7807897, "node_id": "MDQ6VXNlcjc4MDc4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donelianc", "html_url": "https://github.com/donelianc", "followers_url": "https://api.github.com/users/donelianc/followers", "following_url": "https://api.github.com/users/donelianc/following{/other_user}", "gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donelianc/subscriptions", "organizations_url": "https://api.github.com/users/donelianc/orgs", "repos_url": "https://api.github.com/users/donelianc/repos", "events_url": "https://api.github.com/users/donelianc/events{/privacy}", "received_events_url": "https://api.github.com/users/donelianc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Fixes #20359 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20360/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20360", "html_url": "https://github.com/huggingface/transformers/pull/20360", "diff_url": "https://github.com/huggingface/transformers/pull/20360.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20360.patch", "merged_at": 1669067074000 }
https://api.github.com/repos/huggingface/transformers/issues/20359
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20359/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20359/comments
https://api.github.com/repos/huggingface/transformers/issues/20359/events
https://github.com/huggingface/transformers/issues/20359
1,458,534,719
I_kwDOCUB6oc5W73k_
20,359
Missing sections in Spanish documentation
{ "login": "donelianc", "id": 7807897, "node_id": "MDQ6VXNlcjc4MDc4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donelianc", "html_url": "https://github.com/donelianc", "followers_url": "https://api.github.com/users/donelianc/followers", "following_url": "https://api.github.com/users/donelianc/following{/other_user}", "gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donelianc/subscriptions", "organizations_url": "https://api.github.com/users/donelianc/orgs", "repos_url": "https://api.github.com/users/donelianc/repos", "events_url": "https://api.github.com/users/donelianc/events{/privacy}", "received_events_url": "https://api.github.com/users/donelianc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @sgugger. Maybe you can help me to verify this issue I just opened. I attached a PR to fix it (you might also be the right person to review and approve it). Thanks!" ]
1,669
1,669
1,669
CONTRIBUTOR
null
**Expected output**: Nested sections with proper documents in Spanish translation. **Current output**: Some sections are missing (but docs are present), so documents are not correctly organized ----- After contributing to #15947, I noticed `_toctree.yml` for the Spanish translation does not follow the same order as the original documentation. For example, there is no `General Usage` section in the Spanish version, but the `Create a custom architecture` ([create_a_model](https://huggingface.co/docs/transformers/v4.24.0/es/create_a_model)) document is present. Contributions to #15947 missed adding the right sections when updating `_toctree.yml`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20359/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20358
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20358/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20358/comments
https://api.github.com/repos/huggingface/transformers/issues/20358/events
https://github.com/huggingface/transformers/issues/20358
1,458,519,016
I_kwDOCUB6oc5W7zvo
20,358
Integrate Timm models as vision encoders in Vision encoder decoder models
{ "login": "gagan3012", "id": 49101362, "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gagan3012", "html_url": "https://github.com/gagan3012", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "repos_url": "https://api.github.com/users/gagan3012/repos", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@NielsRogge ", "Hi,\r\n\r\nI don't think that makes sense as for the Vision encoder-decoder framework, one needs a Transformer-based encoder or at least an encoder that outputs a sequence of hidden states, which can be used for cross-attention with the Transformer-based language decoder. Given that most backbones in timm are convolution-based, which output 3D feature maps, one would first need to flatten/project them into a sequence of hidden states.\r\n\r\nThe VisionEncoderDecoderModel [framework](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) currently supports ViT, DeiT, BEiT and Swin Transformer, which is already a fair amount of models.", "I was more looking to use Swin with non-square image sizes and the Timm implementation allows us to do that and I would like to implement SwinV2 with bert", "> I was more looking to use Swin with non-square image sizes\r\n\r\nOur implementation also allows that, did you try it?", "I havent tried it can you share an example?\r\n", "```\r\nfrom transformers import AutoModelForImageClassification\r\nimport torch\r\n\r\nmodel = AutoModelForImageClassification.from_pretrained(\"microsoft/swinv2-tiny-patch4-window8-256\")\r\n\r\npixel_values = torch.randn(1, 3, 244, 522)\r\n\r\noutputs = model(pixel_values)\r\n```" ]
1,669
1,669
1,669
NONE
null
### Feature request Hello, I would like to use timm models as vision encoders in a vision encoder-decoder model. How can I do so? ### Motivation Timm models are very powerful and allow a lot of image processing, which would allow us to make better document AI models ### Your contribution I would be down to help out with a PR
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20358/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20357
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20357/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20357/comments
https://api.github.com/repos/huggingface/transformers/issues/20357/events
https://github.com/huggingface/transformers/issues/20357
1,458,474,778
I_kwDOCUB6oc5W7o8a
20,357
BART-large + JAX produce nan loss during training/eval
{ "login": "fen-deepscribe", "id": 112418559, "node_id": "U_kgDOBrNe_w", "avatar_url": "https://avatars.githubusercontent.com/u/112418559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fen-deepscribe", "html_url": "https://github.com/fen-deepscribe", "followers_url": "https://api.github.com/users/fen-deepscribe/followers", "following_url": "https://api.github.com/users/fen-deepscribe/following{/other_user}", "gists_url": "https://api.github.com/users/fen-deepscribe/gists{/gist_id}", "starred_url": "https://api.github.com/users/fen-deepscribe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fen-deepscribe/subscriptions", "organizations_url": "https://api.github.com/users/fen-deepscribe/orgs", "repos_url": "https://api.github.com/users/fen-deepscribe/repos", "events_url": "https://api.github.com/users/fen-deepscribe/events{/privacy}", "received_events_url": "https://api.github.com/users/fen-deepscribe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @sanchit-gandhi ", "Hey @fen-deepscribe! Sorry for the late reply here!\r\n\r\nLooks like there could be two issues at play:\r\n\r\n1. The BART-large weights are indeed stored in fp16 on the HF Hub. The Flax `.from_pretrained` method respects the dtype of the stored params (no upcast/downcast operations), so when we load from the checkpoint at [facebook/bart-large](https://huggingface.co/facebook/bart-large), we load the weights in fp16. You can read more about this here: https://github.com/huggingface/transformers/issues/16736. Loading the weights in fp16 precision might be causing undesirable behaviour during training: Flax doesn't expect a dtype of fp16 (only fp32 or bf16). This could be messing up the dtypes of the activations, giving exploding grads and updates. If you want to load the weights in fp32, you can use the checkpoint at [patrickvonplaten/bart-large-fp32](https://huggingface.co/patrickvonplaten/bart-large-fp32).\r\n\r\n2. BART-large is know to have numerical instabilities during fine-tuning (see https://github.com/huggingface/transformers/issues/15559 and https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564 and https://github.com/huggingface/transformers/issues/15559#issuecomment-1294457798). If you're fine-tuning the model for summarisation, you can try loading the checkpoint [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) -> this checkpoint is stable and should be less prone to exploding gradients! I would give this checkpoint a go with your fine-tuning experiments. 
It tends to be a much easier fix than those linked in the aforementioned thread!\r\n\r\nLet me know how you get on! More than happy to dig into this further if the exploding loss persists!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Well, I have the same problem. After a period of training ,Bart's forecast output is starting to get really weird. No matter what the model inputs, Bart will produce the same text. Whether I use FP16 or not. ", "Have you tried using the checkpoint [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn)?" ]
1,669
1,681
1,675
NONE
null
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.15.0-1023-aws-x86_64-with-glibc2.35 - Python version: 3.9.12 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.1 (gpu) - Jax version: 0.3.24 - JaxLib version: 0.3.24 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No - CUDA: NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 ### Who can help? @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I first experienced with a customized dataset with facebook/bart-large. The model trained well under torch+fp16. For speed purpose, I was trying to use jax using the provided example, but experienced nan loss in the evaluation loop. Sometime in the training loop as well. There is no issue with facebook/bart-base model. After that, I switched to the original example provided in the main repo and trained on a public dataset. Yet no luck. Here is the process of reproduction: 1. Install transformer under development mode with `pip install -e`. 2. going into example folder `cd transformers/examples/flax/summarization`. 3. run training script using following command: ``` python run_summarization_flax.py \ --output_dir ~/data/test-cnn_dailymail \ --model_name_or_path facebook/bart-large \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 \ --dataset_name cnn_dailymail \ --dataset_config_name 3.0.0 \ --do_train \ --do_eval \ --max_train_samples 10000 \ --max_eval_samples 100 \ --max_source_length 512 \ --max_target_length 200 ``` 4. 
adding fp16 with `--dtype float16` resulting in the same issue see log below: ``` INFO:__main__:***** Running training ***** INFO:__main__: Num examples = 10000 INFO:__main__: Num Epochs = 3 INFO:__main__: Instantaneous batch size per device = 2 INFO:__main__: Total train batch size (w. parallel & distributed) = 2 INFO:__main__: Total optimization steps = 15000 Epoch... (1/3 | Loss: nan, Learning Rate: 3.3336666092509404e-05) Epoch... (1/3 | Eval Loss: nan | ) ``` I also noticed that the torch model is saved in float16. I've tried to convert the model to float32 and load with `from_pt=True`. Receive same nan loss problem. Not sure if it related to this issue or not. ``` You should probably UPCAST the model weights to float32 if this was not intended. See [`~FlaxPreTrainedModel.to_fp32`] for further information on how to do this. ``` I will keep digging on this and share more information. Please kindly let me know if there is any recommend steps to debug this. :) ### Expected behavior bart-large can be trained and evaluated with normal loss with JAX.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20357/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20356
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20356/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20356/comments
https://api.github.com/repos/huggingface/transformers/issues/20356/events
https://github.com/huggingface/transformers/pull/20356
1,458,392,145
PR_kwDOCUB6oc5DYUoJ
20,356
Add FlexiBERT
{ "login": "shikhartuli", "id": 40000988, "node_id": "MDQ6VXNlcjQwMDAwOTg4", "avatar_url": "https://avatars.githubusercontent.com/u/40000988?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shikhartuli", "html_url": "https://github.com/shikhartuli", "followers_url": "https://api.github.com/users/shikhartuli/followers", "following_url": "https://api.github.com/users/shikhartuli/following{/other_user}", "gists_url": "https://api.github.com/users/shikhartuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/shikhartuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shikhartuli/subscriptions", "organizations_url": "https://api.github.com/users/shikhartuli/orgs", "repos_url": "https://api.github.com/users/shikhartuli/repos", "events_url": "https://api.github.com/users/shikhartuli/events{/privacy}", "received_events_url": "https://api.github.com/users/shikhartuli/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "The PR was closed by bot. Please reopen the PR and let me know if anything needs to be done to merge the PR." ]
1,669
1,672
1,672
CONTRIBUTOR
null
# What does this PR do? Implements the FlexiBERT suite of 3.32 billion transformer architectures (and also FlexiBERT 2.0 design space with 1.7 $\times$ 10<sup>88</sup> architectures). The design space supports *flexible* and *heterogeneous* transformer models with diverse attention types. The [paper](https://arxiv.org/abs/2205.11656) has been accepted for publication at the Journal of Artificial Intelligence Research. Fixes #20362 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @sgugger ## Tasks Completed From those provided in the [add new model](https://huggingface.co/docs/transformers/add_new_model) contribute section - [x] (Optional) Understood the model’s theoretical aspects - [x] Prepared 🤗 Transformers dev environment - [x] Set up debugging environment of the original repository - [x] Created script that successfully runs the forward() pass using the original repository and checkpoint (Available in Demo Colab) - [x] Successfully added the model skeleton to 🤗 Transformers - [x] Successfully converted original checkpoint to 🤗 Transformers checkpoint - [x] Successfully ran forward() pass in 🤗 Transformers that gives identical output to original checkpoint - [x] Finished model tests in 🤗 Transformers - [x] Successfully added tokenizer in 🤗 Transformers - [x] Run end-to-end integration tests - [x] Finished docs - [x] Uploaded model weights to the Hub (at this [link](https://huggingface.co/shikhartuli/flexibert-mini), shall add more models soon) - [x] Submitted the pull request - [ ] (Optional) Added a demo notebook Thanks so much!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20356/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20356/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20356", "html_url": "https://github.com/huggingface/transformers/pull/20356", "diff_url": "https://github.com/huggingface/transformers/pull/20356.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20356.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20355
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20355/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20355/comments
https://api.github.com/repos/huggingface/transformers/issues/20355/events
https://github.com/huggingface/transformers/pull/20355
1,458,361,639
PR_kwDOCUB6oc5DYN7F
20,355
Add missing tokenizer tests - RemBert
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@IMvision12 It looks like the added tests do not pass. You can try locally with\r\n```\r\npytest tests/models/rembert/test_tokenization_rembert.py\r\n```", "While using sentencepiece.model :\r\n\r\n`SAMPLE_VOCAB = get_tests_dir(\"fixtures/test_sentencepiece.model\")`\r\nthese tests are failing : \r\n\r\n- FAILED tests/models/rembert/test_tokenization_rembert.py::RemBertTokenizationTest::test_convert_token_and_id - AssertionError: '<unk>' != '[PAD]'\r\n- FAILED tests/models/rembert/test_tokenization_rembert.py::RemBertTokenizationTest::test_get_vocab - AssertionError: '<unk>' != '[PAD]'" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Fixes : #16627 Added tokenizer tests for Rembert. I took reference from test_tokenization_camembert.py @SaulLu @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20355/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20355", "html_url": "https://github.com/huggingface/transformers/pull/20355", "diff_url": "https://github.com/huggingface/transformers/pull/20355.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20355.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20354
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20354/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20354/comments
https://api.github.com/repos/huggingface/transformers/issues/20354/events
https://github.com/huggingface/transformers/pull/20354
1,458,301,290
PR_kwDOCUB6oc5DYBDG
20,354
Generate: shorter XLA contrastive search tests
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "And as always, if a one-line short comment in the code could keep your explanation more visible for other (HF) developers, don't hesitate." ]
1,669
1,669
1,669
MEMBER
null
# What does this PR do? Makes XLA contrastive search tests shorter (in terms of tokens), to avoid flaky tests. This is due to our recently failing CI for some models. The XLA code path passes the test with XLA compilation off -- i.e. the XLA code path returns the same as the non-XLA code path. However, with XLA compilation on, there is a chance of obtaining different results. I couldn't pinpoint the issue, but there is a possible explanation. This may be due to numerical stability issues: contrastive search takes the maximum of a cosine distance between two hidden states [built from randomly initialized weights] as a penalty to the logits, which combined with the logits' low values [because the test model was untrained] could explain the mismatch. 👉 In any case, I was already planning on reinforcing contrastive search XLA testing with real examples on key models, like T5 and OPT.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20354/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20354", "html_url": "https://github.com/huggingface/transformers/pull/20354", "diff_url": "https://github.com/huggingface/transformers/pull/20354.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20354.patch", "merged_at": 1669117633000 }
https://api.github.com/repos/huggingface/transformers/issues/20353
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20353/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20353/comments
https://api.github.com/repos/huggingface/transformers/issues/20353/events
https://github.com/huggingface/transformers/pull/20353
1,458,123,151
PR_kwDOCUB6oc5DXaip
20,353
Generate: `model_kwargs` can also be an input to `prepare_inputs_for_generation`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger added a minor test.\r\n\r\nI'd rather spend the energy removing the `**kwargs` and `**model_kwargs`, which are only used as a lazy pattern (as opposed to being forwarded to the model or similar). It would allow for much stricter checking :)" ]
1,669
1,669
1,669
MEMBER
null
# What does this PR do? Fixes #20347 `model_kwargs` can also be a model input in `prepare_inputs_for_generation` -- in some models it is `kwargs`, in others `model_kwargs`. This PR updates the input validation function to reflect that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20353/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20353", "html_url": "https://github.com/huggingface/transformers/pull/20353", "diff_url": "https://github.com/huggingface/transformers/pull/20353.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20353.patch", "merged_at": 1669047627000 }
https://api.github.com/repos/huggingface/transformers/issues/20352
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20352/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20352/comments
https://api.github.com/repos/huggingface/transformers/issues/20352/events
https://github.com/huggingface/transformers/pull/20352
1,458,105,768
PR_kwDOCUB6oc5DXWxK
20,352
Fix nightly runs
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
COLLABORATOR
null
# What does this PR do? The pipeline that runs the nightly tests exits with `Unexpected argument(s): nightly` instead of running all tests. I think we just need to add it in the custom config as an argument (unused) so circleCI doesn't complain anymore. Manually triggered the tests on this branch with `nightly` at `true` and it ran all tests (see workflow nightly in the tests).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20352/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20352", "html_url": "https://github.com/huggingface/transformers/pull/20352", "diff_url": "https://github.com/huggingface/transformers/pull/20352.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20352.patch", "merged_at": 1669131518000 }
https://api.github.com/repos/huggingface/transformers/issues/20351
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20351/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20351/comments
https://api.github.com/repos/huggingface/transformers/issues/20351/events
https://github.com/huggingface/transformers/pull/20351
1,457,885,338
PR_kwDOCUB6oc5DWnQ6
20,351
[VideoMAE] TensorFlow implementation
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20351). All of your documentation changes will be reflected on that endpoint.", "I am hitting an issue and thought of asking for help.\r\n\r\nThe issue stems from the port of `VideoMAESelfAttention` i.e., `TFVideoMAESelfAttention`. Upon analyzing deeper, I think it's because of the mismatch between `nn.functional.linear` and my custom `linear_transformation()`. This is my investigative [notebook](https://colab.research.google.com/gist/sayakpaul/d50d013f59674b943ef7e2b6ed9d2f91/scratchpad.ipynb) where I have tried debugging the issue but still no luck so far. The assertion errors seem to be within low tolerance but can easily sum up.\r\n\r\nCc: @amyeroberts @gante @Rocketknight1 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Added WIP label to prevent PR from being closed", "No point in extending it further. " ]
1,669
1,705
1,705
MEMBER
null
Closes #18641 ## TODO - [x] modeling_tf_videomae.py - [x] integration tests - [x] rest of the tests - [x] documentation
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20351/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20351", "html_url": "https://github.com/huggingface/transformers/pull/20351", "diff_url": "https://github.com/huggingface/transformers/pull/20351.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20351.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20350
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20350/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20350/comments
https://api.github.com/repos/huggingface/transformers/issues/20350/events
https://github.com/huggingface/transformers/pull/20350
1,457,818,193
PR_kwDOCUB6oc5DWYhn
20,350
Add missing tokenizer tests - RemBert
{ "login": "IMvision12", "id": 88665786, "node_id": "MDQ6VXNlcjg4NjY1Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/88665786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IMvision12", "html_url": "https://github.com/IMvision12", "followers_url": "https://api.github.com/users/IMvision12/followers", "following_url": "https://api.github.com/users/IMvision12/following{/other_user}", "gists_url": "https://api.github.com/users/IMvision12/gists{/gist_id}", "starred_url": "https://api.github.com/users/IMvision12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IMvision12/subscriptions", "organizations_url": "https://api.github.com/users/IMvision12/orgs", "repos_url": "https://api.github.com/users/IMvision12/repos", "events_url": "https://api.github.com/users/IMvision12/events{/privacy}", "received_events_url": "https://api.github.com/users/IMvision12/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for adding this! Can you first rebase your branch on main? TensorFlow new release broke a lot of things so tests won't pass unless you do this :-)", "Looks like something went wrong and GitHub adds a lot of diff. If force-pushing doesn't solve the issue, you might need to close this PR and open a fresh one." ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Added tokenizer tests for Rembert. I took reference from `test_tokenization_camembert.py` @SaulLu @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20350/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20350", "html_url": "https://github.com/huggingface/transformers/pull/20350", "diff_url": "https://github.com/huggingface/transformers/pull/20350.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20350.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20349
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20349/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20349/comments
https://api.github.com/repos/huggingface/transformers/issues/20349/events
https://github.com/huggingface/transformers/pull/20349
1,457,793,683
PR_kwDOCUB6oc5DWTT0
20,349
[Don't merge] debug nightly CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,675
1,669
COLLABORATOR
null
[Don't merge] debug nightly CI
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20349/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20349", "html_url": "https://github.com/huggingface/transformers/pull/20349", "diff_url": "https://github.com/huggingface/transformers/pull/20349.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20349.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20348
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20348/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20348/comments
https://api.github.com/repos/huggingface/transformers/issues/20348/events
https://github.com/huggingface/transformers/issues/20348
1,457,792,153
I_kwDOCUB6oc5W5CSZ
20,348
Ability to fine-tune whisper large on a GPU with 24 gb of ram
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @BirgerMoell - thanks for opening this feature request and for your interest in the Whisper model 🗣🇸🇪 I've made the code in your original post a drop-down for ease of reading.\r\n\r\nThe examples script [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py) has recently been updated to handle Whisper (https://github.com/huggingface/transformers/pull/19519), so you can use this as an end-to-end script for training your system! All you have to do is modify the example training config given in the README for your language of choice: [examples/pytorch/speech-recognition#whisper-model](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#whisper-model)\r\nAnd then execute the command! The rest will be taken care for you 🤗\r\n\r\nA couple of things:\r\n* They're not joking when they say 'large' for the [large checkpoint](https://huggingface.co/openai/whisper-large)! The model is 1.6 billion parameters, which is extremely big! Have you tried using the [medium checkpoint](https://huggingface.co/openai/whisper-medium)? It's about half the size, but gets comparable results to the large checkpoint under zero-shot conditions. It'll most likely surpass the large zero-shot results with fine-tuning. I've managed to train the medium checkpoint on a V100 16GB with a batch size of 32 (`per_device_batch_size=2` and `gradient_accumulation_steps=16`). There are some things we can try to make the model / training more memory efficient if you want to use the medium or large checkpoints! (see below)\r\n* The audio samples are **padded / truncated to 30s** before getting the log-Mel features. So setting the max length of audio samples to 2.5s will mean the **audio samples are padded to 30s**, and then the log-Mel features calculated. So the memory usage will be the same as using a max length of 30s! I explain this briefly in the blog: https://huggingface.co/blog/fine-tune-whisper#load-whisperfeatureextractor\r\n\r\nNow, assuming that you do want to train a bigger model than the 'small' checkpoint, you can either try the training script with the medium checkpoint and a `per_device_batch_size` of 2 or 4, **or** you can try using the large checkpoint with some memory hacks:\r\n1. The Adam optimiser requires two params (betas) for every model parameter. So the memory requirement of the optimiser is two times that of the model! You can switch to using an **8bit version** of the Adam optimiser from [bitsandbytes](https://github.com/TimDettmers/bitsandbytes). This will save you a lot of memory. You need to pip install bitsandbytes:\r\n```\r\npip install bitsandbytes\r\n```\r\nand then set `optim=\"adamw_bnb_8bit\"` when you instantiate the `Seq2SeqTrainingArguments`:\r\n```python\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./whisper-large-sv-test2\", # change to a repo name of your choice\r\n per_device_train_batch_size=1,\r\n gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size\r\n learning_rate=1e-5,\r\n warmup_steps=1,\r\n max_steps=10,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n group_by_length=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=1,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=5, # set to < max_steps\r\n eval_steps=5, # set to < max_steps\r\n logging_steps=1, # set to < max_steps\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n optim=\"adamw_bnb_8bit\", # set the optimiser!\r\n)\r\n```\r\nCheck out the docs for more details: (https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainingArguments.optim)\r\n\r\n2. You can use a different optimiser all together. Adam requires two optimiser params per one model param, but Adafactor uses only one. This time, set `optim=\"adafactor\"`. This is untested for fine-tuning Whisper, so I'm not sure how Adafactor performance compares to Adam.\r\n\r\nNeither 1 or 2 are tested, so I can't guarantee that they'll work, but they're easy approaches to try! One line code changes for each. I'd try 1 first then 2, as there shouldn't be a performance degradation trying 1, but there might be with 2.\r\n\r\nI'll reiterate again that the medium checkpoint is a good option for a device < 80GB memory!", "Thank you so much for taking the time to write explain this. I will definitely try it out. I will also try out training on the medium model size. ", "1. Using adamw_bnb_8bit I ran out of memory.\r\n2. I managed to get it to work with adafactor. I just did a test so I'm not sure how it affected the performance but I can try running it longer to see what happens. The eval_wer was 30.935251798561154 after just 5 epochs. Thanks for the help!\r\n\r\n", "Here is the trained model. I haven't evaluated it but the WER is a Wer: 30.9353 which is not so good considering the model size.\r\nhttps://huggingface.co/birgermoell/whisper-large-sv", "Hey @BirgerMoell - glad to see it worked! I would deffo give the medium model a run as well, has been quite performant in my experiments to date!\r\n\r\nFor the large model, it looks like you trained for only 0.08 epochs / 5 training steps:\r\n\r\n| Training Loss | Epoch | Step | Validation Loss | Wer |\r\n|:-------------:|:-----:|:----:|:---------------:|:-------:|\r\n| 4.5521 | 0.04 | 5 | 3.5048 | 48.2014 |\r\n| 1.8009 | 0.08 | 10 | 1.5259 | 30.9353 |\r\n\r\nI would definitely train for at least 2k training steps to get a reasonable WER. You can update the `Seq2SeqTrainingArguments` accordingly:\r\n\r\n```python\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./whisper-large-sv-test2\",\r\n per_device_train_batch_size=1,\r\n gradient_accumulation_steps=1,\r\n learning_rate=1e-5,\r\n warmup_steps=1,\r\n max_steps=2000, # set max steps to > 2k\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n group_by_length=True,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=1,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=500, \r\n eval_steps=500, \r\n logging_steps=50, \r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n optim=\"adafactor\",\r\n)\r\n```\r\n\r\nI would also **strongly** recommend using `gradient_accumulation_steps` to increase your effective batch size - a batch-size of 1 will likely give you noisy gradient updates. If `per_device_train_batch_size=1` is the biggest you can fit, you can try `gradient_accumulation_steps=16` or even `gradient_accumulation_steps=32`.\r\n\r\nI'm confident you'll get good results training for longer and with a bigger batch size!", "Hi @sanchit-gandhi,\r\n\r\nif instead of putting the Adam optimizer in the 8-bit version (your proposal 1), why not download Whisper in the 8-bit version?\r\n\r\nI did try with the following code but it did not work. Do you know why?\r\n\r\n```\r\n#!pip install accelerate\r\n#!pip install bitsandbytes\r\n#!pip install git+https://github.com/huggingface/transformers.git\r\n\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel_name = \"openai/whisper-medium\"\r\nmodel = WhisperForConditionalGeneration.from_pretrained(model_name, device_map=\"auto\", load_in_8bit=True)\r\n```\r\nError message:\r\n```\r\nDownloading: 100%\r\n1.97k/1.97k [00:00<00:00, 56.5kB/s]\r\nDownloading: 100%\r\n3.06G/3.06G [00:49<00:00, 76.6MB/s]\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[<ipython-input-6-58c82c91d282>](https://localhost:8080/#) in <module>\r\n 1 from transformers import WhisperForConditionalGeneration\r\n 2 \r\n----> 3 model = WhisperForConditionalGeneration.from_pretrained(model_name, device_map=\"auto\", load_in_8bit=True)\r\n\r\n[/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 2404 # Dispatch model with hooks on all devices if necessary\r\n 2405 if device_map is not None:\r\n-> 2406 dispatch_model(model, device_map=device_map, offload_dir=offload_folder, offload_index=offload_index)\r\n 2407 \r\n 2408 if output_loading_info:\r\n\r\nTypeError: dispatch_model() got an unexpected keyword argument 'offload_index'\r\n```", "cc @younesbelkada the 8bit master\r\n\r\nIn general though, the 8bit model will be slower. Hence the suggestion for changing the optimiser first.", "Can you try to install `accelerate` from the master branch? `pip install git+https://github.com/huggingface/accelerate.git@main` this should fix your issue and you'll be able to run whisper in 8bit", "Hi @younesbelkada,\r\n\r\nThanks for your answer but I'm still with an error. 
See code below and the error message:\r\n\r\n```\r\n#!pip install git+https://github.com/huggingface/accelerate.git@main\r\n#!pip install bitsandbytes\r\n#!pip install git+https://github.com/huggingface/transformers.git\r\n\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel_name = \"openai/whisper-medium\"\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel = WhisperForConditionalGeneration.from_pretrained(model_name, device_map=\"auto\", load_in_8bit=True, use_cache = False) \r\n\r\nfrom transformers import Seq2SeqTrainingArguments\r\n\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./whisper-medium-hi\", # change to a repo name of your choice\r\n per_device_train_batch_size=4,\r\n gradient_accumulation_steps=8, # increase by 2x for every 2x decrease in batch size\r\n learning_rate=1e-5,\r\n warmup_steps=500,\r\n max_steps=4000,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n group_by_length=True, \r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=8,\r\n predict_with_generate=True,\r\n generation_max_length=225,\r\n save_steps=1000,\r\n eval_steps=1000,\r\n logging_steps=25,\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n optim=\"adamw_bnb_8bit\", # set the optimiser\r\n)\r\n\r\nfrom transformers import Seq2SeqTrainer\r\n\r\ntrainer = Seq2SeqTrainer(\r\n args=training_args,\r\n model=model,\r\n train_dataset=common_voice[\"train\"],\r\n eval_dataset=common_voice[\"test\"],\r\n data_collator=data_collator,\r\n compute_metrics=compute_metrics,\r\n tokenizer=processor.feature_extractor,\r\n)\r\n```\r\nError message:\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-26-69786f5d74d5> in <module>\r\n 1 from transformers import Seq2SeqTrainer\r\n 2 \r\n----> 3 trainer = Seq2SeqTrainer(\r\n 
4 args=training_args,\r\n 5 model=model,\r\n\r\n2 frames\r\n/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py in to(self, *args, **kwargs)\r\n 1675 # Checks if the model has been loaded in 8-bit\r\n 1676 if getattr(self, \"is_loaded_in_8bit\", False):\r\n-> 1677 raise ValueError(\r\n 1678 \"`.to` is not supported for `8-bit` models. Please use the model as it is, since the\"\r\n 1679 \" model has already been set to the correct devices and casted to the correct `dtype`.\"\r\n\r\nValueError: `.to` is not supported for `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.\r\n```", "Hi @piegu \r\nThanks for your message - the error message is a bit misleading.\r\nActually it is not possible to pass an 8-bit model to a Trainer, please see the PR above this message :/ ", "cc @Vaibhavs10 ", "@younesbelkada DOes 8-bit model means both activation's and weights are in int8 ?\r\n\r\nMy goal to to generate whisper-tiny tflite model in int8 for both activation and weights\r\n```\r\nfrom transformers import WhisperForConditionalGeneration\r\n\r\nmodel_name = \"openai/whisper-tiny\"\r\nmodel = WhisperForConditionalGeneration.from_pretrained(model_name, device_map=\"auto\", load_in_8bit=True)\r\n```\r\n", "Hi @nyadla-sys \r\nThanks for the message\r\nCurrently it's the LLM.int8: https://arxiv.org/abs/2208.07339 algorithm that is implemented, specifically the weights are in int8 whereas the activations are in float16.\r\nThe script that you shared should work out of the box with the latest version of `transformers` & `accelerate`", "@younesbelkada, if activations are in float16/float32, the TFLite Whisper model works well. I am more interested in implementing an int8 version of the TFLite Whisper model. 
If you have any input, please share it with me\r\ncolab [notebook](https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/generate_tflite_from_whisper.ipynb) for this ", "here is my full int8 [notebook](https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/whisper_to_onnx_tflite_int8.ipynb) and [model](https://github.com/usefulsensors/openai-whisper/blob/main/models/whisper-int8.tflite) ,but am not really sure how to run inference and transcript the generated output by the model.\r\nWith this tiny.en int8 model size comes around ~36MB", "Hey @nyadla-sys - looks like you're using TFWhisperModel. To get logits over the vocabulary (and thus transcriptions), you'll need to use TFWhisperForConditionalGeneration (as explained here: https://github.com/huggingface/transformers/issues/19691#issuecomment-1412440369)" ]
1,669
1,676
1,669
NONE
null
### Feature request I've been trying to fine-tune whisper large on a GPU with 24gb of ram (both single GPU and multi GPU) and I run out of memory while training (with batch size set to 1 and max-length of audio set to 2.5 seconds). I made this a feature request not a bug report since I don't believe there is a problem with the code. ## Training script <details> <summary> Training code </summary> ```python from datasets import load_dataset, DatasetDict common_voice = DatasetDict() #common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train+validation", use_auth_token=True) #common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test", use_auth_token=True) common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train[:1%]+validation[:1%]", use_auth_token=True) common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test[:1%]", use_auth_token=True) print(common_voice) common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"]) print(common_voice) from transformers import WhisperFeatureExtractor feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-large") from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-large", language="swedish", task="transcribe") from transformers import WhisperProcessor processor = WhisperProcessor.from_pretrained("openai/whisper-large", language="swedish", task="transcribe") print(common_voice["train"][0]) from datasets import Audio common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000)) common_voice = common_voice.filter(lambda example: len(example["audio"]["array"]) < 2.5 * 16000, load_from_cache_file=False) print(common_voice["train"][0]) def prepare_dataset(batch): # load and resample audio data from 48 to 16kHz audio = 
batch["audio"] # compute log-Mel input features from input audio array batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0] # encode target text to label ids batch["labels"] = tokenizer(batch["sentence"]).input_ids return batch common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1) import torch from dataclasses import dataclass from typing import Any, Dict, List, Union @dataclass class DataCollatorSpeechSeq2SeqWithPadding: processor: Any def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lengths and need different padding methods # first treat the audio inputs by simply returning torch tensors input_features = [{"input_features": feature["input_features"]} for feature in features] batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt") # get the tokenized label sequences label_features = [{"input_ids": feature["labels"]} for feature in features] # pad the labels to max length labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt") # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) # if bos token is appended in previous tokenization step, # cut bos token here as it's append later anyways if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item(): labels = labels[:, 1:] batch["labels"] = labels return batch """Let's initialise the data collator we've just defined:""" data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor) import evaluate metric = evaluate.load("wer") def compute_metrics(pred): pred_ids = pred.predictions label_ids = pred.label_ids # replace -100 with the pad_token_id label_ids[label_ids == -100] = tokenizer.pad_token_id # we do not want to 
group tokens when computing the metrics pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True) wer = 100 * metric.compute(predictions=pred_str, references=label_str) return {"wer": wer} from transformers import WhisperForConditionalGeneration model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large") model.config.forced_decoder_ids = None model.config.suppress_tokens = [] from transformers import Seq2SeqTrainingArguments training_args = Seq2SeqTrainingArguments( output_dir="./whisper-large-sv-test2", # change to a repo name of your choice per_device_train_batch_size=1, gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size learning_rate=1e-5, warmup_steps=1, max_steps=10, gradient_checkpointing=True, fp16=True, group_by_length=True, evaluation_strategy="steps", per_device_eval_batch_size=1, predict_with_generate=True, generation_max_length=225, save_steps=5, # set to < max_steps eval_steps=5, # set to < max_steps logging_steps=1, # set to < max_steps report_to=["tensorboard"], load_best_model_at_end=True, metric_for_best_model="wer", greater_is_better=False, push_to_hub=True, ) from transformers import Seq2SeqTrainer trainer = Seq2SeqTrainer( args=training_args, model=model, train_dataset=common_voice["train"], eval_dataset=common_voice["test"], data_collator=data_collator, compute_metrics=compute_metrics, tokenizer=processor.feature_extractor, ) processor.save_pretrained(training_args.output_dir) trainer.train() kwargs = { "dataset_tags": "mozilla-foundation/common_voice_11_0", "dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset "language": "sv", "model_name": "whisper-large-sv-test2", # a 'pretty' name for our model "finetuned_from": "openai/whisper-large", "tasks": "automatic-speech-recognition", "tags": "hf-asr-leaderboard", } trainer.push_to_hub(**kwargs) ``` </details> Example of error <img 
width="1131" alt="Screenshot 2022-11-21 at 12 32 36" src="https://user-images.githubusercontent.com/1704131/203040642-9b7dabb8-e76c-4786-bd4b-48e706a70563.png"> ### Motivation It would be great to be able to fine-tune the large model on a 24GB GPU, since that would make it much easier to train the larger model. ### Your contribution I would love to help out with this issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20348/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20347
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20347/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20347/comments
https://api.github.com/repos/huggingface/transformers/issues/20347/events
https://github.com/huggingface/transformers/issues/20347
1,457,735,683
I_kwDOCUB6oc5W40gD
20,347
past_key_values not accepted in generate with GPTNeoX
{ "login": "ValeKnappich", "id": 39188710, "node_id": "MDQ6VXNlcjM5MTg4NzEw", "avatar_url": "https://avatars.githubusercontent.com/u/39188710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ValeKnappich", "html_url": "https://github.com/ValeKnappich", "followers_url": "https://api.github.com/users/ValeKnappich/followers", "following_url": "https://api.github.com/users/ValeKnappich/following{/other_user}", "gists_url": "https://api.github.com/users/ValeKnappich/gists{/gist_id}", "starred_url": "https://api.github.com/users/ValeKnappich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ValeKnappich/subscriptions", "organizations_url": "https://api.github.com/users/ValeKnappich/orgs", "repos_url": "https://api.github.com/users/ValeKnappich/repos", "events_url": "https://api.github.com/users/ValeKnappich/events{/privacy}", "received_events_url": "https://api.github.com/users/ValeKnappich/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "Hey @ValeKnappich 👋 \r\n\r\nYeah, `model_kwargs` needs to be added to `_validate_model_kwargs`. I'm on it :)", "Great, thanks :)", "@gante @sgugger \r\n\r\nThe kwarg validation was only a superficial issue. In fact, now it does not throw an error anymore, however, the `past_key_values` are still not passed on to the forward method. Looks like the `prepare_inputs_for_generation` method is at the core of the problem:\r\n\r\n``` \r\n def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **model_kwargs):\r\n input_shape = input_ids.shape\r\n\r\n # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly\r\n if attention_mask is None:\r\n attention_mask = input_ids.new_ones(input_shape)\r\n\r\n # cut decoder_input_ids if past is used\r\n if past and past[0] is not None:\r\n input_ids = input_ids[:, -1:]\r\n\r\n return {\"input_ids\": input_ids, \"attention_mask\": attention_mask, \"past_key_values\": past}\r\n```\r\n\r\nNote that model_kwargs is simply swallowed here. I will create a PR shortly", "@gante @ArthurZucker I think we should rename all occurrences of `\"past\"` to `\"past_key_values\"` in `prepare_inputs_for_generation` and deprecate \"past\" if necessary.\r\n\r\n`\"past\"` was simply the name for the past key values states before we renamed everything to `past_key_values`, so this is just a left-over.", "Agreed ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,673
1,673
CONTRIBUTOR
null
### System Info Python 3.7.13 transformers 4.22.2 ### Who can help? @LysandreJik @patrickvonplaten ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The `past_key_values` kwarg is not accepted when calling `model.generate(..., past_key_values=pkv)` on a `GPTNeoxForCausalLM`, even though the `model.forward` does accept this kwarg. It does seem to work fine with other model classes like GPT2. Minimal example to reproduce error: ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch import transformers model_id = "NinedayWang/PolyCoder-160M" # small model with GPTNeoXForCausalLM class model = AutoModelForCausalLM.from_pretrained(model_id) tok = AutoTokenizer.from_pretrained(model_id) assert isinstance(model, transformers.models.gpt_neox.modeling_gpt_neox.GPTNeoXForCausalLM) pkv = torch.rand( ( 1, # batch size 10, # number of tokens 2 * model.config.num_hidden_layers, model.config.num_attention_heads, model.config.hidden_size // model.config.num_attention_heads ) ) out = model.generate(**tok("Hello world"), past_key_values=pkv) ``` Error message: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/transformers/generation_utils.py", line 1146, in generate self._validate_model_kwargs(model_kwargs.copy()) File "/home/st/st_us-052400/st_st175337/conda/envs/thesis/lib/python3.7/site-packages/transformers/generation_utils.py", line 862, in _validate_model_kwargs f"The following `model_kwargs` are not used by the model: {unused_model_args} (note: typos in the" ValueError: The following 
`model_kwargs` are not used by the model: ['past_key_values'] (note: typos in the generate arguments will also show up in this list) ``` I checked the error location and located the bug ("transformers/generation_utils.py", line 862, in _validate_model_kwargs): ``` unused_model_args = [] model_args = set(inspect.signature(self.prepare_inputs_for_generation).parameters) # `kwargs` if often used to handle optional forward pass inputs like `attention_mask`. If # `prepare_inputs_for_generation` doesn't accept `kwargs`, then a stricter check can be made ;) if "kwargs" in model_args: model_args |= set(inspect.signature(self.forward).parameters) for key, value in model_kwargs.items(): if value is not None and key not in model_args: unused_model_args.append(key) if unused_model_args: raise ValueError( f"The following `model_kwargs` are not used by the model: {unused_model_args} (note: typos in the" " generate arguments will also show up in this list)" ) ``` It first checks the args of `prepare_inputs_for_generation` and only adds the args of `forward` to the accepted list if `"kwargs"` is in the args of `prepare_inputs_for_generation`. However, contrary to GPT2, it only contains `model_kwargs` instead of `kwargs` for GPTNeox. So either the GPTNeoX class should be adapted, or the _validate_model_kwargs method in generation_utils.py. ### Expected behavior `generate` should be able to pass along all valid `model_kwargs`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20347/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20346
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20346/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20346/comments
https://api.github.com/repos/huggingface/transformers/issues/20346/events
https://github.com/huggingface/transformers/pull/20346
1,457,724,377
PR_kwDOCUB6oc5DWEOA
20,346
[Switch Transformers] Fix failing slow test
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks very much for the explanation! This should have been fixed now 💪 " ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? This PR should fix the currently failing slow test for `SwitchTransformers`. Putting the model on GPU and running the inference on GPU should fix the test. I have run the test on PyTorch 1.12 (same as the daily CI runner), but I need to verify it on the daily CI runner itself before merging cc @ydshieh @ArthurZucker @sgugger EDIT: it passes on the CI daily runner, marking it as ready for review
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20346/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20346", "html_url": "https://github.com/huggingface/transformers/pull/20346", "diff_url": "https://github.com/huggingface/transformers/pull/20346.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20346.patch", "merged_at": 1669041410000 }
https://api.github.com/repos/huggingface/transformers/issues/20345
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20345/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20345/comments
https://api.github.com/repos/huggingface/transformers/issues/20345/events
https://github.com/huggingface/transformers/issues/20345
1,457,671,352
I_kwDOCUB6oc5W4ky4
20,345
Loading transformer on AWS Lambda throws OMP errno 38
{ "login": "djdevdev", "id": 118698817, "node_id": "U_kgDOBxMzQQ", "avatar_url": "https://avatars.githubusercontent.com/u/118698817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/djdevdev", "html_url": "https://github.com/djdevdev", "followers_url": "https://api.github.com/users/djdevdev/followers", "following_url": "https://api.github.com/users/djdevdev/following{/other_user}", "gists_url": "https://api.github.com/users/djdevdev/gists{/gist_id}", "starred_url": "https://api.github.com/users/djdevdev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/djdevdev/subscriptions", "organizations_url": "https://api.github.com/users/djdevdev/orgs", "repos_url": "https://api.github.com/users/djdevdev/repos", "events_url": "https://api.github.com/users/djdevdev/events{/privacy}", "received_events_url": "https://api.github.com/users/djdevdev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There is little we can do without knowing which specific code in Transformers you are running. Loading a model with Transformers in general does not use Python multiprocessing for instance, so it's a bit hard for us to know what you want us to fix without a clear reproducer (using Transformers only and not a third-party library).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,672
1,672
NONE
null
### System Info Apologies if this is the wrong place to post but we're looking for pointers on tracking down what appears to be a transformers-related error. We have trained a Spacy 3.3.1 transformer textcat which we're deploying as an AWS Python 3.9 Docker image to AWS Lambda. The model loads and infers correctly on the Linux development host (both using a test Python script and also using AWS SAM local), but fails in the Lambda runtime with OpenMP runtime error no 38 (see Lambda error output below). A web search suggests this error occurs because Lambda doesn't support Python multiprocessing, specifically it doesn't mount /dev/shm, leading to the error (see links below). The Spacy team have confirmed they do not directly invoke multiprocessing but that transformers does (see https://github.com/explosion/spaCy/discussions/11836#discussioncomment-4193368). Further testing revealed that loading a blank Spacy model inside the Lambda runtime works perfectly, but loading the transformer on Python 3.7 gives the error, as does the base transformer model spacy.load("en_core_web_trf"). We conclude that transformers is using multiprocessing incompatible with AWS Lambda. A solution could be to disable transformer multiprocessing when loading the Spacy model. Any suggestions how we can disable OpenMP multiprocessing through a runtime setting? Or as a last resort we may need to override multiprocessing.Pool/Queue with multiprocessing.Process/Pipe which apparently do work on Lamda (suggested in links below). 
**Lambda error output** ``` OMP: Error #179: Function Can't open SHM2 failed: OMP: System error #38: Function not implemented OMP: Error #179: Function Can't open SHM2 failed: OMP: System error #38: Function not implemented START RequestId: XYZ Version: $LATEST RequestId: XYZ Error: Runtime exited with error: signal: aborted Runtime.ExitError END RequestId: XYZ REPORT RequestId: XYZ Duration: 547.37 ms Billed Duration: 548 ms Memory Size: 3008 MB Max Memory Used: 142 MB ``` **Relevant links** https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/ https://spacy.io/usage/processing-pipelines#multiprocessing https://forum.opennmt.net/t/unable-to-create-ctranslate2-translator-in-aws-lambda/4922 https://stackoverflow.com/questions/34005930/multiprocessing-semlock-is-not-implemented-when-running-on-aws-lambda ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Create conda environment.yml file with Spacy 3.3.1 (which installs transformers=4.18.0 as a dependency) ``` channels: - defaults dependencies: - python=3.9.15 - spacy-transformers=1.1.5 - spacy-model-en_core_web_sm=3.3.0 - spacy-model-en_core_web_trf=3.3.0 ``` 2. Create Dockerfile (relevant extract shown below) ``` FROM public.ecr.aws/lambda/python:3.9 RUN wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh RUN conda env update -n base -f environment.yml ``` 3. Create Lambda Python handler ``` import spacy def lambda_handler(event, context): # Works in AWS Lambda Python 3.9 runtime nlp = spacy.load("en_core_web_sm") # Throws OMP errno 38 in AWS Lambda Python 3.9 runtime nlp = spacy.load("en_core_web_trf") return { "statusCode": 200 } ``` ### Expected behavior Lambda execution completes successfully and returns code 200.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20345/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20344/comments
https://api.github.com/repos/huggingface/transformers/issues/20344/events
https://github.com/huggingface/transformers/pull/20344
1,457,629,894
PR_kwDOCUB6oc5DVvpE
20,344
[Maskformer] Add MaskFormerSwin backbone
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,669
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? This is part 2 of 3 of the big #20204 PR. This PR adds MaskFormerSwin to the AutoBackbone API. This ensures that the model can be used as backbone with the MaskFormer framework. As it makes more sense to move MaskFormerSwin to its own modeling files, this PR implements it in a separate `modeling_maskformer_swin.py` file, along with a configuration implemented in` configuration_maskformer_swin.py`. To do: - [x] wait for #20407 to be merged to make the backbone get tested by the common tests, add support for hidden states and attentions
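As a sketch of how the added backbone plugs into the API, the snippet below builds a tiny, randomly initialized config so nothing is downloaded; the sizes are illustrative, not the real MaskFormer backbone's:

```python
import torch
from transformers import MaskFormerSwinBackbone, MaskFormerSwinConfig

# Tiny 2-stage config for illustration only (the real checkpoints use
# larger embed_dim/depths). out_features selects which stages the
# backbone returns as feature maps.
config = MaskFormerSwinConfig(
    embed_dim=8, depths=[1, 1], num_heads=[1, 2], out_features=["stage1", "stage2"]
)
backbone = MaskFormerSwinBackbone(config)

# One feature map per requested stage, in (batch, channels, height, width).
outputs = backbone(torch.randn(1, 3, 224, 224))
```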
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20344/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20344", "html_url": "https://github.com/huggingface/transformers/pull/20344", "diff_url": "https://github.com/huggingface/transformers/pull/20344.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20344.patch", "merged_at": 1669664030000 }
https://api.github.com/repos/huggingface/transformers/issues/20343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20343/comments
https://api.github.com/repos/huggingface/transformers/issues/20343/events
https://github.com/huggingface/transformers/issues/20343
1,457,394,029
I_kwDOCUB6oc5W3hFt
20,343
DataCollator that allows strings in datasets and untouch the strings
{ "login": "ShengdingHu", "id": 32740627, "node_id": "MDQ6VXNlcjMyNzQwNjI3", "avatar_url": "https://avatars.githubusercontent.com/u/32740627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShengdingHu", "html_url": "https://github.com/ShengdingHu", "followers_url": "https://api.github.com/users/ShengdingHu/followers", "following_url": "https://api.github.com/users/ShengdingHu/following{/other_user}", "gists_url": "https://api.github.com/users/ShengdingHu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShengdingHu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShengdingHu/subscriptions", "organizations_url": "https://api.github.com/users/ShengdingHu/orgs", "repos_url": "https://api.github.com/users/ShengdingHu/repos", "events_url": "https://api.github.com/users/ShengdingHu/events{/privacy}", "received_events_url": "https://api.github.com/users/ShengdingHu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Data collators inside Transformers exist for the Trainer, which in turn send all inputs to the model directly. The model won't be able to accept the raw strings, so the use case is not obvious.\r\n\r\nTransformers is a library of models at its core, so while we provide some functionality to train/fine-tune them easily, our goal isn't to be comprehensive :-)", "But it's very easy to pop those raw strings out of the batch before passing into the model. If the raw strings are not passed into the batch, it will be hard to align the raw string to tokenized data because the data is randomly shuffled.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Is there are reason for \"labels_ids\" to be constrained to numeric values? (torch.long and torch.float). I'm working on a use case where the labels are represented as alphanumeric and it would be nice to preserve the original ID as it traces back to the original source of data. You can always convert alphanumeric values to numeric, but then you have to keep a dictionary offline to map it back. The other workaround would be to build our custom data collator as suggested in the original post but why write another function for this small change. I agree that it doesn't make sense to allow strings for a model that won't accept them. Can we make an exception for the \"labels_ids\"?" ]
1,669
1,685
1,672
NONE
null
### Feature request A default data_collator that allows strings to pass through and leaves the strings untouched. ### Motivation When I use `remove_unused_columns=True`, I can't get the raw input sequences in string format in my batch. When I use `remove_unused_columns=False`, I can't use the default data collator, because it raises the following error. ``` ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`translation` in this case) have excessive nesting (inputs type `list` where type `int` is expected). ``` I constantly run into this issue and I have to implement my own data collator each time. I wonder if there is/has been an official implementation for this feature. ### Your contribution I can submit a PR, but I am not sure if the feature already exists somewhere that I don't know about.
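A minimal sketch of the kind of collator the request describes (my own helper name, not an official transformers API): pop string-valued columns out of each feature, run the default collator on the rest, then re-attach the raw strings to the batch untouched.

```python
from transformers import default_data_collator


def collate_keep_strings(features):
    # Columns whose first value is a str are carried through as-is;
    # everything else goes through the default (tensorizing) collator.
    string_keys = [k for k, v in features[0].items() if isinstance(v, str)]
    strings = {k: [f.pop(k) for f in features] for k in string_keys}
    batch = default_data_collator(features)
    batch.update(strings)  # raw strings ride along, untouched
    return batch


batch = collate_keep_strings(
    [
        {"input_ids": [1, 2, 3], "translation": "bonjour"},
        {"input_ids": [4, 5, 6], "translation": "monde"},
    ]
)
```

As noted in the thread, the strings then have to be popped back out of the batch before it reaches the model, since models won't accept them.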
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20343/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20342/comments
https://api.github.com/repos/huggingface/transformers/issues/20342/events
https://github.com/huggingface/transformers/issues/20342
1,457,339,386
I_kwDOCUB6oc5W3Tv6
20,342
BrokenPipe Error while training GPT-2 from scratch with run_clm.py
{ "login": "sanprit", "id": 109079338, "node_id": "U_kgDOBoBrKg", "avatar_url": "https://avatars.githubusercontent.com/u/109079338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanprit", "html_url": "https://github.com/sanprit", "followers_url": "https://api.github.com/users/sanprit/followers", "following_url": "https://api.github.com/users/sanprit/following{/other_user}", "gists_url": "https://api.github.com/users/sanprit/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanprit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanprit/subscriptions", "organizations_url": "https://api.github.com/users/sanprit/orgs", "repos_url": "https://api.github.com/users/sanprit/repos", "events_url": "https://api.github.com/users/sanprit/events{/privacy}", "received_events_url": "https://api.github.com/users/sanprit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @sanprit,\r\n\r\nWhat exactly is `2_run_clm.py`? Was this a typo and supposed to be `run_clm.py`? Also we would need to be able to re-run this command to reproduce the error => how can we access `train_file.txt`?", "2_run_clm.py is run_clm.py only with some logging comments added in it. Regarding train_file it is too large for me to share, can you please sun it on random 230 GB data", "Now, I have diagnosed the problem, \"why I was getting broken pipe error\". While tokenising and creating groups of the texts, in midway it requires huge disk size to store the cache, Although It deletes major part in the end. When working/testing with small dataset, I was checking only the final cache size, and I was extrapolating that size for big data. But keeping disk size by extrapolating the final cache size is not true because in the midway it creates very high memory cache.\r\n\r\none observation: Even for 20MB of text data, final cache size is only 138GB but in midway of tokenising and grouping, it forms 500GB cache, so we will need minimum 500GB disk. else broken pipe error will occur", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,669
1,673
1,673
NONE
null
### System Info Transformers version: v4.24.0 Python version: 3.9.13 system: EC2 with g5.12xlarge, Ubuntu, Pytorch (1.12.1) Data size : ~250GB ### Who can help? @patil-suraj, @patrickvonplaten, @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `python3 -m torch.distributed.launch --nproc_per_node 4 transformers/examples/pytorch/language-modeling/2_run_clm.py --tokenizer_name hf_tokenizer_model --model_type gpt2 --train_file train_file.txt --validation_split_percentage 10 --per_device_train_batch_size 32 --per_device_eval_batch_size 8 --gradient_accumulation_steps 8 --eval_accumulation_steps 8 --block_size 512 --do_train --do_eval --fp16 --output_dir MODELS --logging_dir logs --cache_dir cache --overwrite_output_dir yes --overwrite_cache yes --num_train_epochs 10 --no_cuda False --learning_rate 1e-5 --save_on_each_node False --seed 42 --disable_tqdm False --config_overrides="n_layer=12,vocab_size=96000,eos_token_id=0,bos_token_id=0" --auto_find_batch_size True --logging_strategy steps --save_strategy steps --evaluation_strategy steps --logging_steps 25000 --save_steps 25000 --eval_steps 25000 --preprocessing_num_workers 24 --dataloader_num_workers 24 --save_total_limit 15` ``` During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/pool.py", line 136, in worker put((job, i, (False, wrapped))) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/queues.py", line 380, in put 
self._writer.send_bytes(obj) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 208, in send_bytes self._send_bytes(m[offset:offset + size]) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 419, in _send_bytes self._send(header + buf) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 376, in _send n = write(self._handle, buf) BrokenPipeError: [Errno 32] Broken pipe Running tokenizer on dataset #2: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉| 38169/38170 [3:20:47<00:00, 3.17ba/s] Process ForkPoolWorker-3: Traceback (most recent call last):00%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉| 38169/38170 [3:20:47<00:00, 5.32ba/s] File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/pool.py", line 131, in worker put((job, i, result)) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/queues.py", line 380, in put self._writer.send_bytes(obj) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 208, in send_bytes self._send_bytes(m[offset:offset + size]) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 419, in _send_bytes self._send(header + buf) File "/opt/conda/envs/pytorch/lib/python3.9/site-packages/multiprocess/connection.py", line 376, in _send n = write(self._handle, buf) BrokenPipeError: [Errno 32] Broken pipe ``` This happend when tokenization was just about to complete ### Expected behavior After training, Grouping of texts (tokens) should start
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20342/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20341/comments
https://api.github.com/repos/huggingface/transformers/issues/20341/events
https://github.com/huggingface/transformers/pull/20341
1,457,129,430
PR_kwDOCUB6oc5DUCxX
20,341
Add `accelerate` support for LongT5 models
{ "login": "pszemraj", "id": 74869040, "node_id": "MDQ6VXNlcjc0ODY5MDQw", "avatar_url": "https://avatars.githubusercontent.com/u/74869040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pszemraj", "html_url": "https://github.com/pszemraj", "followers_url": "https://api.github.com/users/pszemraj/followers", "following_url": "https://api.github.com/users/pszemraj/following{/other_user}", "gists_url": "https://api.github.com/users/pszemraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/pszemraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pszemraj/subscriptions", "organizations_url": "https://api.github.com/users/pszemraj/orgs", "repos_url": "https://api.github.com/users/pszemraj/repos", "events_url": "https://api.github.com/users/pszemraj/events{/privacy}", "received_events_url": "https://api.github.com/users/pszemraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @KMFODA for inputs on tests & more :crossed_fingers: ", "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the feedback & good catch on the Colab! I've updated the notebook - will run and resolve the slow tests/accelerate items later today/tomorrow and revert back 👌", "Hey @pszemraj ! \r\nHow is the integration going 💪 ? Let me know if I can help at some point to debug / make the tests pass ;) !", "Hi @pszemraj !\r\nIs it ok if I try to take over the PR? this addition could be very nice to the lib! Let me know what do you think :) ", "Hey! let me give it a stab today (I was sick for a week) if you don't see anything by tomorrow, feel free to take it home!\r\n\r\nEmail | ***@***.*** \r\nOn 12/6/2022 8:54:39 AM, Younes Belkada ***@***.***> wrote:\r\nHi @pszemraj [https://github.com/pszemraj] !\r\nIs it ok if I try to take over the PR? this addition could be very nice to the lib! Let me know what do you think :)\r\n—\r\nReply to this email directly, view it on GitHub [https://github.com/huggingface/transformers/pull/20341#issuecomment-1338921676], or unsubscribe [https://github.com/notifications/unsubscribe-auth/AR3GSMFN4MP444ZC72B4EN3WL3WL7ANCNFSM6AAAAAASGEAOLE].\r\nYou are receiving this because you were mentioned.Message ID: ***@***.***>\r\n[31e14b4b-28c3-4714-8081-803278962750]", "@younesbelkada hey - was trying to get the tests to pass and evaluate further but unfortunately the machine I _do_ have access to a GPU on and can work this was running into some install issues with the `dev` dependencies for `pytest` etc\n\nIf you're willing to finish this, that would probably be easiest 😅 I'll add the line for accelerate as you suggested and rebase as per the contrib guidelines, feel free to take whatever you find useful :) ", "Thanks a lot @pszemraj for your great efforts, will have a look ASAP ;) this is definitely in my TODO list", "thanks so much! 
I see you pushed so I will leave you to it (but feel free to let me know if questions or you need me to change anything on my end)\r\n\r\nthen we can get [this bad boi](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) usable on free Colab runtimes :) ", "Thanks for taking it home @younesbelkada! and thanks for the review @sgugger. Happy to help :) " ]
1,668
1,670
1,670
CONTRIBUTOR
null
Signed-off-by: peter szemraj <peterszemraj@gmail.com> # What does this PR do? This PR adds `accelerate` support for the longT5 models (i.e., make it possible to use `device_map="auto"`), so these models can be loaded in 8bit using load_in_8bit=True. This helps enable inference with trained/fine-tuned SoTA long summarization models using limited memory :relaxed: Took inspiration from reviewing similar PRs for other models: #19912 and #19927 cc @sgugger ## test results I made [a Colab notebook](https://colab.research.google.com/gist/pszemraj/6ea0a3046452fc51061f4bde2df0aa77/testing-accelerate-long-t5-tglobal-base-16384-book-summary.ipynb) that clones the branch from my fork to demo the `load_in_8bit=True` working. Everything else is the same for comparison purposes (_except the function that says the model size_) as [the fp32/standard notebook](https://colab.research.google.com/gist/pszemraj/d9a0495861776168fd5cdcd7731bc4ee/example-long-t5-tglobal-base-16384-book-summary.ipynb) listed on [my fine-tuned model card](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary). I also ran the tests for `longT5` locally: ```bash $ python -m pytest -n auto --dist=loadfile -s -v tests/models/longt5/test_modeling_longt5.py ( ... many things here ...) =================================================== 196 passed, 58 skipped, 118 warnings in 30.49s =================================================== ```
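A sketch of the usage this PR enables, wrapped in a helper so nothing is downloaded at import time. The 8-bit path additionally requires `accelerate`, `bitsandbytes`, and a CUDA GPU at runtime; the model id is the fine-tuned checkpoint linked above.

```python
from transformers import LongT5ForConditionalGeneration


def load_long_t5_8bit(model_id="pszemraj/long-t5-tglobal-base-16384-book-summary"):
    # device_map="auto" dispatches weights across available devices via
    # accelerate; load_in_8bit=True quantizes linear layers with bitsandbytes.
    return LongT5ForConditionalGeneration.from_pretrained(
        model_id, device_map="auto", load_in_8bit=True
    )
```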
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20341/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20341/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20341", "html_url": "https://github.com/huggingface/transformers/pull/20341", "diff_url": "https://github.com/huggingface/transformers/pull/20341.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20341.patch", "merged_at": 1670855153000 }
https://api.github.com/repos/huggingface/transformers/issues/20340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20340/comments
https://api.github.com/repos/huggingface/transformers/issues/20340/events
https://github.com/huggingface/transformers/pull/20340
1,457,119,611
PR_kwDOCUB6oc5DUAzg
20,340
[FLAX] Add dtype to embedding for bert/bart/opt/t5
{ "login": "merrymercy", "id": 15100009, "node_id": "MDQ6VXNlcjE1MTAwMDA5", "avatar_url": "https://avatars.githubusercontent.com/u/15100009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merrymercy", "html_url": "https://github.com/merrymercy", "followers_url": "https://api.github.com/users/merrymercy/followers", "following_url": "https://api.github.com/users/merrymercy/following{/other_user}", "gists_url": "https://api.github.com/users/merrymercy/gists{/gist_id}", "starred_url": "https://api.github.com/users/merrymercy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merrymercy/subscriptions", "organizations_url": "https://api.github.com/users/merrymercy/orgs", "repos_url": "https://api.github.com/users/merrymercy/repos", "events_url": "https://api.github.com/users/merrymercy/events{/privacy}", "received_events_url": "https://api.github.com/users/merrymercy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @sanchit-gandhi @younesbelkada @ArthurZucker here", "Can we merge this @ArthurZucker ? How to check whether the slow tests pass?", "Thanks @merrymercy !\r\nYou can run slow tests for `t5` for example by running: `RUN_SLOW=1 pytest tests/models/t5/test_modeling_flax_t5.py` - regarding what has been suggested by @sanchit-gandhi , you can just add a new test that initializes a model let's say in `bf16` and tests if the generated sequence is the same than the one that is expected:\r\n```\r\n @slow\r\n def test_small_generation_bf16(self):\r\n model = FlaxT5ForConditionalGeneration.from_pretrained(\"t5-small\", dtype=jnp.bfloat16)\r\n EXPECTED_OUTPUT = \"XXXX\"\r\n self.assertTrue(model.params[\"shared\"][\"embedding\"].dtype == jnp.bfloat16)\r\n model.config.max_length = 8\r\n model.config.num_beams = 1\r\n model.config.do_sample = False\r\n tokenizer = T5Tokenizer.from_pretrained(\"t5-small\")\r\n\r\n input_ids = tokenizer(\"summarize: Hello there\", return_tensors=\"np\").input_ids\r\n\r\n sequences = model.generate(input_ids).sequences\r\n\r\n output_str = tokenizer.batch_decode(sequences, skip_special_tokens=True)[0]\r\n self.assertTrue(output_str == EXPECTED_OUTPUT)\r\n\r\n```", "A bf16 test case is added as suggested by @younesbelkada . I checked that slow tests are also passed because most slow tests are in fp32 and this PR does not change the behavior of any fp32 tests.\r\n\r\nHowever, I do notice some slow tests fail on my machine (V100, jax=0.3.25, flax=0.6.2). I think this is not related to my PR, because they fail even with the transformers main branch and transformers v4.24.0. Fixing them is outside the scope of this PR. I can confirm that all tests passed on the main branch can still be passed after my PR.\r\n\r\nSince we get two approvals and the test case is added, can we merge it now?", "Thanks again for your contribution!" ]
1,668
1,669
1,669
CONTRIBUTOR
null
## What does this PR do? This PR is the follow-up of #18462. It adds dtype to `nn.Embed` for more common Flax models, including bert, bart, opt, t5, and their copies. This dtype is necessary for mixed precision training. ## Who can review? @patrickvonplaten, @LysandreJik, @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20340", "html_url": "https://github.com/huggingface/transformers/pull/20340", "diff_url": "https://github.com/huggingface/transformers/pull/20340.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20340.patch", "merged_at": 1669648903000 }
https://api.github.com/repos/huggingface/transformers/issues/20339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20339/comments
https://api.github.com/repos/huggingface/transformers/issues/20339/events
https://github.com/huggingface/transformers/pull/20339
1,457,053,936
PR_kwDOCUB6oc5DTzey
20,339
Add Spanish translation of pr_checks.mdx
{ "login": "donelianc", "id": 7807897, "node_id": "MDQ6VXNlcjc4MDc4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7807897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donelianc", "html_url": "https://github.com/donelianc", "followers_url": "https://api.github.com/users/donelianc/followers", "following_url": "https://api.github.com/users/donelianc/following{/other_user}", "gists_url": "https://api.github.com/users/donelianc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donelianc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donelianc/subscriptions", "organizations_url": "https://api.github.com/users/donelianc/orgs", "repos_url": "https://api.github.com/users/donelianc/repos", "events_url": "https://api.github.com/users/donelianc/events{/privacy}", "received_events_url": "https://api.github.com/users/donelianc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @osanseviero. Can you help review this PR, please? ", "Only minor typos were spotted during the first review. I already addressed them with my latest commit. \r\n@sgugger, if you agree, we can skip a second review from [osanseviero](https://github.com/osanseviero) and merge this PR. \r\n\r\nThanks!" ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? Add the Spanish translation for `pr_checks.mdx` as part of the #15947 issue. Changes include the Spanish version of the original document and the updated `_toctree.yml` file. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? _Task assignment [here](https://github.com/huggingface/transformers/issues/15947#issuecomment-1321245149)_. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20339/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20339", "html_url": "https://github.com/huggingface/transformers/pull/20339", "diff_url": "https://github.com/huggingface/transformers/pull/20339.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20339.patch", "merged_at": 1669233989000 }
https://api.github.com/repos/huggingface/transformers/issues/20338
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20338/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20338/comments
https://api.github.com/repos/huggingface/transformers/issues/20338/events
https://github.com/huggingface/transformers/issues/20338
1,457,036,937
I_kwDOCUB6oc5W2J6J
20,338
TrOCR Encoder &Decoder Replacement
{ "login": "Mohammed20201991", "id": 59222637, "node_id": "MDQ6VXNlcjU5MjIyNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mohammed20201991", "html_url": "https://github.com/Mohammed20201991", "followers_url": "https://api.github.com/users/Mohammed20201991/followers", "following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}", "gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions", "organizations_url": "https://api.github.com/users/Mohammed20201991/orgs", "repos_url": "https://api.github.com/users/Mohammed20201991/repos", "events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}", "received_events_url": "https://api.github.com/users/Mohammed20201991/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger #Bert #Good_new_ssue", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@NielsRogge ping on this one.", "> can I get the whole datasets for IAM as the format (processed) used during fine-tuning because I only can see the test set\r\n\r\nThe TrOCR authors have released the dataset [here](https://github.com/microsoft/unilm/tree/master/trocr), so I'd recommend taking a look at the unilm issues and perhaps ask the TrOCR authors to release it\r\n\r\n > I want to replace the Encoder with Vit or Swin or Diet and the Decoder with Bert or GPT-2 or another beast decoder In the original TrOCR or at least modify the decoder part\r\n\r\nNote that if you replace the encoder/decoder, you'll need to fine-tune the model on additional (image, text) pairs. For that I'd recommend checking out [this thread](https://github.com/huggingface/transformers/issues/15823). Also check my [demo notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/TrOCR) for tutorials on fine-tuning TrOCR.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,674
1,674
NONE
null
Hi @NielsRogge, thanks for the great TrOCR tutorial. 1- Can I get the whole IAM dataset in the (processed) format used during fine-tuning? I can only see the test set. 2- I want to replace the encoder with **ViT, Swin, or DeiT** and the decoder with **BERT**, **GPT-2**, or another better decoder in the original TrOCR, or at least modify the decoder part, but I got a huge CER = 76%. Can you please suggest what might reach a better result? Later on I want to fine-tune for a specific language. ```python
import fun

# modifying the tokenizer
processor.tokenizer = fun.AutoTokenizer.from_pretrained('bert-base-uncased')

def trocr_model_config(model):
    # set decoder config to causal lm
    model.config.decoder.is_decoder = True
    model.config.decoder.add_cross_attention = True
    # set special tokens used for creating the decoder_input_ids from the labels
    model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
    assert model.config.decoder_start_token_id == processor.tokenizer.cls_token_id
    model.config.pad_token_id = processor.tokenizer.pad_token_id
    # make sure vocab size is set correctly
    model.config.vocab_size = model.config.decoder.vocab_size
    # set beam search parameters
    model.config.eos_token_id = processor.tokenizer.sep_token_id
    model.config.max_length = 128
    model.config.early_stopping = True
    model.config.no_repeat_ngram_size = 3
    model.config.length_penalty = 2.0
    model.config.num_beams = 4
    return model

def main():
    df = load_dataset()
    print(df.head(4))
    train_dataset, eval_dataset = create_datasets(df)
    print("Number of training examples:", len(train_dataset))
    print("Number of validation examples:", len(eval_dataset))
    encoding = train_dataset[0]
    for k, v in encoding.items():
        print(k, v.shape)
    labels = encoding['labels']
    labels[labels == -100] = processor.tokenizer.pad_token_id
    label_str = processor.decode(labels, skip_special_tokens=True)
    print(label_str)
    model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
        'google/vit-base-patch16-384', 'bert-base-uncased'
    )
    # setting model configuration
    configured_model = trocr_model_config(model)
    training_args = Seq2SeqTrainingArguments(
        predict_with_generate=True,
        evaluation_strategy="steps",
        learning_rate=2e-5,
        num_train_epochs=12,
        per_device_train_batch_size=16,
        per_device_eval_batch_size=16,
        fp16=True,
        output_dir=f'./models/vit_bert_IAM{datetime.now().strftime("%Y%m%d%H%M%S")}',
        logging_steps=100,
        save_steps=1000,
        eval_steps=500,
    )
    # instantiate trainer
    trainer = Seq2SeqTrainer(
        model=configured_model,
        tokenizer=processor.feature_extractor,
        args=training_args,
        compute_metrics=compute_metrics,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        data_collator=default_data_collator,
    )
    trainer.train()

if __name__ == '__main__':
    main()
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20338/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/20338/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20337
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20337/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20337/comments
https://api.github.com/repos/huggingface/transformers/issues/20337/events
https://github.com/huggingface/transformers/issues/20337
1,456,943,380
I_kwDOCUB6oc5W1zEU
20,337
perpexity
{ "login": "emarcus255", "id": 28389723, "node_id": "MDQ6VXNlcjI4Mzg5NzIz", "avatar_url": "https://avatars.githubusercontent.com/u/28389723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emarcus255", "html_url": "https://github.com/emarcus255", "followers_url": "https://api.github.com/users/emarcus255/followers", "following_url": "https://api.github.com/users/emarcus255/following{/other_user}", "gists_url": "https://api.github.com/users/emarcus255/gists{/gist_id}", "starred_url": "https://api.github.com/users/emarcus255/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emarcus255/subscriptions", "organizations_url": "https://api.github.com/users/emarcus255/orgs", "repos_url": "https://api.github.com/users/emarcus255/repos", "events_url": "https://api.github.com/users/emarcus255/events{/privacy}", "received_events_url": "https://api.github.com/users/emarcus255/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,668
1,668
1,668
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20337/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20336
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20336/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20336/comments
https://api.github.com/repos/huggingface/transformers/issues/20336/events
https://github.com/huggingface/transformers/pull/20336
1,456,818,473
PR_kwDOCUB6oc5DTFcy
20,336
Fix issue 19904
{ "login": "raghavanone", "id": 115454562, "node_id": "U_kgDOBuGyYg", "avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raghavanone", "html_url": "https://github.com/raghavanone", "followers_url": "https://api.github.com/users/raghavanone/followers", "following_url": "https://api.github.com/users/raghavanone/following{/other_user}", "gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}", "starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions", "organizations_url": "https://api.github.com/users/raghavanone/orgs", "repos_url": "https://api.github.com/users/raghavanone/repos", "events_url": "https://api.github.com/users/raghavanone/events{/privacy}", "received_events_url": "https://api.github.com/users/raghavanone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Please tag the right people to review this PR." ]
1,668
1,669
1,669
CONTRIBUTOR
null
Fixes issue #19904
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20336/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20336", "html_url": "https://github.com/huggingface/transformers/pull/20336", "diff_url": "https://github.com/huggingface/transformers/pull/20336.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20336.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20335
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20335/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20335/comments
https://api.github.com/repos/huggingface/transformers/issues/20335/events
https://github.com/huggingface/transformers/issues/20335
1,456,730,518
I_kwDOCUB6oc5W0_GW
20,335
Add MobileNetV1
{ "login": "atturaioe", "id": 76523524, "node_id": "MDQ6VXNlcjc2NTIzNTI0", "avatar_url": "https://avatars.githubusercontent.com/u/76523524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atturaioe", "html_url": "https://github.com/atturaioe", "followers_url": "https://api.github.com/users/atturaioe/followers", "following_url": "https://api.github.com/users/atturaioe/following{/other_user}", "gists_url": "https://api.github.com/users/atturaioe/gists{/gist_id}", "starred_url": "https://api.github.com/users/atturaioe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atturaioe/subscriptions", "organizations_url": "https://api.github.com/users/atturaioe/orgs", "repos_url": "https://api.github.com/users/atturaioe/repos", "events_url": "https://api.github.com/users/atturaioe/events{/privacy}", "received_events_url": "https://api.github.com/users/atturaioe/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "There's an open PR for it that is (almost) ready to merge. I just need to rebase it because it conflicts with MobileNetV2 that was recently added. I'll probably get around to this later this week.\r\n\r\nSee the PR: https://github.com/huggingface/transformers/pull/17799", "Oh, thanks. Closing this issue.", "Just a FYI: it has been merged now with the main branch of Transformers. :-) " ]
1,668
1,669
1,669
CONTRIBUTOR
null
### Model description MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation [Paper](https://arxiv.org/abs/1704.04861) Hi there, I wonder whether MobileNetV1 is going to be added or not, I see there's already a dedicated [card](https://huggingface.co/google/mobilenet_v1_1.0_224) on model's hub. @hollance
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20335/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20334
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20334/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20334/comments
https://api.github.com/repos/huggingface/transformers/issues/20334/events
https://github.com/huggingface/transformers/issues/20334
1,456,554,082
I_kwDOCUB6oc5W0UBi
20,334
longformer_content
{ "login": "wesleygmorris", "id": 82518780, "node_id": "MDQ6VXNlcjgyNTE4Nzgw", "avatar_url": "https://avatars.githubusercontent.com/u/82518780?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wesleygmorris", "html_url": "https://github.com/wesleygmorris", "followers_url": "https://api.github.com/users/wesleygmorris/followers", "following_url": "https://api.github.com/users/wesleygmorris/following{/other_user}", "gists_url": "https://api.github.com/users/wesleygmorris/gists{/gist_id}", "starred_url": "https://api.github.com/users/wesleygmorris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wesleygmorris/subscriptions", "organizations_url": "https://api.github.com/users/wesleygmorris/orgs", "repos_url": "https://api.github.com/users/wesleygmorris/repos", "events_url": "https://api.github.com/users/wesleygmorris/events{/privacy}", "received_events_url": "https://api.github.com/users/wesleygmorris/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Which new model are you talking about? Please fill the template properly otherwise no one will be able to help." ]
1,668
1,669
null
NONE
null
### Model description This model will generate a content score for a summary given the context and the summary itself. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20334/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/20333
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20333/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20333/comments
https://api.github.com/repos/huggingface/transformers/issues/20333/events
https://github.com/huggingface/transformers/pull/20333
1,456,501,962
PR_kwDOCUB6oc5DSFEj
20,333
Use tiny ONNX models for text modality
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lewtun Thank you for working on this 💯 \r\n\r\nI force pushed a commit to resolve a conflict on `main`.\r\n\r\nThank you for sharing the models that are not working. Keep in mind that there are some difficulties in tiny model creation:\r\n- no model tester\r\n- can't convert tokenizer/processor correctly\r\n- not super easy to set some config attributes correctly for a few particular models\r\n- etc.\r\n\r\nWe might need a few more iterations to make things more stable (i.e. need to create new tiny models), but I will take care of the failing ONNX tests if the new tiny models break the tests.", "Let me merge later today once the tests pass with the new set of tiny models (which is currently being built)", "> Let me merge later today once the tests pass with the new set of tiny models (which is currently being built)\r\n\r\nSounds great! Let me know if you need any help 🙏 ", "Running on CPU with only the new tiny random models\r\n\r\n> 293 passed, 2 skipped, 23628 warnings in 760.95s (0:12:40)\r\n\r\n🚀 🚀 🚀 🚀 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20333). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,669
1,669
MEMBER
null
# What does this PR do? Partially addresses https://github.com/huggingface/transformers/issues/18819 This PR uses the new tiny random models from @ydshieh to speed up the ONNX tests for the **text** modality. This brings the slow ONNX tests down to about 1.5h. The other modalities will be added once we have tiny models that can run a forward pass in general. Note that the following text models didn't work when running ``` RUN_SLOW=1 pytest -v tests/onnx/test_onnx_v2.py -x -k "model_arch" ``` so have been excluded for now: **hf-internal-testing/tiny-random-IBertModel** * Cannot export model to ONNX **hf-internal-testing/tiny-random-LayoutLMv3Model** * Not strictly a text model, but caught this in my testing: ``` tests/onnx/test_onnx_v2.py:423: in test_pytorch_export self._onnx_export(test_name, name, model_name, feature, onnx_config_class_constructor) feature = 'default' model_name = 'hf-internal-testing/tiny-random-LayoutLMv3Model' name = 'layoutlmv3' onnx_config_class_constructor = functools.partial(<bound method OnnxConfig.from_model_config of <class 'transformers.models.layoutlmv3.configuration_layoutlmv3.LayoutLMv3OnnxConfig'>>, task='default') self = <tests.onnx.test_onnx_v2.OnnxExportTestCaseV2 testMethod=test_pytorch_export_091_layoutlmv3_default> test_name = 'layoutlmv3_default' tests/onnx/test_onnx_v2.py:351: in _onnx_export self.fail(f"{name}, {feature} -> {e}") E AssertionError: layoutlmv3, default -> The size of tensor a (12545) must match the size of tensor b (5) at non-singleton dimension 1 ``` **hf-internal-testing/tiny-random-LongformerModel** * Might be related to #20292 ``` self = <onnxruntime.capi.onnxruntime_inference_collection.InferenceSession object at 0x7fc37b5d0a90>, output_names = ['last_hidden_state', 'pooler_output'] input_feed = {'attention_mask': array([[1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1...put_ids': array([[0, 3, 3, 3, 3, 3, 3, 3, 2], [0, 3, 3, 3, 3, 3, 3, 3, 2], [0, 3, 3, 3, 3, 3, 3, 3, 
2]])} run_options = None def run(self, output_names, input_feed, run_options=None): """ Compute the predictions. :param output_names: name of the outputs :param input_feed: dictionary ``{ input_name: input_value }`` :param run_options: See :class:`onnxruntime.RunOptions`. :return: list of results, every result is either a numpy array, a sparse tensor, a list or a dictionary. :: sess.run([output_name], {input_name: x}) """ num_required_inputs = len(self._inputs_meta) num_inputs = len(input_feed) # the graph may have optional inputs used to override initializers. allow for that. if num_inputs < num_required_inputs: raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs)) if not output_names: output_names = [output.name for output in self._outputs_meta] try: > return self._sess.run(output_names, input_feed, run_options) E onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_2943' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. 
Input shape:{3,4,2,5}, requested shape:{3,1,9,5} ``` **hf-internal-testing/tiny-random-BlenderbotSmallModel** * Problem with tokenizer vocab ``` self = <[AttributeError("'ByteLevelBPETokenizer' object has no attribute '_tokenizer'") raised in repr()] ByteLevelBPETokenizer object at 0x7fc2a041e610> vocab = '/Users/lewtun/.cache/huggingface/hub/models--hf-internal-testing--tiny-random-BlenderbotSmallModel/snapshots/1eff1bdb5f97b473480b1ec8af85f58439e88906/vocab.json' merges = '/Users/lewtun/.cache/huggingface/hub/models--hf-internal-testing--tiny-random-BlenderbotSmallModel/snapshots/1eff1bdb5f97b473480b1ec8af85f58439e88906/merges.txt' add_prefix_space = False, lowercase = False, dropout = None, unicode_normalizer = None, continuing_subword_prefix = None, end_of_word_suffix = None, trim_offsets = True def __init__( self, vocab: Optional[Union[str, Dict[str, int]]] = None, merges: Optional[Union[str, Dict[Tuple[int, int], Tuple[int, int]]]] = None, add_prefix_space: bool = False, lowercase: bool = False, dropout: Optional[float] = None, unicode_normalizer: Optional[str] = None, continuing_subword_prefix: Optional[str] = None, end_of_word_suffix: Optional[str] = None, trim_offsets: bool = False, ): if vocab is not None and merges is not None: tokenizer = Tokenizer( > BPE( vocab, merges, dropout=dropout, continuing_subword_prefix=continuing_subword_prefix or "", end_of_word_suffix=end_of_word_suffix or "", ) ) E Exception: Error while initializing BPE: Token `_</w>` out of vocabulary ``` **hf-internal-testing/tiny-random-MarianModel** * Problem with config ``` self = <[AttributeError("'Embedding' object has no attribute 'padding_idx'") raised in repr()] Embedding object at 0x7fd700e63d00>, num_embeddings = 99, embedding_dim = 16 padding_idx = 58100, max_norm = None, norm_type = 2.0, scale_grad_by_freq = False, sparse = False, _weight = None, device = None, dtype = None def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None, 
max_norm: Optional[float] = None, norm_type: float = 2., scale_grad_by_freq: bool = False, sparse: bool = False, _weight: Optional[Tensor] = None, device=None, dtype=None) -> None: factory_kwargs = {'device': device, 'dtype': dtype} super(Embedding, self).__init__() self.num_embeddings = num_embeddings self.embedding_dim = embedding_dim if padding_idx is not None: if padding_idx > 0: > assert padding_idx < self.num_embeddings, 'Padding_idx must be within num_embeddings' E AssertionError: Padding_idx must be within num_embeddings ``` **hf-internal-testing/tiny-random-MBartModel** * Problem with embedding config ``` def embedding( input: Tensor, weight: Tensor, padding_idx: Optional[int] = None, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, ) -> Tensor: if has_torch_function_variadic(input, weight): return handle_torch_function( embedding, (input, weight), input, weight, padding_idx=padding_idx, max_norm=max_norm, norm_type=norm_type, scale_grad_by_freq=scale_grad_by_freq, sparse=sparse, ) if padding_idx is not None: if padding_idx > 0: assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings" elif padding_idx < 0: assert padding_idx >= -weight.size(0), "Padding_idx must be within num_embeddings" padding_idx = weight.size(0) + padding_idx else: padding_idx = -1 if max_norm is not None: # Note [embedding_renorm contiguous] # `embedding_renorm_` will call .contiguous() on input anyways, so we # call it here and take advantage of the improved locality in the # `embedding` call below too. input = input.contiguous() # Note [embedding_renorm set_grad_enabled] # XXX: equivalent to # with torch.no_grad(): # torch.embedding_renorm_ # remove once script supports set_grad_enabled _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) > return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) E IndexError: index out of range in self ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20333/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20333", "html_url": "https://github.com/huggingface/transformers/pull/20333", "diff_url": "https://github.com/huggingface/transformers/pull/20333.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20333.patch", "merged_at": 1669133478000 }
https://api.github.com/repos/huggingface/transformers/issues/20332
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20332/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20332/comments
https://api.github.com/repos/huggingface/transformers/issues/20332/events
https://github.com/huggingface/transformers/issues/20332
1,456,467,549
I_kwDOCUB6oc5Wz-5d
20,332
OPTForCausalLM - ValueError: The following `model_kwargs` are not used by the model: ['new_doc']
{ "login": "FurkanGozukara", "id": 19240467, "node_id": "MDQ6VXNlcjE5MjQwNDY3", "avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FurkanGozukara", "html_url": "https://github.com/FurkanGozukara", "followers_url": "https://api.github.com/users/FurkanGozukara/followers", "following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}", "gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}", "starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions", "organizations_url": "https://api.github.com/users/FurkanGozukara/orgs", "repos_url": "https://api.github.com/users/FurkanGozukara/repos", "events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}", "received_events_url": "https://api.github.com/users/FurkanGozukara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@FurkanGozukara I do not see the new_doc parameter accepted either in the code or in the example scripts. Can you point me to where you got example scripts with the new_doc parameter? ", "> @FurkanGozukara I do not see the new_doc parameter accepted either in the code or in the example scripts. Can you point me to where you got example scripts with the new_doc parameter?\r\n\r\nhttps://github.com/paperswithcode/galai\r\n\r\n![image](https://user-images.githubusercontent.com/19240467/203018858-60dfd7fd-f186-4fe8-8217-ab3ab493f9cc.png)\r\n\r\n", "@FurkanGozukara I think there is a mix-up: the Hugging Face API so far does not support the new_doc parameter; the new_doc parameter is present in the galai library released by paperswithcode.\r\n\r\nhttps://github.com/paperswithcode/galai/blob/f6d9b0a5b35a0eda53597a5ea7d51963bfc05de1/galai/model.py#L88", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,672
1,672
NONE
null
### System Info Hello. I am using this model: https://huggingface.co/facebook/galactica-6.7b The example is pretty straightforward: ``` from transformers import AutoTokenizer, OPTForCausalLM tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b") model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b") input_text = "The Transformer architecture [START_REF]" input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` **The new GALACTICA model supports the inputs below, but I am getting an error. This is the code I run:** ``` from transformers import AutoTokenizer, OPTForCausalLM tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b") model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b") input_text = "The benefits of deadlifting\n\n" input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids,new_doc=True, top_p=0.7, max_length=2000) print(tokenizer.decode(outputs[0])) ``` ![image](https://user-images.githubusercontent.com/19240467/202852602-74833f48-a83b-406e-8d6f-0f51e38ef859.png) ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction test ### Expected behavior test
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20332/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20332/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20331/comments
https://api.github.com/repos/huggingface/transformers/issues/20331/events
https://github.com/huggingface/transformers/pull/20331
1,456,437,317
PR_kwDOCUB6oc5DR2-A
20,331
Fix a typo in BLOOM model docs
{ "login": "rajrajhans", "id": 32734049, "node_id": "MDQ6VXNlcjMyNzM0MDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/32734049?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajrajhans", "html_url": "https://github.com/rajrajhans", "followers_url": "https://api.github.com/users/rajrajhans/followers", "following_url": "https://api.github.com/users/rajrajhans/following{/other_user}", "gists_url": "https://api.github.com/users/rajrajhans/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajrajhans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajrajhans/subscriptions", "organizations_url": "https://api.github.com/users/rajrajhans/orgs", "repos_url": "https://api.github.com/users/rajrajhans/repos", "events_url": "https://api.github.com/users/rajrajhans/events{/privacy}", "received_events_url": "https://api.github.com/users/rajrajhans/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> `BigSicence` -> `BigScience` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20331/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20331", "html_url": "https://github.com/huggingface/transformers/pull/20331", "diff_url": "https://github.com/huggingface/transformers/pull/20331.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20331.patch", "merged_at": 1669041895000 }
https://api.github.com/repos/huggingface/transformers/issues/20330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20330/comments
https://api.github.com/repos/huggingface/transformers/issues/20330/events
https://github.com/huggingface/transformers/issues/20330
1,456,383,028
I_kwDOCUB6oc5WzqQ0
20,330
Models cannot be loaded if they have dot "." in name
{ "login": "j-adamczyk", "id": 50807718, "node_id": "MDQ6VXNlcjUwODA3NzE4", "avatar_url": "https://avatars.githubusercontent.com/u/50807718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j-adamczyk", "html_url": "https://github.com/j-adamczyk", "followers_url": "https://api.github.com/users/j-adamczyk/followers", "following_url": "https://api.github.com/users/j-adamczyk/following{/other_user}", "gists_url": "https://api.github.com/users/j-adamczyk/gists{/gist_id}", "starred_url": "https://api.github.com/users/j-adamczyk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j-adamczyk/subscriptions", "organizations_url": "https://api.github.com/users/j-adamczyk/orgs", "repos_url": "https://api.github.com/users/j-adamczyk/repos", "events_url": "https://api.github.com/users/j-adamczyk/events{/privacy}", "received_events_url": "https://api.github.com/users/j-adamczyk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, @j-adamczyk. The problem here is that `transformers` version that includes `MobileNetV2` model has not yet been released to [PyPi](https://pypi.org/project/transformers/#history) etc., hence there's no `'mobilenet_v2'` key. \r\nThough you can install the `transformers` from source, and start using `MobileNetV2` right after.", "To install from [source](https://huggingface.co/transformers/v3.5.1/installation.html#installing-from-source), clone the repository and install with the following commands:\r\n```\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers\r\npip install -e .\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing as this issue seems resolved." ]
1,668
1,671
1,671
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.31 - Python version: 3.9.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @NielsRogge @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Try to use `google/mobilenet_v2_1.0_224`: ``` model = AutoModelForImageClassification.from_pretrained( "google/mobilenet_v2_1.0_224", num_labels=2, ignore_mismatched_sizes=True, ) ``` I get: ``` Traceback (most recent call last): File "train.py", line 217, in <module> model = AutoModelForImageClassification.from_pretrained( File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 434, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 796, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 503, in __getitem__ raise KeyError(key) KeyError : 'mobilenet_v2' ``` I suspect that this happens due to line 790 in `configuration_auto.py`: ``` module_file, class_name = class_ref.split(".") ``` Since this model has a dot "." in its name, it is split here and the library is looking for `mobilenet_v2` instead, which does not exist. ### Expected behavior Load a model from HuggingFace Hub.
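A small stdlib sketch of the suspicion raised in the body above — note this is a hypothetical illustration of why a bare `split(".")` is fragile for dotted names, not the confirmed root cause (the maintainer comments in this record trace the error to a missing `'mobilenet_v2'` key in the installed release):

```python
# Hypothetical illustration: naive splitting vs. splitting only at the last dot.
class_ref = "mobilenet_v2_1.0_224.SomeClass"  # made-up name containing extra dots

# A two-way unpack of split(".") fails when the name itself contains dots:
try:
    module_file, class_name = class_ref.split(".")
except ValueError as e:
    print(f"split('.') breaks: {e}")

# rsplit with maxsplit=1 only cuts at the last dot:
module_file, class_name = class_ref.rsplit(".", 1)
print(module_file, class_name)  # mobilenet_v2_1.0_224 SomeClass
```

Same caveat as above: the real fix for this issue was simply installing a `transformers` version whose config mapping includes `mobilenet_v2`.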
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20330/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20329/comments
https://api.github.com/repos/huggingface/transformers/issues/20329/events
https://github.com/huggingface/transformers/issues/20329
1,456,321,585
I_kwDOCUB6oc5WzbQx
20,329
cannot import electratokenizer or model. It works on colabnotebook but not in python
{ "login": "prsatyal", "id": 101518921, "node_id": "U_kgDOBg0OSQ", "avatar_url": "https://avatars.githubusercontent.com/u/101518921?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prsatyal", "html_url": "https://github.com/prsatyal", "followers_url": "https://api.github.com/users/prsatyal/followers", "following_url": "https://api.github.com/users/prsatyal/following{/other_user}", "gists_url": "https://api.github.com/users/prsatyal/gists{/gist_id}", "starred_url": "https://api.github.com/users/prsatyal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prsatyal/subscriptions", "organizations_url": "https://api.github.com/users/prsatyal/orgs", "repos_url": "https://api.github.com/users/prsatyal/repos", "events_url": "https://api.github.com/users/prsatyal/events{/privacy}", "received_events_url": "https://api.github.com/users/prsatyal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Transformers is not currently compatible with TensorFlow 2.11 (which broke a lot of things compared to the previous versions). It should be fixed on main so you can either do:\r\n- an install of Transformers from source\r\n- or downgrade your TensorFlow to 2.10", "Hi There! I figured as much so is downgraded to tensorflow 2.10 and\ninstalled transformers from the source\n\nOn Mon, Nov 21, 2022 at 8:29 PM Sylvain Gugger ***@***.***>\nwrote:\n\n> Transformers is not currently compatible with TensorFlow 2.11 (which broke\n> a lot of things compared to the previous versions). It should be fixed on\n> main so you can either do:\n>\n> - an install of Transformers from source\n> - or downgrade your TensorFlow to 2.10\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/20329#issuecomment-1322165802>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AYGQ4SMR5SK5XB45Y5KRPYDWJODDNANCNFSM6AAAAAASFGNBJE>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,672
1,672
NONE
null
### System Info I have TensorFlow version 2.11.0 and transformers version 4.24.0, and I cannot import TFElectraModel. It works for me in Google Colab but not on my PC. `from transformers import TFElectraModel` gives me the following error: RuntimeError: Failed to import transformers.models.electra.modeling_tf_electra because of the following error (look up to see its traceback): No module named 'keras.saving.hdf5_format' ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction On Windows: install Python 3.10, install tensorflow, install numpy, install scikit-learn, install transformers, then run: import tensorflow as tf import numpy as np import transformers from transformers import ElectraTokenizer from transformers import TFElectraModel ### Expected behavior It should load TFElectraModel; I have loaded this model a million times in Google Colab and it still works there even now.
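The failure above is a version mismatch (transformers 4.24 did not yet support TensorFlow 2.11, per the maintainer reply in this record). A minimal stdlib sketch of a pre-import version gate — the `"2.11.0"` cutoff is taken from that reply, and the function names here are illustrative, not a real transformers API:

```python
# Minimal version gate, assuming the "< 2.11" cutoff from the maintainer reply.
def version_tuple(v: str) -> tuple:
    """Turn '2.10.1' into (2, 10, 1) for comparison; ignores non-numeric tags."""
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())

def tf_is_compatible(tf_version: str, cutoff: str = "2.11.0") -> bool:
    """True when the installed TF predates the breaking release."""
    return version_tuple(tf_version) < version_tuple(cutoff)

print(tf_is_compatible("2.10.1"))  # True  -> works with transformers 4.24
print(tf_is_compatible("2.11.0"))  # False -> downgrade TF or install from source
```

The two remedies from the thread map directly onto the `False` branch: `pip install "tensorflow<2.11"` or installing `transformers` from source.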
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20329/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20328/comments
https://api.github.com/repos/huggingface/transformers/issues/20328/events
https://github.com/huggingface/transformers/issues/20328
1,456,164,047
I_kwDOCUB6oc5Wy0zP
20,328
MAE Noise Was NOT Removed In Prediction For TFViTMAEModel / ViTMAEModel
{ "login": "zhoutong-fu", "id": 64811959, "node_id": "MDQ6VXNlcjY0ODExOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/64811959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhoutong-fu", "html_url": "https://github.com/zhoutong-fu", "followers_url": "https://api.github.com/users/zhoutong-fu/followers", "following_url": "https://api.github.com/users/zhoutong-fu/following{/other_user}", "gists_url": "https://api.github.com/users/zhoutong-fu/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhoutong-fu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhoutong-fu/subscriptions", "organizations_url": "https://api.github.com/users/zhoutong-fu/orgs", "repos_url": "https://api.github.com/users/zhoutong-fu/repos", "events_url": "https://api.github.com/users/zhoutong-fu/events{/privacy}", "received_events_url": "https://api.github.com/users/zhoutong-fu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nViTMAE generates a random mask internally to mask out patches. Hence forwarding the same inputs twice through the model will result in different hidden states. If you want to ensure reproducability, you can pass the `noise` argument. See [here](https://github.com/huggingface/transformers/blob/8503cc755050c6ed5bc771e3244c29b71be1841e/tests/models/vit_mae/test_modeling_vit_mae.py#L317) for an example.", "Thanks @NielsRogge for the reply. I'm using this model to extract image features and I'm using some hacks to remove noise and masks for inference.\r\n\r\nAfter reading the [ViTMAE code](https://github.com/facebookresearch/mae/blob/main/FINETUNE.md), I found that the trained ViTMAE model can be loaded as a regular ViT model directly. Just tested with `TFViTMAEModel.from_pretrained(\"facebook/vit-mae-base\")` and it works. Could you help confirm is this the correct approach to use pre-trained MAE model? ", "Yes, that's also shown in the docs. Note that `TFViTMAEModel.from_pretrained(\"facebook/vit-mae-base\")` will just load the base Encoder, which does not include the decoder used during pre-training. For that you would need to load `TFViTMAEForPreTraining`.\r\n\r\nHowever, since you want to use the model for image features, it's indeed adviced to just load the Transformer encoder." ]
1,668
1,669
1,669
NONE
null
### System Info transformers version: v4.24.0 (latest) ### Who can help? @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import requests import numpy as np from PIL import Image from transformers import AutoFeatureExtractor, TFViTMAEModel url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/vit-mae-base") model = TFViTMAEModel.from_pretrained("facebook/vit-mae-base") inputs = feature_extractor(images=image, return_tensors="tf") outputs1 = model(**inputs) outputs2 = model(**inputs) res1 = outputs1.last_hidden_state.numpy() res2 = outputs2.last_hidden_state.numpy() np.allclose(res1, res2, rtol=1e-04, atol=1e-05) # This returns False as noises are introduced. ``` ### Expected behavior It should return True. It returns False because noise is added in `random_masking()` and is not removed at prediction time. There are no options to do that.
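The maintainer reply in this record points to passing a fixed `noise` argument for reproducibility. A toy stdlib sketch (not the actual ViTMAE code) of MAE-style masking shows why the same noise vector always selects the same visible patches:

```python
import random

def random_masking(num_patches: int, mask_ratio: float, noise=None):
    """Toy MAE-style masking: keep the patches with the smallest noise values."""
    if noise is None:
        noise = [random.random() for _ in range(num_patches)]  # fresh noise each call
    len_keep = int(num_patches * (1 - mask_ratio))
    order = sorted(range(num_patches), key=lambda i: noise[i])  # argsort the noise
    return sorted(order[:len_keep])  # indices of patches that stay visible

fixed = [0.9, 0.1, 0.5, 0.3, 0.7, 0.2]
print(random_masking(6, 0.5, noise=fixed))  # -> [1, 3, 5]
print(random_masking(6, 0.5, noise=fixed))  # same noise -> identical mask
```

Without the `noise` argument each call draws fresh noise, which is why the two forward passes in the reproduction above disagree.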
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20328/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20327/comments
https://api.github.com/repos/huggingface/transformers/issues/20327/events
https://github.com/huggingface/transformers/issues/20327
1,455,956,647
I_kwDOCUB6oc5WyCKn
20,327
Mismatch in torch size
{ "login": "devyndonahue", "id": 32913540, "node_id": "MDQ6VXNlcjMyOTEzNTQw", "avatar_url": "https://avatars.githubusercontent.com/u/32913540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devyndonahue", "html_url": "https://github.com/devyndonahue", "followers_url": "https://api.github.com/users/devyndonahue/followers", "following_url": "https://api.github.com/users/devyndonahue/following{/other_user}", "gists_url": "https://api.github.com/users/devyndonahue/gists{/gist_id}", "starred_url": "https://api.github.com/users/devyndonahue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devyndonahue/subscriptions", "organizations_url": "https://api.github.com/users/devyndonahue/orgs", "repos_url": "https://api.github.com/users/devyndonahue/repos", "events_url": "https://api.github.com/users/devyndonahue/events{/privacy}", "received_events_url": "https://api.github.com/users/devyndonahue/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As the error message told you, load your model with `ignore_mismatched_sizes=True`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,672
1,672
NONE
null
I am trying to fine-tune an NER model from the "malduwais/distilbert-base-uncased-finetuned-ner" checkpoint; the dataset I am fine-tuning on has only 5 labels, as opposed to the 9 in the one this model was trained on. I get this error when calling trainer.train() ``` RuntimeError: Error(s) in loading state_dict for DistilBertForTokenClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([9, 768]) from checkpoint, the shape in current model is torch.Size([5, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([9]) from checkpoint, the shape in current model is torch.Size([5]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method. ``` Is there a way to adjust the size when fine-tuning? Thank you
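The fix given in the reply is `ignore_mismatched_sizes=True`. A toy sketch of what that option amounts to (shape tuples only — not the real `from_pretrained` logic): checkpoint entries whose shape disagrees with the freshly initialized model are skipped and left randomly initialized.

```python
# Toy sketch: a 9-class classifier checkpoint seeding a 5-class model.
checkpoint = {"encoder.weight": (768, 768), "classifier.weight": (9, 768), "classifier.bias": (9,)}
model = {"encoder.weight": (768, 768), "classifier.weight": (5, 768), "classifier.bias": (5,)}

# Keep only entries whose shape matches the target model exactly:
loadable = {k: v for k, v in checkpoint.items() if model.get(k) == v}
skipped = sorted(set(checkpoint) - set(loadable))
print(loadable)  # only the encoder weight transfers
print(skipped)   # the classifier head is re-initialized for 5 labels
```

So the encoder is still fully fine-tuned from the checkpoint; only the new, smaller classification head starts from scratch.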
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20327/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20326/comments
https://api.github.com/repos/huggingface/transformers/issues/20326/events
https://github.com/huggingface/transformers/pull/20326
1,455,908,079
PR_kwDOCUB6oc5DQFSM
20,326
[WIP] Update: ignore padding support for TransfoXL training when n_clusters==0
{ "login": "StefanHeng", "id": 43276957, "node_id": "MDQ6VXNlcjQzMjc2OTU3", "avatar_url": "https://avatars.githubusercontent.com/u/43276957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StefanHeng", "html_url": "https://github.com/StefanHeng", "followers_url": "https://api.github.com/users/StefanHeng/followers", "following_url": "https://api.github.com/users/StefanHeng/following{/other_user}", "gists_url": "https://api.github.com/users/StefanHeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/StefanHeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StefanHeng/subscriptions", "organizations_url": "https://api.github.com/users/StefanHeng/orgs", "repos_url": "https://api.github.com/users/StefanHeng/repos", "events_url": "https://api.github.com/users/StefanHeng/events{/privacy}", "received_events_url": "https://api.github.com/users/StefanHeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20326). All of your documentation changes will be reflected on that endpoint.", "Thank you for your comments! \r\n\r\n> Thanks a lot for working on this! Can you make sure to run `make style` on your branch? You will also need to rebase on main to fix all the TensorFlow-related tests (they broke with last TensorFlow release).\r\n\r\nSure will do. ", "> Though it doesn't work the examples where we encourage labels=inputs[\"input_ids\"]\r\n\r\n@sgugger Can you clarify? Not sure what do you mean ", "Please use -100 for padding in the labels as @patrickvonplaten indicated.", "Sure, do we keep the 2 branches (line 113), i.e. support a case where we don't filter out -100? \r\n\r\nI'm fine either way. I added padding as an option because creating that extra mask token would consume extra memory. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "comment to reactivate thread\r\n" ]
1,668
1,680
1,672
CONTRIBUTOR
null
# What does this PR do? This PR solves [an issue](https://github.com/huggingface/transformers/issues/17446) I raised about TransformerXL. As @sgugger mentioned in [another issue](https://github.com/huggingface/transformers/issues/19914) I raised, he [says](https://github.com/huggingface/transformers/issues/19914#issuecomment-1293656206) > I don't think TransformerXL supports FP16 as this is an old model with very specific code for the softmax layer. This won't be an issue we will fix ourselves given that Transformer-XL is not very used anymore, but if someone wants to make a PR, we'll review! I'm using TransformerXL in a [research project](https://github.com/StefanHeng/Symbolic-Music-Generation) and disabling the adaptive softmax is an option I would like to explore. So here I am. In the `n_clusters==0` branch, the current TransformerXL implementation does not work with padding (-100); it breaks at `.gather(1, labels)`. This PR solves that bug. I tested with my research data and confirmed my implementation is working. It's able to overfit the training data to up to 99% next-token prediction accuracy on multiple hyper-parameter setups, for #samples from 8 to 48, batch size from 48 to 64, epochs from 128 to 512. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. 
If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) [Issue 19914](https://github.com/huggingface/transformers/issues/19914) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger @patrickvonplaten @thomwolf
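The review discussion in this record settles on using -100 as the padding label, the convention PyTorch loss functions use via `ignore_index`. A toy stdlib sketch (not the TransfoXL code itself) of what masking -100 positions out of the loss looks like:

```python
import math

def masked_nll(logprobs, labels, ignore_index=-100):
    """Toy negative log-likelihood over non-padding positions only.

    logprobs: per-position dict mapping label id -> log-probability.
    labels:   gold label id per position; ignore_index marks padding.
    """
    kept = [(lp, y) for lp, y in zip(logprobs, labels) if y != ignore_index]
    if not kept:
        return 0.0  # all positions padded: nothing to average over
    return -sum(lp[y] for lp, y in kept) / len(kept)

logprobs = [{0: math.log(0.7), 1: math.log(0.3)},
            {0: math.log(0.2), 1: math.log(0.8)},
            {0: math.log(0.5), 1: math.log(0.5)}]
labels = [0, 1, -100]  # the last position is padding and contributes nothing
print(masked_nll(logprobs, labels))
```

Filtering the padded positions out before any `.gather(1, labels)`-style indexing is exactly what avoids the crash described in the PR body, since -100 is never used as an index.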
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20326/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20326", "html_url": "https://github.com/huggingface/transformers/pull/20326", "diff_url": "https://github.com/huggingface/transformers/pull/20326.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20326.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20325/comments
https://api.github.com/repos/huggingface/transformers/issues/20325/events
https://github.com/huggingface/transformers/pull/20325
1,455,878,983
PR_kwDOCUB6oc5DP-u4
20,325
Add LayerScale to NAT/DiNAT
{ "login": "alihassanijr", "id": 68103095, "node_id": "MDQ6VXNlcjY4MTAzMDk1", "avatar_url": "https://avatars.githubusercontent.com/u/68103095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alihassanijr", "html_url": "https://github.com/alihassanijr", "followers_url": "https://api.github.com/users/alihassanijr/followers", "following_url": "https://api.github.com/users/alihassanijr/following{/other_user}", "gists_url": "https://api.github.com/users/alihassanijr/gists{/gist_id}", "starred_url": "https://api.github.com/users/alihassanijr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alihassanijr/subscriptions", "organizations_url": "https://api.github.com/users/alihassanijr/orgs", "repos_url": "https://api.github.com/users/alihassanijr/repos", "events_url": "https://api.github.com/users/alihassanijr/events{/privacy}", "received_events_url": "https://api.github.com/users/alihassanijr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,679
1,669
CONTRIBUTOR
null
# What does this PR do? This follows PR #20219 . I completely dropped the ball on LayerScale in the original PR. This is just an optional argument in both models, and is only activated for larger variants in order to provide training stability. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @NielsRogge .
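For context on the feature this PR adds: LayerScale multiplies each residual branch's output by a learnable per-channel scale, initialized to a small value so deep models start out close to the identity. A pure-Python toy sketch of the idea (illustrative shapes and init value, not the NAT/DiNAT implementation):

```python
# Toy LayerScale: a per-channel scale gates the residual branch before the add.
def layer_scale_residual(x, branch_out, gamma):
    """x, branch_out: per-channel activations; gamma: learnable per-channel scales."""
    return [xi + gi * bi for xi, bi, gi in zip(x, branch_out, gamma)]

dim = 4
init_value = 1e-5              # small init keeps early training near the identity
gamma = [init_value] * dim     # would be learned jointly with the rest of the model
x = [1.0, 2.0, 3.0, 4.0]
branch_out = [10.0, 10.0, 10.0, 10.0]
print(layer_scale_residual(x, branch_out, gamma))  # barely perturbs x at init
```

This matches the PR description: the scale is optional and only enabled for larger variants, where damping the residual branches early in training helps stability.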
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20325/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20325", "html_url": "https://github.com/huggingface/transformers/pull/20325", "diff_url": "https://github.com/huggingface/transformers/pull/20325.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20325.patch", "merged_at": 1669039715000 }
https://api.github.com/repos/huggingface/transformers/issues/20324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20324/comments
https://api.github.com/repos/huggingface/transformers/issues/20324/events
https://github.com/huggingface/transformers/pull/20324
1,455,830,334
PR_kwDOCUB6oc5DPz6Q
20,324
Added Luke Doctests
{ "login": "Tegzes", "id": 48134725, "node_id": "MDQ6VXNlcjQ4MTM0NzI1", "avatar_url": "https://avatars.githubusercontent.com/u/48134725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tegzes", "html_url": "https://github.com/Tegzes", "followers_url": "https://api.github.com/users/Tegzes/followers", "following_url": "https://api.github.com/users/Tegzes/following{/other_user}", "gists_url": "https://api.github.com/users/Tegzes/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tegzes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tegzes/subscriptions", "organizations_url": "https://api.github.com/users/Tegzes/orgs", "repos_url": "https://api.github.com/users/Tegzes/repos", "events_url": "https://api.github.com/users/Tegzes/events{/privacy}", "received_events_url": "https://api.github.com/users/Tegzes/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh maybe :-) ", "The checkpoints in hf-internal-testing are only there for our tests, they shouldn't be used. Please use real checkpoints in the doc examples.", "@Tegzes You can search on the Hub to find if there are luke based models for the corresponding tasks. If most tasks lack such a checkpoint, we will have to skip Luke for doctest unfortunately.", "> The checkpoints in hf-internal-testing are only there for our tests, they shouldn't be used. Please use real checkpoints in the doc examples.\r\n\r\nI only used the hf-internal-testing checkpoint because these were mentioned in the comment section of the issue. But I will change them accordingly.", "@ydshieh I've tried most of the luke model checkpoints and unfortunately the only ones that exhibit a fully reproducible behavior are the ones from hf-internal-testing. \r\n\r\nIf it would be acceptable to you, I would propose the merge as it is, backed by the fact that these internal types of checkpoints were used in the past for multiple such doctests improvements, as suggested in the comments from the issue (https://github.com/huggingface/transformers/issues/16292).", "@Tegzes Thank you for the effort to check the checkpoints.\r\n\r\nWe have an internal discussion, and @sgugger strongly suggests we should not have used those tiny checkpoints in the first places, and we will try to figure out what changes we should proceed.\r\n\r\nSo unfortunately, we won't merge this PR. Sorry about this. It's possible to rework on the Luke docstrings in the future after some decision is made regarding the doctest + missing checkpoints.\r\n\r\nHowever, thank you again for the contribution!", "@ydshieh Thank you for the response. The situation is much clearer now\r\n\r\n", "Close as we have to decide what to do with doctest when a real checkpoint is missing" ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds improved Doctests for LUKE Issue: https://github.com/huggingface/transformers/issues/16292 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? 
@ydshieh @patrickvonplaten @patil-suraj <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20324/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20324", "html_url": "https://github.com/huggingface/transformers/pull/20324", "diff_url": "https://github.com/huggingface/transformers/pull/20324.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20324.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20323/comments
https://api.github.com/repos/huggingface/transformers/issues/20323/events
https://github.com/huggingface/transformers/pull/20323
1,455,814,762
PR_kwDOCUB6oc5DPwan
20,323
Enhance HfArgumentParser functionality and ease of use
{ "login": "konstantinjdobler", "id": 28780372, "node_id": "MDQ6VXNlcjI4NzgwMzcy", "avatar_url": "https://avatars.githubusercontent.com/u/28780372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/konstantinjdobler", "html_url": "https://github.com/konstantinjdobler", "followers_url": "https://api.github.com/users/konstantinjdobler/followers", "following_url": "https://api.github.com/users/konstantinjdobler/following{/other_user}", "gists_url": "https://api.github.com/users/konstantinjdobler/gists{/gist_id}", "starred_url": "https://api.github.com/users/konstantinjdobler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/konstantinjdobler/subscriptions", "organizations_url": "https://api.github.com/users/konstantinjdobler/orgs", "repos_url": "https://api.github.com/users/konstantinjdobler/repos", "events_url": "https://api.github.com/users/konstantinjdobler/events{/privacy}", "received_events_url": "https://api.github.com/users/konstantinjdobler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> I have enhanced the `HfArgumentParser` for my own work and decided to contribute my changes back to `transformers`. The changes are: - Allow aliases for command line flags to be created (by providing `aliases` inside the metadata dict) - Enable specification of one or multiple config files via a customizable command line flag (e.g. `--cfg ./path/to/basic-config.txt --cfg ./path/to/special.txt`). For my own use, I set the customizable command line flag to `"--cfg"` by default. I omitted this here to keep 100% backwards compatibility but a sensible default might make sense as well. 
- Enable use of the `Literal` type for easy ad-hoc specification of choices without the need to define an `Enum` (inspired by [typed-argument-parser](https://github.com/swansonk14/typed-argument-parser)): ```python @dataclass class Args: literal_arg: Literal["the meaning of life", 42, "hitchhiking"] = 42 ``` - Allow use of mixed types for `Enum` similar to `Literal` - Created `HfArg` helper for a more concise syntax when creating data class fields: ```python @dataclass class Args: field_arg: int = field(default=42, metadata={"aliases": ["--arg", "-a"], "help": "This is a bit verbose."}) hf_arg: int = HfArg(default=42, aliases=["--arg", "-a"], help="This is more concise.") ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20323/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20323/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20323", "html_url": "https://github.com/huggingface/transformers/pull/20323", "diff_url": "https://github.com/huggingface/transformers/pull/20323.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20323.patch", "merged_at": 1669052017000 }
https://api.github.com/repos/huggingface/transformers/issues/20322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20322/comments
https://api.github.com/repos/huggingface/transformers/issues/20322/events
https://github.com/huggingface/transformers/issues/20322
1,455,795,528
I_kwDOCUB6oc5Wxa1I
20,322
Cannot load a model that saved locally
{ "login": "rcontesti", "id": 13105045, "node_id": "MDQ6VXNlcjEzMTA1MDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/13105045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcontesti", "html_url": "https://github.com/rcontesti", "followers_url": "https://api.github.com/users/rcontesti/followers", "following_url": "https://api.github.com/users/rcontesti/following{/other_user}", "gists_url": "https://api.github.com/users/rcontesti/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcontesti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcontesti/subscriptions", "organizations_url": "https://api.github.com/users/rcontesti/orgs", "repos_url": "https://api.github.com/users/rcontesti/repos", "events_url": "https://api.github.com/users/rcontesti/events{/privacy}", "received_events_url": "https://api.github.com/users/rcontesti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You need to remove the quotes around model_dir:\r\n```\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(model_dir)\r\ntokenizer = AutoTokenizer.from_pretrained(model_dir)\r\n```", "Thanks. Really embarrassing!" ]
1,668
1,668
1,668
NONE
null
### System Info ``` - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35 - Python version: 3.9.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer, AutoModelForQuestionAnswering import torch model_name = "twmkn9/albert-base-v2-squad2" model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) model_dir="./models/"+model_name _ = model.save_pretrained(model_dir) _ = tokenizer.save_pretrained(model_dir) model = AutoModelForQuestionAnswering.from_pretrained("model_dir") tokenizer = AutoTokenizer.from_pretrained("model_dir") ``` ### Expected behavior I would expect that after successfully saving the model locally I'm able to load it. Unfortunately, I'm getting the following error: ``` Output exceeds the [size limit](command:workbench.action.openSettings?[). 
Open the full output data [in a text editor](command:workbench.action.openLargeOutput?17b3235e-945e-4cb8-b7e0-e53f106f5fad) --------------------------------------------------------------------------- HTTPError Traceback (most recent call last) File ~/anaconda3/envs/nlp4diag/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:213, in hf_raise_for_status(response, endpoint_name) 212 try: --> 213 response.raise_for_status() 214 except HTTPError as e: File ~/anaconda3/envs/nlp4diag/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self) 1020 if http_error_msg: -> 1021 raise HTTPError(http_error_msg, response=self) HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/model_dir/resolve/main/config.json The above exception was the direct cause of the following exception: RepositoryNotFoundError Traceback (most recent call last) File ~/anaconda3/envs/nlp4diag/lib/python3.9/site-packages/transformers/utils/hub.py:409, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash) 407 try: 408 # Load from URL or cache if already cached --> 409 resolved_file = hf_hub_download( 410 path_or_repo_id, 411 filename, 412 subfolder=None if len(subfolder) == 0 else subfolder, 413 revision=revision, 414 cache_dir=cache_dir, ... 434 f"'https://huggingface.co/{path_or_repo_id}' for available revisions." 435 ) OSError: model_dir is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20322/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20321/comments
https://api.github.com/repos/huggingface/transformers/issues/20321/events
https://github.com/huggingface/transformers/pull/20321
1,455,774,166
PR_kwDOCUB6oc5DPnZm
20,321
Safetensors offload
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,669
1,669
COLLABORATOR
null
# What does this PR do? This PRs make offload to disk more efficient for models that have a checkpoint in safetensors format: instead of re-saving everything as Numpy memory-mapped array, it uses directly the fact we can access the tensor in the checkpoint without loading the rest. Goes with https://github.com/huggingface/accelerate/pull/873
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20321/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20321", "html_url": "https://github.com/huggingface/transformers/pull/20321", "diff_url": "https://github.com/huggingface/transformers/pull/20321.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20321.patch", "merged_at": 1669649753000 }
https://api.github.com/repos/huggingface/transformers/issues/20320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20320/comments
https://api.github.com/repos/huggingface/transformers/issues/20320/events
https://github.com/huggingface/transformers/issues/20320
1,455,713,804
I_kwDOCUB6oc5WxG4M
20,320
Loading model OOMs with more GPUS
{ "login": "Dahoas", "id": 36314634, "node_id": "MDQ6VXNlcjM2MzE0NjM0", "avatar_url": "https://avatars.githubusercontent.com/u/36314634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dahoas", "html_url": "https://github.com/Dahoas", "followers_url": "https://api.github.com/users/Dahoas/followers", "following_url": "https://api.github.com/users/Dahoas/following{/other_user}", "gists_url": "https://api.github.com/users/Dahoas/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dahoas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dahoas/subscriptions", "organizations_url": "https://api.github.com/users/Dahoas/orgs", "repos_url": "https://api.github.com/users/Dahoas/repos", "events_url": "https://api.github.com/users/Dahoas/events{/privacy}", "received_events_url": "https://api.github.com/users/Dahoas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's a bit hard to follow your Issue\r\n\r\nIs loading working when you use <=6 gpus?\r\n\r\nI can't quite see from your example of the model itself how you run it - I suppose some modified version of the HF Trainer example program? unless what you run is what you shared here.\r\n\r\nWhat you have shown doesn't use Deepspeed, you're just using the `deepspeed` launcher and the args are ignored since you're not parsing them. So this program simply runs this script you have shown on each gpu separately - no deepspeed.\r\n\r\nAlso have a look at the size of the saved model - to ensure that it was saved in half-precision or full precision, which could be a 2x multiplier if you aren't doing it correctly. ", "To use the HF Deepspeed integration you need to adapt one of the examples or write a new program following the examples as the guide. https://github.com/huggingface/transformers/tree/main/examples/pytorch\r\n\r\nThe integration is inside the HF Trainer, so once you switch to using the HF Trainer you will get the DS integration.", "Ah my apologies this is confusing. My training script is below. 
I'm only using the HF Trainer\r\n\r\n```python\r\nimport os\r\nimport pandas as pd\r\nimport torch\r\nfrom torch.utils.data import Dataset, random_split\r\nfrom transformers import AutoTokenizer, TrainingArguments, Trainer, AutoModelForCausalLM, IntervalStrategy, AutoModel, AutoConfig, PreTrainedModel\r\nimport json\r\nfrom reward_model import GPTRewardModel\r\nimport deepspeed\r\n\r\n\r\nclass PairwiseTrainer(Trainer):\r\n def compute_loss(self, model, inputs, return_outputs=False):\r\n # forward pass\r\n rewards = model(**inputs)\r\n rewards_chunked = rewards.view((2, -1))\r\n chosen_rewards = rewards_chunked[0]\r\n rejected_rewards = rewards_chunked[1]\r\n # compute pairwise loss\r\n loss = -torch.log(torch.sigmoid(chosen_rewards - rejected_rewards)).mean()\r\n return (loss, outputs) if return_outputs else loss\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-neo-2.7B\")\r\ntokenizer.pad_token = tokenizer.eos_token\r\ntraining_args = TrainingArguments(output_dir='./results', num_train_epochs=4, logging_steps=100, save_strategy=IntervalStrategy.NO,\r\n per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=100,\r\n weight_decay=0.01, logging_dir='./logs', fp16=True, bf16=False, learning_rate=5e-6, deepspeed='./ds_config_gpt_2.json')\r\n# gptneo trained in jaxh\r\n\r\nmodel = GPTRewardModel(\"EleutherAI/gpt-neo-2.7B\")\r\nload_checkpoint = True\r\nif load_checkpoint:\r\n model.load_state_dict(torch.load('ckpts/single_context_pairwise/model_fp16.pt'))\r\n#model.cuda()\r\n\r\n\r\ndata = []\r\ndataset_name = \"single_context_pairwise\"\r\nwith open(dataset_name + \".jsonl\", \"r\") as f:\r\n lines = f.readlines()\r\n for line in lines:\r\n loaded_line = json.loads(line)\r\n data.append(loaded_line)\r\n #data.append(loaded_line[\"prompt\"] + loaded_line[\"response\"])\r\nprint(\"Len data: \", len(data))\r\n\r\nmax_length = 1024\r\n#max_length = max([max(len(tokenizer.encode(text[\"chosen\"])), 
len(tokenizer.encode(text[\"rejected\"]))) for text in data])\r\nprint(\"Max length: {}\".format(max_length))\r\n\r\n\r\nclass PairwiseDataset(Dataset):\r\n def __init__(self, pairs, tokenizer, max_length):\r\n self.chosen_input_ids = []\r\n self.chosen_attn_masks = []\r\n self.rejected_input_ids = []\r\n self.rejected_attn_masks = []\r\n for pair in pairs:\r\n chosen, rejected = pair[\"chosen\"], pair[\"rejected\"]\r\n chosen_encodings_dict = tokenizer('<|startoftext|>' + chosen + '<|endoftext|>', truncation=True,\r\n max_length=max_length, padding=\"max_length\", return_tensors=\"pt\")\r\n rejected_encodings_dict = tokenizer('<|startoftext|>' + rejected + '<|endoftext|>', truncation=True,\r\n max_length=max_length, padding=\"max_length\", return_tensors=\"pt\")\r\n self.chosen_input_ids.append(chosen_encodings_dict['input_ids'])\r\n self.chosen_attn_masks.append(chosen_encodings_dict['attention_mask'])\r\n self.rejected_input_ids.append(rejected_encodings_dict['input_ids'])\r\n self.rejected_attn_masks.append(rejected_encodings_dict['attention_mask'])\r\n\r\n def __len__(self):\r\n return len(self.chosen_input_ids)\r\n\r\n def __getitem__(self, idx):\r\n return self.chosen_input_ids[idx], self.chosen_attn_masks[idx], self.rejected_input_ids[idx], self.rejected_attn_masks[idx]\r\n\r\ndef data_collator(data):\r\n return {'input_ids': torch.stack([f[0] for f in data] + [f[2] for f in data]),\r\n 'attention_mask': torch.stack([f[1] for f in data] + [f[3] for f in data])}\r\n\r\n\r\ndataset = PairwiseDataset(data, tokenizer, max_length=max_length)\r\ntrain_size = int(0.9 * len(dataset))\r\ntrain_dataset, val_dataset = random_split(dataset, [train_size, len(dataset) - train_size])\r\nPairwiseTrainer(model=model, args=training_args, train_dataset=train_dataset,\r\n eval_dataset=val_dataset, data_collator=data_collator).train()\r\n\r\n\r\nif torch.distributed.get_rank() == 0:\r\n print(\"SAVING MODEL\")\r\n dir_path = os.path.join(\"ckpts\", dataset_name)\r\n if not 
os.path.isdir(dir_path):\r\n\t os.mkdir(dir_path)\r\n torch.save(model.state_dict(), os.path.join(dir_path, \"model_fp16_8.pt\"))\r\n```\r\n\r\nYes loading works <= 6 gpus.\r\n\r\nGood point about saving in the wrong precision. I will check", "much better.\r\n\r\nAlso try first with a normal model of the same size? If it works just fine then it'd point to something being added with your code.\r\n\r\nIf there is problem with normal model then it's a different story..\r\n\r\nOne other thing to consider, is that if you resume from a saved deepspeed checkpoint, you can't change topology on fly, as it'll try to resume using the same sharded layout as the checkpoint was saved from. But if you were to try to change the topology on the existing DS checkpoint it'd normally fail to resume.\r\n\r\nSo typically in changing topology you need to extract the non-sharded weights and then start a new using those instead of using resume. Here since it appears you use zero-stage2 it's trivial, it's just the saved weights file as weights were never sharded in the first place (they do under stage3). so to test on topology change I'd move your `output_dir` elsewhere and simply pass the weights file as the `model_name_or_path` \r\n\r\nI am concerned that I'm wrote above is confusing, I'm just trying to guess what might be going wrong for you.", "Update: Indeed I was saving and loading fp16 weights when I meant to be saving/loading fp32. (Although I still do not understand why loading fp16 in the manner I do throws an OOM error).\r\n\r\nIn any case thanks for your help!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,672
1,672
NONE
null
### System Info - `transformers` version: 4.21.2 - Platform: Linux-5.10.135-122.509.amzn2.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.5 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? Hi all, I am modifying an arbitrary HF text model for reinforcement learning reward modeling by appending a scalar output head and overriding the forward method. As part of this procedure I'd prefer to retain the flexibility of using any model without committing to a particular model class (e.g. GPT2). I have not found a way to inherit the PreTrainedModel class while also retaining this flexibility so the result is just a nn.Module class. I find when I try to torch.load to continue training a reward model fine-tuned using GPTNeo2.7B as a base I OOM when with >6 gpus (A100). This is counter-intuitive to me as I would expect OOM issues in the opposite direction. To train the reward model I am using HF's deepspeed integration. Tagging @stas00 as deepspeed integration point of contact. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction ```python import os import pandas as pd import torch from torch.utils.data import Dataset, random_split from transformers import AutoTokenizer, TrainingArguments, Trainer, AutoModelForCausalLM, IntervalStrategy, AutoModel, AutoConfig, PreTrainedModel import json import deepspeed from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Model, PreTrainedModel, AutoModelForCausalLM, GPT2PreTrainedModel, GPT2Model from transformers.modeling_outputs import ModelOutput from torch import nn from torch.nn import Identity import torch.nn.functional as F import torch from dataclasses import dataclass from typing import Optional, Tuple class GPTRewardModel(nn.Module): def __init__(self, config): super().__init__() model = AutoModelForCausalLM.from_pretrained(config) self.config = model.config # gpt-neo models have hidden_size instead of n_embd self.config.n_embd = self.config.hidden_size if hasattr(self.config, "hidden_size") else self.config.n_embd self.transformer = model.transformer self.v_head = nn.Linear(self.config.n_embd, 1, bias=False) def forward( self, input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, mc_token_ids=None, lm_labels=None, mc_labels=None, return_dict=False, output_attentions=False, output_hidden_states=False, ): loss=None transformer_outputs = self.transformer( input_ids, past_key_values=past_key_values, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, ) hidden_states = transformer_outputs[0] rewards = self.v_head(hidden_states).squeeze(-1) return rewards model = GPTRewardModel("EleutherAI/gpt-neo-2.7B") if torch.distributed.get_rank() == 0: torch.save(model.state_dict(), "model_fp16.pt") model.load_state_dict(torch.load('model_fp16.pt')) ``` ```yaml { "train_batch_size": 8, "fp16": { "enabled": "auto", 
"min_loss_scale": 1, "loss_scale_window": 1000, "hysteresis": 2, "initial_scale_power": 32 }, "bf16": { "enabled": "auto" }, "zero_optimization": { "stage": 2, "offload_param": { "device": "none" }, "offload_optimizer": { "device": "none" }, "allgather_partitions": true, "allgather_bucket_size": 5e8, "contiguous_gradients": true }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": [ 0.9, 0.999 ], "eps": 1e-08 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": "auto", "warmup_num_steps": 100 } } } ``` To launch run `deepspeed --num_gpus=7 test_pretrained.py --deepspeed ds_config_gpt_2.json ` ### Expected behavior No OOM with more gpus
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20320/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20319/comments
https://api.github.com/repos/huggingface/transformers/issues/20319/events
https://github.com/huggingface/transformers/pull/20319
1,455,455,209
PR_kwDOCUB6oc5DOiqk
20,319
Pin TF 2.10 in docker file
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thansk!", "Merge now quickly to make push CI green" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Pin TF 2.10 in docker file We have to rebuild the push-ci image manually once this PR is approved.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20319/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20319/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20319", "html_url": "https://github.com/huggingface/transformers/pull/20319", "diff_url": "https://github.com/huggingface/transformers/pull/20319.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20319.patch", "merged_at": 1668792275000 }
https://api.github.com/repos/huggingface/transformers/issues/20318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20318/comments
https://api.github.com/repos/huggingface/transformers/issues/20318/events
https://github.com/huggingface/transformers/pull/20318
1,455,395,770
PR_kwDOCUB6oc5DOV9p
20,318
Fix flakey no_trainer test with seed
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" }, { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" }, { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR adds a seed param to the squad no trainer test to ensure reproducibility. Ran 5x times on single and multi GPU to ensure that the test will pass. Fixes # (issue) Closes https://github.com/huggingface/transformers/issues/19733 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20318/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20318", "html_url": "https://github.com/huggingface/transformers/pull/20318", "diff_url": "https://github.com/huggingface/transformers/pull/20318.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20318.patch", "merged_at": 1668789206000 }
https://api.github.com/repos/huggingface/transformers/issues/20317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20317/comments
https://api.github.com/repos/huggingface/transformers/issues/20317/events
https://github.com/huggingface/transformers/pull/20317
1,455,306,746
PR_kwDOCUB6oc5DOCjs
20,317
TF: future proof our keras imports
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Other than changing `save_attributes_to_hdf5_group` to `hdf5_format.save_attributes_to_hdf5_group`, no changes are needed now -- but they will be in v2.12, so might as well 🤷 ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
MEMBER
null
# What does this PR do? On the release notes for [TF 2.11](https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0), we can read: ``` tensorflow/python/keras code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of tensorflow.python.keras and use the public API with `from tensorflow import keras` or `import tensorflow as tf; tf.keras`. ``` On top of that, `hdf5_format` was moved from `keras.saving` to `keras.saving.legacy` in v2.11 (this one is not in the patch notes). This PR prepares us for v2.11 and beyond, while keeping retrocompatibility through gated imports. Note: I've tested the v2.11 imports locally :) (the CI is running against v2.10)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20317/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20317", "html_url": "https://github.com/huggingface/transformers/pull/20317", "diff_url": "https://github.com/huggingface/transformers/pull/20317.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20317.patch", "merged_at": 1668793128000 }
https://api.github.com/repos/huggingface/transformers/issues/20316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20316/comments
https://api.github.com/repos/huggingface/transformers/issues/20316/events
https://github.com/huggingface/transformers/pull/20316
1,455,141,219
PR_kwDOCUB6oc5DNeSw
20,316
Pin TF 2.10.1
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20316). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,675
1,668
COLLABORATOR
null
# What does this PR do? Pin TF 2.10.1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20316/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20316", "html_url": "https://github.com/huggingface/transformers/pull/20316", "diff_url": "https://github.com/huggingface/transformers/pull/20316.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20316.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20315/comments
https://api.github.com/repos/huggingface/transformers/issues/20315/events
https://github.com/huggingface/transformers/pull/20315
1,455,133,203
PR_kwDOCUB6oc5DNckK
20,315
Pin TF 2.10.0
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20315). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,675
1,668
COLLABORATOR
null
# What does this PR do? Pin TF 2.10.0
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20315/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20315", "html_url": "https://github.com/huggingface/transformers/pull/20315", "diff_url": "https://github.com/huggingface/transformers/pull/20315.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20315.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20314/comments
https://api.github.com/repos/huggingface/transformers/issues/20314/events
https://github.com/huggingface/transformers/issues/20314
1,455,064,391
I_kwDOCUB6oc5WuoVH
20,314
nested_detach of trainer fails in evaluation_loop for labels formatted for YOLOs model through feature extractor.
{ "login": "gauravrajguru", "id": 24417856, "node_id": "MDQ6VXNlcjI0NDE3ODU2", "avatar_url": "https://avatars.githubusercontent.com/u/24417856?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gauravrajguru", "html_url": "https://github.com/gauravrajguru", "followers_url": "https://api.github.com/users/gauravrajguru/followers", "following_url": "https://api.github.com/users/gauravrajguru/following{/other_user}", "gists_url": "https://api.github.com/users/gauravrajguru/gists{/gist_id}", "starred_url": "https://api.github.com/users/gauravrajguru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gauravrajguru/subscriptions", "organizations_url": "https://api.github.com/users/gauravrajguru/orgs", "repos_url": "https://api.github.com/users/gauravrajguru/repos", "events_url": "https://api.github.com/users/gauravrajguru/events{/privacy}", "received_events_url": "https://api.github.com/users/gauravrajguru/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting. There is nothing we can do without a code reproducer however. The `nested_detach` function is build to handle dictionaries of tensors, so if I pass it the example you give me above, I don't get an error.", "Thanks for reply. I can see latest PR managed this scenario and it's working with new version.", "@sgugger @vanpelt \r\n\r\nFacing anotjer issue while using COCO formatted data (please check actual sample above) with transformer trainer in eval_loop.\r\nIt is occurring while running the training job on multi gpu (DDP) set up.\r\nHowever, job succeeded on single gpu (without DDP) set up. Attaching call stack:\r\n\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py\", line 1501, in train\r\nreturn inner_training_loop(\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py\", line 1841, in _inner_training_loop\r\nself._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py\", line 2089, in _maybe_log_save_evaluate\r\nmetrics = self.evaluate(ignore_keys=ignore_keys_for_eval)\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py\", line 2796, in evaluate\r\noutput = eval_loop(\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py\", line 2986, in evaluation_loop\r\nlabels = self._nested_gather(labels)\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py\", line 3112, in _nested_gather\r\ntensors = distributed_concat(tensors)\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer_pt_utils.py\", line 191, in distributed_concat\r\nreturn type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer_pt_utils.py\", line 191, in\r\nreturn type(tensor)(distributed_concat(t, 
num_total_examples) for t in tensor)\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer_pt_utils.py\", line 193, in distributed_concat\r\noutput_tensors = [tensor.clone() for _ in range(dist.get_world_size())]\r\nFile \"/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer_pt_utils.py\", line 193, in\r\noutput_tensors = [tensor.clone() for _ in range(dist.get_world_size())]\r\nAttributeError: 'numpy.ndarray' object has no attribute 'clone'", "There is nothing we can do without a small reproducible example.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,673
1,673
NONE
null
### System Info @NielsRogge, @sgugger nested_detach of trainer fails in evaluation_loop for labels formatted for YOLOs model through feature extractor. OD label has dict format like 'labels': { 'boxes': tensor([[0.4575, 0.5120, 0.6450, 0.2726]]), 'class_labels': tensor([0]), 'image_id': tensor([8081]), 'area': tensor([172346.9688]), 'iscrowd': tensor([0]), 'orig_size': tensor([1086, 600]), 'size': tensor([1332, 736]) } nested_detach fails with error dict don't have detach method while taking back up of labels. ### Who can help? @NielsRogge, @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Defined above. ### Expected behavior Evalution loop should run properly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20314/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20313/comments
https://api.github.com/repos/huggingface/transformers/issues/20313/events
https://github.com/huggingface/transformers/pull/20313
1,454,978,874
PR_kwDOCUB6oc5DM6m7
20,313
Pin TensorFlow as new release breaks `pip install transformers["all"]`
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Merging as it unlocks the styling checks, will further fix the rest of the failing tests in a separate PR.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20313). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? This PR pins TensorFlow for now, as it seems there is no version of `tensorflow-text` compatible with it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20313", "html_url": "https://github.com/huggingface/transformers/pull/20313", "diff_url": "https://github.com/huggingface/transformers/pull/20313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20313.patch", "merged_at": 1668772636000 }
https://api.github.com/repos/huggingface/transformers/issues/20312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20312/comments
https://api.github.com/repos/huggingface/transformers/issues/20312/events
https://github.com/huggingface/transformers/issues/20312
1,454,952,186
I_kwDOCUB6oc5WuM76
20,312
Add eval and test for the example run_ner_no_trainer.py
{ "login": "fuzihaofzh", "id": 1419566, "node_id": "MDQ6VXNlcjE0MTk1NjY=", "avatar_url": "https://avatars.githubusercontent.com/u/1419566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fuzihaofzh", "html_url": "https://github.com/fuzihaofzh", "followers_url": "https://api.github.com/users/fuzihaofzh/followers", "following_url": "https://api.github.com/users/fuzihaofzh/following{/other_user}", "gists_url": "https://api.github.com/users/fuzihaofzh/gists{/gist_id}", "starred_url": "https://api.github.com/users/fuzihaofzh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fuzihaofzh/subscriptions", "organizations_url": "https://api.github.com/users/fuzihaofzh/orgs", "repos_url": "https://api.github.com/users/fuzihaofzh/repos", "events_url": "https://api.github.com/users/fuzihaofzh/events{/privacy}", "received_events_url": "https://api.github.com/users/fuzihaofzh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Evaluation is done at every epoch in this script, not sure what more you need. The `no_trainer` scripts are more barebone on purpose, to make it easy for users to customize them.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Evaluating NER performance with subword tokenization is complicated. It would be very instructive if you could include code that outputs one prediction for each token in the original corpus. This would have to address the behavior of tokenizers, such as splitting words with hyphens and not prepending them with ##, etc. \r\n\r\n As I understand it, the script currently performs evaluation on the subword tokens and not the restored versions." ]
1,668
1,689
1,672
CONTRIBUTOR
null
### Feature request Hi, thanks for the great work. I just go through the example "transformers/examples/pytorch/token-classification/", I found that run_ner.py has eval and test. However, for run_ner_no_trainer.py, there is no eval or test. I just wonder if it is possible to add these components? ### Motivation Enhance the example code ### Your contribution I can help to test the code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20312/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20311/comments
https://api.github.com/repos/huggingface/transformers/issues/20311/events
https://github.com/huggingface/transformers/issues/20311
1,454,929,394
I_kwDOCUB6oc5WuHXy
20,311
Seq2Seq question answering example script not compatible with the latest Seq2SeqTrainer class.
{ "login": "apohllo", "id": 40543, "node_id": "MDQ6VXNlcjQwNTQz", "avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apohllo", "html_url": "https://github.com/apohllo", "followers_url": "https://api.github.com/users/apohllo/followers", "following_url": "https://api.github.com/users/apohllo/following{/other_user}", "gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}", "starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apohllo/subscriptions", "organizations_url": "https://api.github.com/users/apohllo/orgs", "repos_url": "https://api.github.com/users/apohllo/repos", "events_url": "https://api.github.com/users/apohllo/events{/privacy}", "received_events_url": "https://api.github.com/users/apohllo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "To clarify the last comment (about the documentation) - these arguments in `predict` and `evaluate` are extracted from `gen_kwargs`, but they are not formal arguments of these methods.", "OK. It seems, I have not updated the code in the examples directory. The latest implementation of the trainer is valid.\r\nClosing the issue." ]
1,668
1,668
1,668
CONTRIBUTOR
null
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-4.18.0-372.16.1.el8.cyf.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.6 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: ``` python question-answering/run_seq2seq_qa.py --model_name_or_path google/mt5-base --dataset_name squad --context_column context --question_column question --answer_column answers --do_train --do_eval --evaluation_strategy steps --eval_steps 100 --learning_rate 1e-4 --num_train_epochs 4 --max_seq_length 384 --doc_stride 128 --eval_accumulation_steps 1 --predict_with_generate --output_dir output --per_device_train_batch_size 14 ``` During evaluation, the following error occurs: ``` Traceback (most recent call last): File "/net/people/plgrid/plgapohl/mt5-classification/question-answering/run_seq2seq_qa.py", line 720, in <module> main() File "/net/people/plgrid/plgapohl/mt5-classification/question-answering/run_seq2seq_qa.py", line 656, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer.py", line 1517, in train return inner_training_loop( File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer.py", line 1842, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer.py", line 2105, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/net/people/plgrid/plgapohl/mt5-classification/transformers/examples/pytorch/question-answering/trainer_seq2seq_qa.py", line 59, in evaluate output = eval_loop( File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer.py", line 2990, in evaluation_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/net/ascratch/people/plgapohl/python-3.9.6/lib/python3.9/site-packages/transformers/trainer_seq2seq.py", line 174, in prediction_step gen_kwargs = self._gen_kwargs.copy() AttributeError: 'QuestionAnsweringSeq2SeqTrainer' object has no attribute '_gen_kwargs' ``` ### Expected behavior There should be no error in evaluation. The reason for this error is the fact, that the `evaluate` method in the `QuestionAnsweringSeq2SeqTrainer`: 1. Does not accept `gen_kwargs`. 2. The value of that (missing) attribute is not assigned to the instance variable `self._gen_kwargs`. Yet in the `prediction_step` (line 174) in `Seq2SeqTrainer` it is expected that the value of that instance variable is set. The reason for that is the fact, that the `evaluate` method in `Seq2SeqTrainer` copies the `gen_kwargs` to the instance variable. BTW both `predict` and `evaluate` methods in `Seq2SeqTrainer` mention `max_length` and `num_beams` in the documentation of these methods, yet these arguments are no longer available.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20311/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20310/comments
https://api.github.com/repos/huggingface/transformers/issues/20310/events
https://github.com/huggingface/transformers/issues/20310
1,454,846,941
I_kwDOCUB6oc5WtzPd
20,310
Sentence-transformer: No such file or directory error
{ "login": "Rolv-Arild", "id": 8886402, "node_id": "MDQ6VXNlcjg4ODY0MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/8886402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rolv-Arild", "html_url": "https://github.com/Rolv-Arild", "followers_url": "https://api.github.com/users/Rolv-Arild/followers", "following_url": "https://api.github.com/users/Rolv-Arild/following{/other_user}", "gists_url": "https://api.github.com/users/Rolv-Arild/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rolv-Arild/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rolv-Arild/subscriptions", "organizations_url": "https://api.github.com/users/Rolv-Arild/orgs", "repos_url": "https://api.github.com/users/Rolv-Arild/repos", "events_url": "https://api.github.com/users/Rolv-Arild/events{/privacy}", "received_events_url": "https://api.github.com/users/Rolv-Arild/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed now.\r\n\r\nThe culprit is either the way the api works or sentence-transformers, but `sentence-transformers` is not great at dealing with missing files (it looks only for 1 file and assumes the rest is there, leading to the failure you are seeing).\r\n\r\nSince the API uses massive storage to handle all the models on the hub, there is periodical cleaning of unused files, and sometimes it happens that sentence-transformers doesn't access all the files at the same time, leading to only partial cleanup of sentence-transformers model files and to this error.\r\n\r\nHope you understand a bit better what happened.\r\nFor the fix we'll see about maybe adding something upstream, but can't make any promises.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I was also facing the same issue, so I changed the directory from \"_Pooling\" to \"1_Pooling\". That resolved the issue." ]
1,668
1,704
1,672
NONE
null
### System Info Using the sentence-transformer widget leads to the following error at https://huggingface.co/NbAiLab/nb-sbert: `[Errno 2] No such file or directory: '/data/NbAiLab_nb-sbert/1_Pooling/config.json'` I have checked all the config files, and can not find any references to this. I am able to load the model locally (from the HF repo), and do valid calculations. The only thing that does not work is the widget. I found this on the discussion forum: https://discuss.huggingface.co/t/sentence-similarity-demo-not-working/8711 Any idea about what is happening here? ### Who can help? @Narsil @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Open https://huggingface.co/NbAiLab/nb-sbert 2. In the widget, use Example 1 or fill in sentences 3. Press "compute" ### Expected behavior Widget providing sentence similarities
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20310/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20309/comments
https://api.github.com/repos/huggingface/transformers/issues/20309/events
https://github.com/huggingface/transformers/issues/20309
1,454,599,811
I_kwDOCUB6oc5Ws26D
20,309
Enable BAAI/AltCLIP model to better handle Chinese prompts
{ "login": "BAAI-OpenPlatform", "id": 107522723, "node_id": "U_kgDOBmiqow", "avatar_url": "https://avatars.githubusercontent.com/u/107522723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BAAI-OpenPlatform", "html_url": "https://github.com/BAAI-OpenPlatform", "followers_url": "https://api.github.com/users/BAAI-OpenPlatform/followers", "following_url": "https://api.github.com/users/BAAI-OpenPlatform/following{/other_user}", "gists_url": "https://api.github.com/users/BAAI-OpenPlatform/gists{/gist_id}", "starred_url": "https://api.github.com/users/BAAI-OpenPlatform/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BAAI-OpenPlatform/subscriptions", "organizations_url": "https://api.github.com/users/BAAI-OpenPlatform/orgs", "repos_url": "https://api.github.com/users/BAAI-OpenPlatform/repos", "events_url": "https://api.github.com/users/BAAI-OpenPlatform/events{/privacy}", "received_events_url": "https://api.github.com/users/BAAI-OpenPlatform/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,672
1,672
NONE
null
### Feature request We already have our bilingual CLIP model here: https://huggingface.co/BAAI/AltCLIP (currently private) Would you like to collaborate with us to merge it into transformers? Thanks a lot! ### Motivation Want to enable our bilingual AltCLIP model to handle Chinese prompts as well as English. ### Your contribution We can collaborate with you to work on it. We are familiar with our model, but it is not clear how to merge it into transformers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20309/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20308/comments
https://api.github.com/repos/huggingface/transformers/issues/20308/events
https://github.com/huggingface/transformers/issues/20308
1,454,169,484
I_kwDOCUB6oc5WrN2M
20,308
RAG performance on WebQuestion dataset lower than expected
{ "login": "lkfafds", "id": 118490459, "node_id": "U_kgDOBxAFWw", "avatar_url": "https://avatars.githubusercontent.com/u/118490459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lkfafds", "html_url": "https://github.com/lkfafds", "followers_url": "https://api.github.com/users/lkfafds/followers", "following_url": "https://api.github.com/users/lkfafds/following{/other_user}", "gists_url": "https://api.github.com/users/lkfafds/gists{/gist_id}", "starred_url": "https://api.github.com/users/lkfafds/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lkfafds/subscriptions", "organizations_url": "https://api.github.com/users/lkfafds/orgs", "repos_url": "https://api.github.com/users/lkfafds/repos", "events_url": "https://api.github.com/users/lkfafds/events{/privacy}", "received_events_url": "https://api.github.com/users/lkfafds/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only." ]
1,668
1,668
1,668
NONE
null
Hi, I recently fine-tuned the RAG model (based on Rag-token-base) on WebQuestion, but the EM score is only 28. The performance in the paper is 45. Do you have any saved models that can match the performance, or are there any tricks for the fine-tuning?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20308/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20307/comments
https://api.github.com/repos/huggingface/transformers/issues/20307/events
https://github.com/huggingface/transformers/pull/20307
1,453,883,005
PR_kwDOCUB6oc5DJKjq
20,307
Remove double brackets
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Ooops ! \r\n\r\nThank you for the fix !" ]
1,668
1,668
1,668
MEMBER
null
Fixes a small typo in the pipeline docs where there were two brackets.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20307/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20307", "html_url": "https://github.com/huggingface/transformers/pull/20307", "diff_url": "https://github.com/huggingface/transformers/pull/20307.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20307.patch", "merged_at": 1668792564000 }
https://api.github.com/repos/huggingface/transformers/issues/20306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20306/comments
https://api.github.com/repos/huggingface/transformers/issues/20306/events
https://github.com/huggingface/transformers/pull/20306
1,453,869,273
PR_kwDOCUB6oc5DJHnf
20,306
Organize pipelines
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
MEMBER
null
As suggested by @NielsRogge, this PR organizes the individual task-specific pipelines according to their modality since the list is getting quite long now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20306/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20306/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20306", "html_url": "https://github.com/huggingface/transformers/pull/20306", "diff_url": "https://github.com/huggingface/transformers/pull/20306.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20306.patch", "merged_at": 1668801985000 }
https://api.github.com/repos/huggingface/transformers/issues/20305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20305/comments
https://api.github.com/repos/huggingface/transformers/issues/20305/events
https://github.com/huggingface/transformers/pull/20305
1,453,815,201
PR_kwDOCUB6oc5DI73z
20,305
Implement Roberta PreLayerNorm
{ "login": "AndreasMadsen", "id": 505333, "node_id": "MDQ6VXNlcjUwNTMzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/505333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AndreasMadsen", "html_url": "https://github.com/AndreasMadsen", "followers_url": "https://api.github.com/users/AndreasMadsen/followers", "following_url": "https://api.github.com/users/AndreasMadsen/following{/other_user}", "gists_url": "https://api.github.com/users/AndreasMadsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/AndreasMadsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreasMadsen/subscriptions", "organizations_url": "https://api.github.com/users/AndreasMadsen/orgs", "repos_url": "https://api.github.com/users/AndreasMadsen/repos", "events_url": "https://api.github.com/users/AndreasMadsen/events{/privacy}", "received_events_url": "https://api.github.com/users/AndreasMadsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @ArthurZucker, thanks for looking at this. I fixed minor issues reported by the GitHub testing runners. However, the CircleCI job appears to have some configuration issues that I don't think I'm responsible for. Any insight would be appreciated :)", "While we're waiting for @ArthurZucker to come back from vacation next week, could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?", "@sgugger Thanks for the tip! I refreshed the OAuth and rebased the branch.", "@sgugger thanks for the help. I fixed the remaining issues, all tests appear to be passing now.", "I fixed everything that was mentioned. However, I couldn't remember/find what command to run to update the documentation files. I assume a test will fail and tell me.", "Lol, there appears to be some issue with a Linux distribution repository. I will make a force push later to refresh the CI. ", "Thanks for the fixes! Could you also resolve the merge conflicts! Will then ask for a final review from @sgugger ", "Normally `make fixup` or just `make style` should do the trick 😉 ", "Let's just remove the code to have the correct slice in the different tests and we can merge!", "@ArthurZucker Thanks, I appreciate it.", "@ArthurZucker As I mentioned before, similar comments were included in the RoBERTa testfile which is why I kept them. However, I have removed them now as you suggested.", "Appears to be an unrelated error: \r\n\r\n```\r\nFAILED examples/pytorch/test_accelerate_examples.py::ExamplesTestsNoTrainer::test_run_swag_no_trainer - AssertionError: 0.7 not greater than or equal to 0.8\r\n```\r\n\r\nI will try to rebase.", "Cool thanks for the addition! 🚀 " ]
1,668
1,671
1,671
CONTRIBUTOR
null
# What does this PR do? This PR implements Roberta PreLayerNorm as used in https://arxiv.org/abs/2202.08005, code provided at https://github.com/princeton-nlp/DinkyTrain. The model is equivalent to using the `--encoder-normalize-before` flag in fairseq. The addition of this model was discussed in https://github.com/huggingface/transformers/issues/19877. Note that checkpoints provided by https://arxiv.org/abs/2202.08005, such as https://huggingface.co/princeton-nlp/efficient_mlm_m0.40 are not valid as they assume a hacked RoBERTa model provided in https://github.com/princeton-nlp/DinkyTrain. Additionally, the checkpoints contain extra weights which are never used due to [a bug](https://github.com/princeton-nlp/DinkyTrain/blob/main/huggingface/modeling_roberta_prelayernorm.py#L185) in their conversion code. The conversion script provided in this PR fixes those issues. I'm not sure what the appropriate migration is for potentially fixing those checkpoints. For now, I have uploaded the corrected checkpoints to https://huggingface.co/andreasmadsen/efficient_mlm_m0.40. Fixes #19877 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? /cc @sgugger who commented on the original issue https://github.com/huggingface/transformers/issues/19877
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20305/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20305/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20305", "html_url": "https://github.com/huggingface/transformers/pull/20305", "diff_url": "https://github.com/huggingface/transformers/pull/20305.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20305.patch", "merged_at": 1671438617000 }
https://api.github.com/repos/huggingface/transformers/issues/20304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20304/comments
https://api.github.com/repos/huggingface/transformers/issues/20304/events
https://github.com/huggingface/transformers/pull/20304
1,453,742,503
PR_kwDOCUB6oc5DIsEg
20,304
fix device issue
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@alaradirik Only for those post-processing with this block\r\n```python\r\n if isinstance(target_sizes, List):\r\n img_h = torch.Tensor([i[0] for i in target_sizes])\r\n img_w = torch.Tensor([i[1] for i in target_sizes])\r\n```\r\nhave to be fixed.\r\n\r\nIf there is only `img_h, img_w = target_sizes.unbind(1)`, I think it is fine (will be on the correct device already).\r\n\r\nDo you have a link to a place that you think need to be fixed but not done in this PR yet?", "> @alaradirik Only for those post-processing with this block\r\n> \r\n> ```python\r\n> if isinstance(target_sizes, List):\r\n> img_h = torch.Tensor([i[0] for i in target_sizes])\r\n> img_w = torch.Tensor([i[1] for i in target_sizes])\r\n> ```\r\n> \r\n> have to be fixed.\r\n\r\n\r\n@ydshieh that makes sense, and we are assuming that the input `target_sizes` is on the correct device, right?", "\r\n> @ydshieh that makes sense, and we are assuming that the input `target_sizes` is on the correct device, right?\r\n\r\nYes, as so far we don't have CI failure from that one, so we should be good." ]
1,668
1,669
1,669
COLLABORATOR
null
# What does this PR do? When this block is run ``` if isinstance(target_sizes, List): img_h = torch.Tensor([i[0] for i in target_sizes]) img_w = torch.Tensor([i[1] for i in target_sizes]) ``` `scale_fct` (defined a few lines below) is always on `cpu`. We need to put it on the proper device.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20304/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20304", "html_url": "https://github.com/huggingface/transformers/pull/20304", "diff_url": "https://github.com/huggingface/transformers/pull/20304.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20304.patch", "merged_at": 1669021946000 }
https://api.github.com/repos/huggingface/transformers/issues/20303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20303/comments
https://api.github.com/repos/huggingface/transformers/issues/20303/events
https://github.com/huggingface/transformers/issues/20303
1,453,658,816
I_kwDOCUB6oc5WpRLA
20,303
BLOOM past_key_values bug
{ "login": "rpryzant", "id": 8572027, "node_id": "MDQ6VXNlcjg1NzIwMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/8572027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rpryzant", "html_url": "https://github.com/rpryzant", "followers_url": "https://api.github.com/users/rpryzant/followers", "following_url": "https://api.github.com/users/rpryzant/following{/other_user}", "gists_url": "https://api.github.com/users/rpryzant/gists{/gist_id}", "starred_url": "https://api.github.com/users/rpryzant/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rpryzant/subscriptions", "organizations_url": "https://api.github.com/users/rpryzant/orgs", "repos_url": "https://api.github.com/users/rpryzant/repos", "events_url": "https://api.github.com/users/rpryzant/events{/privacy}", "received_events_url": "https://api.github.com/users/rpryzant/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing (fixed by https://github.com/huggingface/transformers/pull/20213)" ]
1,668
1,668
1,668
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import torch from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, AutoModel, MT5ForConditionalGeneration from transformers import AutoModelForCausalLM, AutoTokenizer model_name = 'bigscience/bloomz-7b1' model = AutoModelForCausalLM.from_pretrained( model_name, device_map='auto') print(model.config) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = 'this is a' encoded_input = tokenizer(prompt, return_tensors='pt') input_ids = inputs['input_ids'] output = model(input_ids=encoded_input['input_ids'].cuda()) print([x.size() for x in output.past_key_values[0]]) ### Expected behavior As per the documentation and generation interface (https://huggingface.co/docs/transformers/main_classes/output) past_key_values should have tuples of 4-dimensional tensors with shape (batch_size, num_heads, sequence_length, embed_size_per_head). The BLOOM model returns 3-dimensional tensors, for example the first tuple in the example code has tensors of shape ( torch.Size([32, 128, 3]), torch.Size([32, 3, 128]) ). This means that the code crashes when you try to use some decoding algorithms, for example the new contrastive decoding algorithm.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20303/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20302/comments
https://api.github.com/repos/huggingface/transformers/issues/20302/events
https://github.com/huggingface/transformers/pull/20302
1,453,610,702
PR_kwDOCUB6oc5DIPeo
20,302
remove two tokens that should not be suppressed
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? As mentioned in #20123, two tokens `'` and `-` were suppressed. This was probably due to a [later commit](https://github.com/openai/whisper/commit/8cf36f3508c9acd341a45eb2364239a3d81458b9) that appeared after I started working on it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20302/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20302", "html_url": "https://github.com/huggingface/transformers/pull/20302", "diff_url": "https://github.com/huggingface/transformers/pull/20302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20302.patch", "merged_at": 1668758262000 }
https://api.github.com/repos/huggingface/transformers/issues/20301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20301/comments
https://api.github.com/repos/huggingface/transformers/issues/20301/events
https://github.com/huggingface/transformers/pull/20301
1,453,583,537
PR_kwDOCUB6oc5DIJpI
20,301
Fix blender bot missleading doc
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Fixes #19938, indeed the example is not correct.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20301/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20301", "html_url": "https://github.com/huggingface/transformers/pull/20301", "diff_url": "https://github.com/huggingface/transformers/pull/20301.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20301.patch", "merged_at": 1668758227000 }
https://api.github.com/repos/huggingface/transformers/issues/20300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20300/comments
https://api.github.com/repos/huggingface/transformers/issues/20300/events
https://github.com/huggingface/transformers/pull/20300
1,453,468,707
PR_kwDOCUB6oc5DHw4a
20,300
[bnb] Simplifies slow test
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR simplifies a slow test for `bnb`. In fact, you can easily retrieve the devices of the model using `set(model.hf_device_map.values())` cc @sgugger Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20300/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20300", "html_url": "https://github.com/huggingface/transformers/pull/20300", "diff_url": "https://github.com/huggingface/transformers/pull/20300.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20300.patch", "merged_at": 1668697162000 }
https://api.github.com/repos/huggingface/transformers/issues/20299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20299/comments
https://api.github.com/repos/huggingface/transformers/issues/20299/events
https://github.com/huggingface/transformers/issues/20299
1,453,464,050
I_kwDOCUB6oc5Wohny
20,299
Bug in contrastive search with GPT2-decoders crossattention using batches
{ "login": "josh-oo", "id": 22002584, "node_id": "MDQ6VXNlcjIyMDAyNTg0", "avatar_url": "https://avatars.githubusercontent.com/u/22002584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josh-oo", "html_url": "https://github.com/josh-oo", "followers_url": "https://api.github.com/users/josh-oo/followers", "following_url": "https://api.github.com/users/josh-oo/following{/other_user}", "gists_url": "https://api.github.com/users/josh-oo/gists{/gist_id}", "starred_url": "https://api.github.com/users/josh-oo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josh-oo/subscriptions", "organizations_url": "https://api.github.com/users/josh-oo/orgs", "repos_url": "https://api.github.com/users/josh-oo/repos", "events_url": "https://api.github.com/users/josh-oo/events{/privacy}", "received_events_url": "https://api.github.com/users/josh-oo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @josh-oo\r\n\r\nI believe that issue has been fixed in the latest version as you can see in [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q25nXzjuvaYHMTtosH2VVzDuSk877Ta6)\r\n\r\nAlso I believe in the reproduction of the same error without generate, you shouldn't have `batch_size*top_k`, instead the following should work\r\n```\r\nencoder_output = (torch.rand(2, 7, 768), None) # shape: (batch_size, seq_len, hidden_dim)\r\nencoder_attention_mask = torch.ones(2, 7) # shape: (batch_size, seq_len)\r\ndecoder_input_ids = torch.tensor([[0],[0]]) # shape: (batch_size, 1)\r\n\r\nmodel(decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_output, attention_mask=encoder_attention_mask)\r\n```", "Hey @josh-oo 👋 \r\n\r\nSince the release of v4.24 we have standardized a few internals of contrastive search, which may have fixed existing bugs (like @kiansierra pointed out, thank you for pitching in!)", "Hi @kiansierra and @gante , thanks for the quick reply. The new version seems to solve the problem.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.9.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Load the models and prepare example inputs: ```python import torch from transformers import EncoderDecoderModel, AutoTokenizer model = EncoderDecoderModel.from_encoder_decoder_pretrained( "bert-base-uncased", "gpt2" ) in_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") out_tokenizer = AutoTokenizer.from_pretrained("gpt2") model.config.decoder_start_token_id = out_tokenizer.bos_token_id inputs = in_tokenizer(["This is a simple test","This is a simple test"], return_tensors="pt") input_ids = inputs['input_ids'] ``` Calling the generate method: ```python model.generate(input_ids, top_k=4, penalty_alpha=0.6) ``` Leads to a Runtime Error in GPTs crossattention layer: ``` [...] File "[...]/transformers/generation_utils.py", line 1511, in generate return self.contrastive_search( [...] 
File "[..]/transformers/models/gpt2/modeling_gpt2.py", line 895, in forward outputs = block( File "[..]/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "[..]/transformers/models/gpt2/modeling_gpt2.py", line 417, in forward cross_attn_outputs = self.crossattention( File "[...]/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) File "[...]/transformers/models/gpt2/modeling_gpt2.py", line 182, in _attn attn_weights = torch.matmul(query, key.transpose(-1, -2)) RuntimeError: The size of tensor a (8) must match the size of tensor b (2) at non-singleton dimension 0 ``` # Notes: I have tried some combinations. The error appears only with plain gpt as decoder. And only when using batches greater than 1 . Beam search works although it also sends batches of size batch_size*num_beams to the gpt model. Therefore the error is probably in the contrastive search. To produce the same Error without calling generate: ```python encoder_output = (torch.rand(2, 7, 768), None) # shape: (batch_size, seq_len, hidden_dim) encoder_attention_mask = torch.ones(8, 7) # shape: (batch_size*top_k, seq_len) decoder_input_ids = torch.tensor([[0],[0]]) # shape: (batch_size, 1) model(decoder_input_ids=decoder_input_ids, encoder_outputs=encoder_output, attention_mask=encoder_attention_mask) ``` ### Expected behavior Calling contrastive search method with batches should return batches of generated ids like beam search.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20299/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20298/comments
https://api.github.com/repos/huggingface/transformers/issues/20298/events
https://github.com/huggingface/transformers/pull/20298
1,453,454,259
PR_kwDOCUB6oc5DHtxU
20,298
Deal with `ImageProcessor`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,677
1,668
COLLABORATOR
null
# What does this PR do? As @amyeroberts add `ImageProcessor`, we need an update in tiny model creation script, otherwise, it won't be returned by `convert_processors`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20298/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20298", "html_url": "https://github.com/huggingface/transformers/pull/20298", "diff_url": "https://github.com/huggingface/transformers/pull/20298.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20298.patch", "merged_at": 1668714587000 }
https://api.github.com/repos/huggingface/transformers/issues/20297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20297/comments
https://api.github.com/repos/huggingface/transformers/issues/20297/events
https://github.com/huggingface/transformers/pull/20297
1,453,428,107
PR_kwDOCUB6oc5DHoIP
20,297
Add Transformers textbox changes
{ "login": "Xiaoxue-xx", "id": 71930531, "node_id": "MDQ6VXNlcjcxOTMwNTMx", "avatar_url": "https://avatars.githubusercontent.com/u/71930531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Xiaoxue-xx", "html_url": "https://github.com/Xiaoxue-xx", "followers_url": "https://api.github.com/users/Xiaoxue-xx/followers", "following_url": "https://api.github.com/users/Xiaoxue-xx/following{/other_user}", "gists_url": "https://api.github.com/users/Xiaoxue-xx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Xiaoxue-xx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Xiaoxue-xx/subscriptions", "organizations_url": "https://api.github.com/users/Xiaoxue-xx/orgs", "repos_url": "https://api.github.com/users/Xiaoxue-xx/repos", "events_url": "https://api.github.com/users/Xiaoxue-xx/events{/privacy}", "received_events_url": "https://api.github.com/users/Xiaoxue-xx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20297). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20297/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20297", "html_url": "https://github.com/huggingface/transformers/pull/20297", "diff_url": "https://github.com/huggingface/transformers/pull/20297.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20297.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20296/comments
https://api.github.com/repos/huggingface/transformers/issues/20296/events
https://github.com/huggingface/transformers/pull/20296
1,453,383,507
PR_kwDOCUB6oc5DHeZb
20,296
[Table Transformer] Add resources
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? This PR adds resources for Table Transformer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20296/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20296", "html_url": "https://github.com/huggingface/transformers/pull/20296", "diff_url": "https://github.com/huggingface/transformers/pull/20296.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20296.patch", "merged_at": 1669051473000 }
https://api.github.com/repos/huggingface/transformers/issues/20295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20295/comments
https://api.github.com/repos/huggingface/transformers/issues/20295/events
https://github.com/huggingface/transformers/pull/20295
1,453,369,864
PR_kwDOCUB6oc5DHbZG
20,295
Add GIT (GenerativeImage2Text)
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger PR is ready for a final review :) only remaining issue is that we need to update `from_pretrained` in `ProcessorMixin` to handle multiple image processors", "Hi, is it possible to finetune git with transformers?" ]
1,668
1,672
1,672
CONTRIBUTOR
null
# What does this PR do? This PR implements GIT, short for GenerativeImage2Text. The model is a decoder-only Transformer conditioned on CLIP patch tokens + text tokens for tasks like image captioning and VQA. To do: - [x] add support for user-provided attention_mask - [x] fix model_input_names of processor, see #20549 - [x] fix issue where GitProcessor seems to instantiate a `VideoMAEImageProcessor` by default - [x] transfer checkpoints to `microsoft`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20295/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20295", "html_url": "https://github.com/huggingface/transformers/pull/20295", "diff_url": "https://github.com/huggingface/transformers/pull/20295.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20295.patch", "merged_at": 1672751839000 }
https://api.github.com/repos/huggingface/transformers/issues/20294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20294/comments
https://api.github.com/repos/huggingface/transformers/issues/20294/events
https://github.com/huggingface/transformers/pull/20294
1,453,140,032
PR_kwDOCUB6oc5DGpOM
20,294
Fixing the doctests failures.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20294/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20294", "html_url": "https://github.com/huggingface/transformers/pull/20294", "diff_url": "https://github.com/huggingface/transformers/pull/20294.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20294.patch", "merged_at": 1668694413000 }
https://api.github.com/repos/huggingface/transformers/issues/20293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20293/comments
https://api.github.com/repos/huggingface/transformers/issues/20293/events
https://github.com/huggingface/transformers/pull/20293
1,453,132,146
PR_kwDOCUB6oc5DGngY
20,293
Add slack report button for Example test
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
COLLABORATOR
null
# What does this PR do? Add slack report button for Example test. - We just need to specify device in the artifact file names
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20293/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20293/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20293", "html_url": "https://github.com/huggingface/transformers/pull/20293", "diff_url": "https://github.com/huggingface/transformers/pull/20293.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20293.patch", "merged_at": 1668696900000 }
https://api.github.com/repos/huggingface/transformers/issues/20292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20292/comments
https://api.github.com/repos/huggingface/transformers/issues/20292/events
https://github.com/huggingface/transformers/pull/20292
1,453,122,795
PR_kwDOCUB6oc5DGleD
20,292
Fix longformer onnx broken export
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yes, I would add the support for Longformer in `optimum`, and the tests there as well.", "Thanks for iterating @fxmarty ! \r\n\r\nSince @ydshieh has also approved, gently pinging @sgugger for final approval :)", "Hello @sgugger @lewtun @ ydshieh @fxmarty : I am stuck with the same issue. Thanks for the fix. This may be a noob question. But wanted to check when would this change be reflected in the PyPi package ? Is it during the next release ? If so do we know when would that be happening ?", "@adithya1111 Somewhere in the next week if I remember correctly.", "Hello @adithya1111 yes this will be in the next release, feel free to try the main branch meanwhile. In any case, I would advise you to be very careful with the exported ONNX model, and to check that the outputs are on par with PyTorch for your target sequence length. You can possibly edit the ONNX export code if you want to use the exported ONNX with a different sequence length, as explained in my messages above.\r\n\r\nFor reference: https://github.com/huggingface/optimum/issues/503", "Thanks a lot for the comments @fxmarty @ydshieh . Another question. I used the default parameters when training. So would that mean the Global Attention Mask is None ? I see that we are now setting the global_attention_mask[:, ::2] = 1 . I assume here we are making every second token global. 
Could this lead to a discrepancy ?\r\n\r\nMy original models results are `logits=tensor([[[ 0.6209, 0.0719, 0.1107, -0.5316],\r\n [ 3.0321, -0.2787, -0.6460, -2.5359],\r\n [ 2.6904, 0.1169, -0.7495, -2.8346],\r\n [ 0.6474, 0.0761, 0.1041, -0.5438]]]`\r\n\r\nAnd my ONNX converted predictions are `[array([[[ 0.49600145, 0.08062335, 0.12902021, -0.4010917 ],\r\n [ 3.0400352 , -0.34643874, -0.6276542 , -2.444679 ],\r\n [ 2.158992 , 0.02124629, -0.5462518 , -2.094074 ],\r\n [ 0.6290194 , 0.06919068, 0.10753635, -0.5197539 ]]],\r\n dtype=float32)]`\r\n\r\nIts close but there are some discrepancies . PFB my config file for my model \r\n\r\n`{\r\n \"_name_or_path\": \"/opt/ml/input/data/model-base\",\r\n \"architectures\": [\r\n \"LongformerForTokenClassification\"\r\n ],\r\n \"attention_mode\": \"longformer\",\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"attention_window\": [\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512\r\n ],\r\n \"bos_token_id\": 0,\r\n \"eos_token_id\": 2,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\",\r\n \"2\": \"LABEL_2\",\r\n \"3\": \"LABEL_3\"\r\n },\r\n \"ignore_attention_mask\": false,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1,\r\n \"LABEL_2\": 2,\r\n \"LABEL_3\": 3\r\n },\r\n \"layer_norm_eps\": 1e-05,\r\n \"max_position_embeddings\": 4098,\r\n \"model_type\": \"longformer\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 1,\r\n \"position_embedding_type\": \"absolute\",\r\n \"sep_token_id\": 2,\r\n \"torch_dtype\": \"float32\",\r\n \"transformers_version\": \"4.9.1\",\r\n \"type_vocab_size\": 1,\r\n \"use_cache\": true,\r\n \"vocab_size\": 50265\r\n}\r\n`", "@adithya1111 Could you open an issue in 
https://github.com/huggingface/optimum/issues with a reproducible code? ONNX export through `transformers.onnx` will eventually depend on `optimum.exporters` so we can track the issue there.\r\n\r\nThanks!", "Created a new issue. Thanks", "@sgugger @fxmarty @ydshieh @lewtun : Does the latest release fix this issue ? ", "With the release you will have no error at the export & running the ONNX model. However, following the discussion above (see the closed comments), and as well in https://github.com/huggingface/transformers/issues/20275 , https://github.com/huggingface/optimum/issues/503 , https://github.com/huggingface/optimum/issues/505 , you can expect to have non-meaningful output running the ONNX model with sensibly different sequence length than the example provided to `torch.onnx.export` during the conversion.\r\n\r\nThis is WIP to add options to customize more the export, refer to https://github.com/huggingface/optimum/pull/522" ]
1,668
1,669
1,669
COLLABORATOR
null
This PR fixes the ONNX export of longformer, that was **silently** broken for several cases: * the export registers `padding_len > 0` as a constant equal to `True`, hence during inference in the dynamic case `padding_len == 0`, we would still go through the path `padding_len > 0` that would then contain negative indexing making some ONNX nodes fail (gather). This PR fixes the negative indexes. * the export registers `hidden_states.size(1) == window_overlap * 2:` as a constant equal `True` during the export, hence using the converted ONNX model was failing when the `input_ids` length was strictly greater than `attention_window` (case where the `else` path should be taken). This PR removes the path `hidden_states.size(1) == window_overlap * 2:`, since the other path can handle this case as well. Had to run `make fix-copies` than modified led model as well. @michaelbenayoun @lewisbails Where should I add tests for this? Optimum? ## Before submitting - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20292/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20292", "html_url": "https://github.com/huggingface/transformers/pull/20292", "diff_url": "https://github.com/huggingface/transformers/pull/20292.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20292.patch", "merged_at": 1669133239000 }
https://api.github.com/repos/huggingface/transformers/issues/20291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20291/comments
https://api.github.com/repos/huggingface/transformers/issues/20291/events
https://github.com/huggingface/transformers/pull/20291
1,453,121,131
PR_kwDOCUB6oc5DGlHa
20,291
[wip: testing doc-builder]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,668
1,671
1,671
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20291", "html_url": "https://github.com/huggingface/transformers/pull/20291", "diff_url": "https://github.com/huggingface/transformers/pull/20291.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20291.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20290/comments
https://api.github.com/repos/huggingface/transformers/issues/20290/events
https://github.com/huggingface/transformers/issues/20290
1,452,803,522
I_kwDOCUB6oc5WmAXC
20,290
Memory issue with OPT models when given long input sequences
{ "login": "hxiaoyang", "id": 98200137, "node_id": "U_kgDOBdpqSQ", "avatar_url": "https://avatars.githubusercontent.com/u/98200137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hxiaoyang", "html_url": "https://github.com/hxiaoyang", "followers_url": "https://api.github.com/users/hxiaoyang/followers", "following_url": "https://api.github.com/users/hxiaoyang/following{/other_user}", "gists_url": "https://api.github.com/users/hxiaoyang/gists{/gist_id}", "starred_url": "https://api.github.com/users/hxiaoyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hxiaoyang/subscriptions", "organizations_url": "https://api.github.com/users/hxiaoyang/orgs", "repos_url": "https://api.github.com/users/hxiaoyang/repos", "events_url": "https://api.github.com/users/hxiaoyang/events{/privacy}", "received_events_url": "https://api.github.com/users/hxiaoyang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "```\r\n compute_logprobs(int(sys.argv[3]), int(sys.argv[4]), sys.argv[2], model_name=sys.argv[1])\r\n```\r\n \r\nWhat are those values, and what do they correspond to ?", "Hi @xiaoyangnickhu 👋 This seems business as usual for large models and a forum question for the PyTorch experts.\r\n\r\nAs per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗", "My bad... Thanks!" ]
1,668
1,668
1,668
NONE
null
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-4.18.0-305.45.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @gante @patrickvonplaten @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When the input sequence is long, `opt-30b` and `opt-66b` gave the following memory error: ``` Traceback (most recent call last): File "/home/x/script.py", line 189, in <module> main() File "/home/x/script.py", line 186, in main compute_logprobs(int(sys.argv[3]), int(sys.argv[4]), sys.argv[2], model_name=sys.argv[1]) File "/home/x/script.py", line 170, in compute_logprobs logits = model(input_ids).logits File "/sw/pkgs/arc/python/3.9.12/pytorch/1.12.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/x/hf/lib/python3.9/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File "/home/x/hf/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 929, in forward outputs = self.model.decoder( File "/sw/pkgs/arc/python/3.9.12/pytorch/1.12.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/x/hf/lib/python3.9/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File 
"/home/x/hf/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 693, in forward layer_outputs = decoder_layer( File "/sw/pkgs/arc/python/3.9.12/pytorch/1.12.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/x/hf/lib/python3.9/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File "/home/x/hf/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 346, in forward hidden_states = self.fc1(hidden_states) File "/sw/pkgs/arc/python/3.9.12/pytorch/1.12.1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/x/hf/lib/python3.9/site-packages/accelerate/hooks.py", line 148, in new_forward output = old_forward(*args, **kwargs) File "/home/x/hf/lib/python3.9/site-packages/bitsandbytes/nn/modules.py", line 256, in forward out = bnb.matmul(x, self.weight, state=self.state) File "/home/x/hf/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 410, in matmul return MatMul8bitLt.apply(A, B, out, state) File "/home/x/hf/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py", line 328, in forward out32, Sout32 = F.igemmlt(C32A, state.CxB, SA, state.SB) File "/home/nickhu/hf/lib/python3.9/site-packages/bitsandbytes/functional.py", line 1332, in igemmlt out, Sout = get_transform_buffer( File "/home/x/hf/lib/python3.9/site-packages/bitsandbytes/functional.py", line 294, in get_transform_buffer return init_func((rows, cols), dtype=dtype, device=device), state RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 44.37 GiB total capacity; 42.44 GiB already allocated; 40.50 MiB free; 43.23 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` To reproduce, simply call `model(input)` on some long input sequence. Additional information: - My total GPU memory: `2*42 GB` for `opt-30b` and `4*42 GB` for `opt-66b`. - Also, the same inputs did not cause errors in the smaller opt models. ### Expected behavior No `CUDA out of memory` error. How can I fix this? Any help is appreciated!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20290/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20289/comments
https://api.github.com/repos/huggingface/transformers/issues/20289/events
https://github.com/huggingface/transformers/pull/20289
1,452,583,222
PR_kwDOCUB6oc5DE1dS
20,289
set the default cache_enable to True, aligned with the default value …
{ "login": "sywangyi", "id": 36058628, "node_id": "MDQ6VXNlcjM2MDU4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sywangyi", "html_url": "https://github.com/sywangyi", "followers_url": "https://api.github.com/users/sywangyi/followers", "following_url": "https://api.github.com/users/sywangyi/following{/other_user}", "gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions", "organizations_url": "https://api.github.com/users/sywangyi/orgs", "repos_url": "https://api.github.com/users/sywangyi/repos", "events_url": "https://api.github.com/users/sywangyi/events{/privacy}", "received_events_url": "https://api.github.com/users/sywangyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,668
1,668
1,668
CONTRIBUTOR
null
…in pytorch cpu/cuda amp autocast Signed-off-by: Wang, Yi A <yi.a.wang@intel.com> - trainer: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20289/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20289", "html_url": "https://github.com/huggingface/transformers/pull/20289", "diff_url": "https://github.com/huggingface/transformers/pull/20289.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20289.patch", "merged_at": 1668694867000 }
https://api.github.com/repos/huggingface/transformers/issues/20288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20288/comments
https://api.github.com/repos/huggingface/transformers/issues/20288/events
https://github.com/huggingface/transformers/pull/20288
1,452,539,146
PR_kwDOCUB6oc5DEsn7
20,288
Added FAN Models
{ "login": "kiansierra", "id": 47116198, "node_id": "MDQ6VXNlcjQ3MTE2MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/47116198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kiansierra", "html_url": "https://github.com/kiansierra", "followers_url": "https://api.github.com/users/kiansierra/followers", "following_url": "https://api.github.com/users/kiansierra/following{/other_user}", "gists_url": "https://api.github.com/users/kiansierra/gists{/gist_id}", "starred_url": "https://api.github.com/users/kiansierra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiansierra/subscriptions", "organizations_url": "https://api.github.com/users/kiansierra/orgs", "repos_url": "https://api.github.com/users/kiansierra/repos", "events_url": "https://api.github.com/users/kiansierra/events{/privacy}", "received_events_url": "https://api.github.com/users/kiansierra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20288). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20288). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20288). All of your documentation changes will be reflected on that endpoint.", "Hi @NielsRogge , thanks for your review.\r\nI believe the current tests that are failing are not due to this PR (mostly because the failure makes reference to a file not modified by this PR) (torch tests passed in a previous run).\r\nPlease let me know if there any additional tasks to complete" ]
1,668
1,669
1,669
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #17234 Implements the FAN Models described in this [paper](https://arxiv.org/pdf/2204.12451.pdf) and available in the following [github repo](https://github.com/NVlabs/FAN), Additionally this repo has some of the weights available as described in their README file. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Tasks Completed From those provided in the [add new model](https://huggingface.co/docs/transformers/add_new_model) contribute section - [X] (Optional) Understood the model’s theoretical aspects - [X] Prepared 🤗 Transformers dev environment - [X] Set up debugging environment of the original repository - [X] Created script that successfully runs the forward() pass using the original repository and checkpoint (Available in Demo Colab) - [X] Successfully added the model skeleton to 🤗 Transformers - [X] Successfully converted original checkpoint to 🤗 Transformers checkpoint - [X] Successfully ran forward() pass in 🤗 Transformers that gives identical output to original 
checkpoint - [X] Finished model tests in 🤗 Transformers - [X] Successfully added tokenizer in 🤗 Transformers - [X] Run end-to-end integration tests - [X] Finished docs - [X] Uploaded model weights to the Hub - [X] Submitted the pull request - [X] (Optional) Added a demo notebook [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/10aCWtEPpRD2X251EiCNemjhvxmsQhGCl) ## Model files migration Ideally I believe the different model files for these architectures should be hosted under the [NVIDIA](https://huggingface.co/nvidia) organization, instead of my own [personal space](https://huggingface.co/ksmcg). ## Thank you note It has been a very enriching experience to migrate these models. I've learned a lot while developing under the constraints imposed by this library that provide such a great consistent user experience, from the tests, PreTrainedModel class, the cookiecuter template.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20288/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20288", "html_url": "https://github.com/huggingface/transformers/pull/20288", "diff_url": "https://github.com/huggingface/transformers/pull/20288.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20288.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20287/comments
https://api.github.com/repos/huggingface/transformers/issues/20287/events
https://github.com/huggingface/transformers/issues/20287
1,452,198,548
I_kwDOCUB6oc5WjsqU
20,287
Flan-T5-XXL generates non-sensical text when load_in_8bit=True
{ "login": "jimmy-marmalade", "id": 85194663, "node_id": "MDQ6VXNlcjg1MTk0NjYz", "avatar_url": "https://avatars.githubusercontent.com/u/85194663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jimmy-marmalade", "html_url": "https://github.com/jimmy-marmalade", "followers_url": "https://api.github.com/users/jimmy-marmalade/followers", "following_url": "https://api.github.com/users/jimmy-marmalade/following{/other_user}", "gists_url": "https://api.github.com/users/jimmy-marmalade/gists{/gist_id}", "starred_url": "https://api.github.com/users/jimmy-marmalade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jimmy-marmalade/subscriptions", "organizations_url": "https://api.github.com/users/jimmy-marmalade/orgs", "repos_url": "https://api.github.com/users/jimmy-marmalade/repos", "events_url": "https://api.github.com/users/jimmy-marmalade/events{/privacy}", "received_events_url": "https://api.github.com/users/jimmy-marmalade/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "Hi @jimmy-marmalade \r\nThanks a lot for raising this point. Note that `int8` quantization is done in 2 stages, it first converts the model in `float16` and uses the `fp16` model to quantize it in `8bit`. If you try to load and run the model in `fp16` you also get gibberish output:\r\n```\r\nimport torch\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"./flan-t5-xxl\")\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"./flan-t5-xxl\", torch_dtype=torch.float16, device_map=\"auto\")\r\n\r\ninput_text = \"translate English to German: How old are you?\"\r\ninput_ids = tokenizer(input_text, return_tensors=\"pt\").input_ids.to(\"cuda\")\r\n\r\noutputs = model.generate(input_ids, max_length=512)\r\nprint(tokenizer.decode(outputs[0]))\r\n>>> <pad> How old are die Sie? Ihr Mutter?tat?ztlich, rezult Interesse: restriction = = = = ...\r\n```\r\nI suspect there is something wrong with `bf16` to `fp16` conversion for this specific model and for `xl` model too. \r\n@stas00 do have you any intuition on why the int8 conversion (so the underlying fp16 conversion) worked well for bloom-176 and not here? 🙏 \r\nThanks! ", "Did the bf16 model weights have large values resulting in overflow when used under fp16? bf16 to fp16 conversion is a huge issue with almost every model we have seen - .e.g all large t5 and derivative models. Seeing that this is a T5 derivative it's almost certainly related, you can see the rich discussion here: https://github.com/huggingface/transformers/pull/10956 and possible workarounds to try.\r\n\r\nProbably talk to @TimDettmers and ask if perhaps there could be bf16/int8 variation for bf16 models in BNB?\r\n\r\n", "T5 doesn't work in FP16 because the softmaxes in the attention layers are not upcast to float32. 
@younesbelkada if you remember the fixes done in BLOOM/OPT I suspect similar ones would fix inference in FP16 for T5 :-)", "but why are we even going through FP16 to do quanitization of a bf16 model? why can't this be done directly in the original dtype?\r\n\r\nnote: interestingly deepspeed-inference also converts the model to fp16 to do quantization. ", "Thank you very much for all the pointers\r\n\r\n> T5 doesn't work in FP16 because the softmaxes in the attention layers are not upcast to float32. @younesbelkada if you remember the fixes done in BLOOM/OPT I suspect similar ones would fix inference in FP16 for T5 :-)\r\n\r\nI think that T5 [already upcasts the softmax to `fp32`](https://github.com/huggingface/transformers/blob/6c2be845dd384829610897b63e6dcbe911e9e811/src/transformers/models/t5/modeling_t5.py#L539). I suspected that the overflow might come from the addition to positional bias in the line before but did not helped. I also tried to upcast the lm_logits and the hidden_states before the lm_head to `fp32` but did not helped too. \r\n\r\nIn addition, I also printed the hidden states at every stage, checking whether it contains any `nan` or `inf`. This was always set to `False`. \r\n\r\nI will investigate more by reading in deep details @stas00 PR\r\n\r\nI think the most efficient solution is to try to see where the overflow comes, and force the operation to be done in `fp32` in this operation. ", "@stas00 using your tool `detect_overflow` here, and I am flagging an overflow starting from a certain layer (from layer `6` - because of `inf`). (btw I also tried the `autocast` solution but seems to not work for inference, I have seen on the issue that it might work for some situations for inference, but sadly not in this case :/ )\r\nThen the hidden states gets constantly overflowed and clamped at each layer. I guess that the accumulation of clamping at various stages introduces these errors. 
\r\n\r\nI am wondering if we can find a proper workaround for that, is clamping the right solution? My intuition is that the model gets confused at some point since clamping `n` times the hidden states will yield to completely different results than the `bf16` hidden states.\r\n\r\nAlso, let's say you have flagged the layer responsible of the overflow. You can then do the operation there in `fp32`. But the under/overflow will be still present since you need to cast the results back in `fp16` right? ", "There was one more scaling hack posted here if you want to try it. https://github.com/huggingface/transformers/issues/14189#issuecomment-961571628\r\n\r\nIn general bf16-pretrained models ought to run under bf16 or fp32 regimes, as fp16 and bf16 are very incompatible dtypes. It's not as bad if you were to go from fp16 to bf16 as you'd only lose precision, and it'd only impact quality, but not the other way around (overflow).\r\n\r\nSo we should raise this question with the BNB and perhaps deepspeed-inference developers, at least to have an understanding of why both require fp16 and won't support bf16.\r\n\r\n@TimDettmers, @RezaYazdaniAminabadi - is there a way for your libraries to work with bf16 dtype, so that the bf16-pretrained models won't overflow during inference? Thank you.\r\n\r\n", "Thanks everyone for the discussion and work! \r\n\r\nAre there any possible workarounds that I could implement as an end user?", "> There was one more scaling hack posted here if you want to try it. [#14189 (comment)](https://github.com/huggingface/transformers/issues/14189#issuecomment-961571628)\r\n> \r\n> In general bf16-pretrained models ought to run under bf16 or fp32 regimes, as fp16 and bf16 are very incompatible dtypes. 
It's not as bad if you were to go from fp16 to bf16 as you'd only lose precision, and it'd only impact quality, but not the other way around (overflow).\r\n> \r\n> So we should raise this question with the BNB and perhaps deepspeed-inference developers, at least to have an understanding of why both require fp16 and won't support bf16.\r\n> \r\n> @TimDettmers, @RezaYazdaniAminabadi - is there a way for your libraries to work with bf16 dtype, so that the bf16-pretrained models won't overflow during inference? Thank you.\r\n\r\nThe main reason the weights are converted to half on DeepSpeed-side is that the kernels are only working with fp16 values. However, we are looking into some of these limitations and will resolve them soon.\r\nThe other part is that we can quantize from the original bf16 checkpoint and resolve some of the overflow issue due to different data-precision of fp16 vs bf16.\r\n ", "Wonderful! \r\n\r\nSo it looks like @jimmy-marmalade can try out your solution (Deepspeed-Inference) once you have something working in bf16/int8, Reza, and hopefully this will unblock them.", "Thanks @RezaYazdaniAminabadi is there an issue I can watch to keep track of progress.", "@younesbelkada (cc @thomwolf who gave the inspiration for the workaround; cc @stas00 , @sgugger ):\r\n\r\n@navjotts and myself had a look at this and found a workaround.\r\n\r\nAs already concluded in ticket above, the bf16->fp16 conversion is generally incompatible. We ran the `detect_overflow` as well (great tool, thanks @stas00 !) and found generally we got overflows in the dense part of layer 7 in the encoder, specifically in the `wo` operation of `T5DenseGatedActDense`.\r\n\r\nWe implemented a hacky workaround to keep `wo` in fp32, cast its input to fp32 and then leave it in fp32 until after the `T5LayerNorm`. At the end of the norm we cast back to fp16. All fp16 linear modules (i.e. everything except the `wo`) can then use the 8-bit quantization. 
The cast back to fp16 is not lossless ofcourse, but we've generally found it to perform equivalent. We haven't spotted any difference in output so far.\r\n\r\nWe made 3 changes:\r\n\r\n1. `T5DenseGatedActDense.forward`: \r\n```py\r\n hidden_gelu = self.act(self.wi_0(hidden_states))\r\n hidden_linear = self.wi_1(hidden_states)\r\n hidden_states = hidden_gelu * hidden_linear\r\n hidden_states = self.dropout(hidden_states)\r\n hidden_states = self.wo(\r\n hidden_states.to(torch.float32)\r\n ) # PATCH: Cast to float32, as self.wo is casted to float32 w/ patch 3 below\r\n return hidden_states\r\n```\r\n\r\n2. `T5LayerNorm.forward`\r\n```py\r\n variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)\r\n hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)\r\n\r\n # convert into half-precision if necessary\r\n if self.weight.dtype in [torch.float16, torch.bfloat16]:\r\n hidden_states = hidden_states.to(self.weight.dtype)\r\n\r\n return (self.weight * hidden_states).to(\r\n torch.float16\r\n ) # PATCH: Cast back to float16 for compatibility w/ next layer. This is not lossless.\r\n```\r\n\r\n3. 
`_load_state_dict_into_meta_model`\r\n```py\r\n if param_name.endswith(\r\n '.wo.weight'\r\n ): # PATCH: For the wo weights of the dense layers, keep them in float32, others will get converted to float16 as this is a requirement for the LLM 8-bit quantization.\r\n param = param.to(torch.float32) # PATCH\r\n else: # PATCH\r\n # We convert floating dtypes to the `dtype` passed.We want to keep the buffers/params\r\n # in int/uint/bool and not cast them.\r\n if dtype is not None and torch.is_floating_point(param):\r\n param = param.to(dtype)\r\n # For compatibility with PyTorch which loads float16/bfloat16 weights in fp32\r\n if is_safetensors and dtype is None and torch.is_floating_point(param):\r\n param = param.to(torch.float32)\r\n```\r\n\r\nand then when instantiating the model from pretrained we set:\r\n\r\n```py\r\nload_in_8bit_skip_modules=['decoder', 'lm_head', 'wo']\r\n```\r\n\r\nI'm not sure what a good way would be to get this into `transformers` though / if that would even be a good idea given this is quite hacky, curious for your thoughts. For patch 3, if we could add an option to specify an exclude_list for the conversion to float16, that would remove the need for that patch. Then the layers can be adapted at model-level.", "Hi @larsmennen (and cc @thomwolf )\r\nThanks for this great investigation and amazing fix - I also believe that this approach is the best fix so far for this problem. Thanks so much for working on this as it will enable using these modeis in a more accessible way.\r\nI see 2 workaround for that \r\n\r\n1- The fix should be applied for 8bit models only, in this case, I think that we can perfectly have an additional flag `load_in_8bit_fp32_modules = [\"wo\"]` and apply a patch similar to your point `3.`. 
For the points `2` and `1` we can probably have a hot fix as you suggested but I would wait for @sgugger and/or @stas00 to hear what they think about that\r\n\r\n2- We add an additional flag, regardless if the model is loaded in 8bit or no, since this could fix the issue with T5-fp16 too, with a flag similar than above `keep_in_fp32_modules=[\"wo\"]` that is activated only for half precision models (and 8bit models too). But again we'll probably need the hotfixes from `1`&`2`. \r\n\r\nFew questions:\r\n\r\n- When using `load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo']` note that with your fix `decoder` and `lm_head` will be kept to their native `dtype`. In the case you are calling `load_in_8bit=True`, we first cast all the weights in fp16 therefore `load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo']` is probably not needed as `wo` is \"force-casted\" in fp32 and `lm_head` is always detected to be kept in its original precision. Can you double check that? 🙏 \r\n- Does the same fix applies for T5-XL ?\r\n\r\n", "1. and 2. are easy patches to integrate. I don't anticipate any difficulty to have merged as is. For 3 we need a solution that is not too ad-hoc in the code of modeling_utils. I like the proposed 2- in @younesbelkada comment, since I also believe this should also fix the T5 evaluation problem in FP16.\r\n\r\nThanks a lot for the clear explanations and the deep dive!", "what about users on bf16 hardware that don't want to waste memory casting to fp32 since the modeling code works just fine when using bf16 mixed precision?\r\n\r\nI think if this is done it should only be done for fp16 mixed precision.\r\n\r\n--------------\r\n\r\nAlso please note that we automatically use `apex`'s faster layernorm when it's found, so `T5LayerNorm.forward` workaround will apply only if it's not found. i.e. 
you may want to disable the swap-in of the faster version.", "> I think if this is done it should only be done for fp16 mixed precision.\r\n\r\nYes indeed that's a very good point! (cc @younesbelkada) ", "Also what about users pre-training their own model in fp16, the proposed change will negatively impact them as well, as the current modeling code should work just fine for them. \r\n\r\nIMHO, the safest approach would be to leave the current code alone and have a flag that activates workaround solutions for those who need them.\r\n\r\nAdditionally, I remind you that there were other workarounds proposed that don't use any additional memory and use a scaling factor instead that moves the weights into a safe-to-fp16 numerical range. https://github.com/huggingface/transformers/pull/10956#issuecomment-961030728", "> Also what about users pre-training their own model in fp16, the proposed change will negatively impact them as well, as the current modeling code should work just fine for them.\r\n\r\nThe main goal of having T5 in the library is to support the corresponding pretrained models as best as possible. All of T5, FlanT5 and T0 checkpoints have been trained in bfloat16, so changing the code to support fp16 inference is for the better for the larger community. 
If this slows down an edge case, the user can just adapt the line of code in the modeling file to suit their need (that's why the library is not modular and with a strict one file per model policy after all :-) ).", "One could argue that this breaks backward compatibility since suddenly the model requires more memory to operate than when it was originally released.\r\n\r\nIf the belief is that the majority X% of users will benefit from such BC breaking change I think it'd at least be nice to have a flag for the small Y% to be able to retain what they have been using w/o needing to manually change the code.\r\n\r\nMight be much simpler to clone this to `models/t5-bf162fp16` apply all the proposed patches and thus have 2 versions - one for normal use and one for the originally unintended bf162fp16 use.", "Thanks for the quick replies all!\r\n\r\n@younesbelkada to answer your questions:\r\n\r\n> When using load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo'] note that with your fix decoder and lm_head will be kept to their native dtype. In the case you are calling load_in_8bit=True, we first cast all the weights in fp16 therefore load_in_8bit_skip_modules=['decoder', 'lm_head', 'wo'] is probably not needed as wo is \"force-casted\" in fp32 and lm_head is always detected to be kept in its original precision. Can you double check that? 
🙏\r\n\r\nRegarding need for `wo`: if we don't pass it in, then it is not ignored from the conversion of the linear layers to 8bit, and an autocast is applied:\r\n```\r\n/root/venv/lib/python3.7/site-packages/bitsandbytes/autograd/_functions.py:231: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization\r\n warnings.warn(f\"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization\")\r\n```\r\n\r\nand then resulting model:\r\n```\r\n (1): T5LayerFF(\r\n (DenseReluDense): T5DenseGatedActDense(\r\n (wi_0): Linear8bitLt(in_features=4096, out_features=10240, bias=False)\r\n (wi_1): Linear8bitLt(in_features=4096, out_features=10240, bias=False)\r\n --> (wo): Linear8bitLt(in_features=10240, out_features=4096, bias=False) <--\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n (act): NewGELUActivation()\r\n )\r\n (layer_norm): T5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n```\r\nSo that one is required.\r\n\r\nFor `decoder` and `lm_head`, I included those because of this line:\r\n\r\nhttps://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/src/transformers/modeling_utils.py#L2319\r\n\r\nFor this model `get_keys_to_not_convert` returns `['decoder', 'lm_head']`. So I didn't want to change this behavior.\r\n\r\nNote that the `decoder` doesn't actually seem to do anything, because in `replace_8bit_linear`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/9e56aff58a742b48fc8edea8d28d5b80330efbcc/src/transformers/utils/bitsandbytes.py#L113-L126\r\n\r\nthis actually only checks the last part of the module name (e.g. `wo`), but `decoder` itself is not a linear layer. Not sure if this behavior is intended, or is this a separate bug that `replace_8bit_linear` should check the full module name?\r\n\r\n> Does the same fix applies for T5-XL ?\r\n\r\nYes. 
I ran the same test internally; can confirm fp32 quality == 8bit-with-fix quality != 8bit-without-fix quality for XL.\r\n\r\nThanks!", "hi @larsmennen \r\nThanks so much for your detailed answer, everything is clear on my side now. Regarding your point about `get_keys_not_convert` it is probably a bug, let's fix this in a separate PR later .\r\n#20683 is in a good shape IMO. Can you checkout from this branch, apply the patches mentioned in 1&2 and let us know if it works as expected? 🙏 ", "@younesbelkada moving back to this thread:\r\n\r\n> Would you mind opening a PR addressing your suggestions (patch 1 & 2 from the discussion at https://github.com/huggingface/transformers/issues/20287 )?\r\n\r\nYes, happy to. will get that in today or tomorrow." ]
1,668
1,671
1,671
NONE
null
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.10.133+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running the English to German example: ``` from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto") input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` produces expected output: ``` <pad> Wie alt sind Sie?</s> ``` Loading in 8-bit and running: ``` from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl") model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xxl", device_map="auto", load_in_8bit=True) input_text = "translate English to German: How old are you?" input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` results in output more nonsensical than I'd expect: ``` <pad> How old are</s> ``` ### Expected behavior I expected close or approximate output between the original output and the 8-bit output. 
This was the provided INT8 code snippet, so I expected the output to be sensible for the task.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20287/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/20286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20286/comments
https://api.github.com/repos/huggingface/transformers/issues/20286/events
https://github.com/huggingface/transformers/pull/20286
1,452,172,990
PR_kwDOCUB6oc5DDf1x
20,286
Fix no trainer summarization script test failure
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20286). All of your documentation changes will be reflected on that endpoint." ]
1,668
1,668
1,668
CONTRIBUTOR
null
# What does this PR do? This PR solves the test failure in the nightly CI for the summarization no trainer script. Also potentially https://github.com/huggingface/transformers/issues/18189 too, just waiting for verification first otherwise a followup PR will be made `gather_for_metrics` was being called twice instead of once, along with a syntax error, which is not good! :) Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20286/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/20286", "html_url": "https://github.com/huggingface/transformers/pull/20286", "diff_url": "https://github.com/huggingface/transformers/pull/20286.diff", "patch_url": "https://github.com/huggingface/transformers/pull/20286.patch", "merged_at": 1668632227000 }
https://api.github.com/repos/huggingface/transformers/issues/20285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20285/comments
https://api.github.com/repos/huggingface/transformers/issues/20285/events
https://github.com/huggingface/transformers/issues/20285
1,452,140,624
I_kwDOCUB6oc5WjehQ
20,285
Transformer cannot tokenize Chinese texts into correct words
{ "login": "fivehills", "id": 40301946, "node_id": "MDQ6VXNlcjQwMzAxOTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/40301946?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fivehills", "html_url": "https://github.com/fivehills", "followers_url": "https://api.github.com/users/fivehills/followers", "following_url": "https://api.github.com/users/fivehills/following{/other_user}", "gists_url": "https://api.github.com/users/fivehills/gists{/gist_id}", "starred_url": "https://api.github.com/users/fivehills/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fivehills/subscriptions", "organizations_url": "https://api.github.com/users/fivehills/orgs", "repos_url": "https://api.github.com/users/fivehills/repos", "events_url": "https://api.github.com/users/fivehills/events{/privacy}", "received_events_url": "https://api.github.com/users/fivehills/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "> cc @ArthurZucker\r\n\r\nMany thanks!", "Hey!\r\nFor starters, would be great if you could have provided the example with `transformers` only code because we are not really aware of what might be going on inside `minicons`. Does it use the `fast` or `slow` tokenizer? What arguments are modified behind it etc.. \r\n\r\nMy question here would be \"What is the expected behavior of the `bert-base-chinese` model. The problem might just come from the tokenizer that is used, it might not correspond to your needs. In this case, it is expected that the characters are tokenized one by one. \r\nHowever if you use `tokenizer = AutoTokenizer.from_pretrained('bert-base-chinese', use_fast = False, tokenize_chinese_chars =False)`, you should get the expected results. \r\n\r\n```python \r\n>>> tokenizer.batch_decode(tokenizer(sentences).input_ids)\r\n['[CLS] 他边 都 是 小 小葱 包括 重庆 那边 [SEP]']\r\n```\r\nYou should either create a copy model with that parameter set or see with `minicons`", "Many thanks, Arthur! @ArthurZucker \r\n\r\nEven after changing the parameters in tokenizer = AutoTokenizer.from_pretrained('bert-base-chinese', use_fast = False, tokenize_chinese_chars =False), the result is not desirable. The word segments are not correct in the output.\r\n\r\n```python\r\n>>>import scorer\r\n>>> model = scorer.MaskedLMScorer('bert-base-chinese', 'cpu')\r\n>>> model.token_score(sentences, surprisal = True, base_two = True)\r\n[[('他', 8.213873863220215), ('##边', 22.977821350097656), ('##都', 22.392602920532227), ('##是', 21.245899200439453), ('##小', 21.8975830078125), ('##小', 21.818450927734375), ('##葱', 21.52490997314453), ('##包', 22.13797950744629), ('##括', 22.856788635253906), ('##重', 22.35895347595215), ('##庆', 21.193708419799805), ('##那', 21.345863342285156), ('##边', 27.09543800354004)]]\r\n```\r\nThe following is the code for \"scorer.py\". 
I am not sure what caused the problem on word tokenizations.\r\n\r\nThanks again!\r\n\r\n```python\r\nfrom logging import log\r\nfrom typing import Iterable, Union, List, Dict, Optional, Callable, Tuple, Any\r\n\r\nimport torch\r\nfrom transformers import (\r\n AutoModelForCausalLM, AutoModelForMaskedLM,\r\n AutoModelForSeq2SeqLM,\r\n AutoTokenizer\r\n)\r\nfrom transformers.utils.logging import set_verbosity_error\r\n\r\nfrom collections import defaultdict\r\n\r\nfrom itertools import chain\r\nfrom re import sub\r\n\r\nimport warnings\r\n\r\nset_verbosity_error()\r\n\r\nclass LMScorer:\r\n \"\"\"\r\n Base LM scorer class intended to store models and tokenizers along\r\n with methods to facilitate the analysis of language model output scores.\r\n \"\"\"\r\n def __init__(self, model_name: str, device: Optional[str] = 'cpu') -> None:\r\n \"\"\"\r\n :param model_name: name of the model, should either be a path\r\n to a model (.pt or .bin file) stored locally, or a\r\n pretrained model stored on the Huggingface Model Hub.\r\n :type model_name: str\r\n :param device: device type that the model should be loaded on,\r\n options: `cpu or cuda:{0, 1, ...}`\r\n :type device: str, optional\r\n \"\"\"\r\n self.tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast = True)\r\n self.device = device\r\n self.vocab = defaultdict(list)\r\n # {self.vocab[x.strip()].append(i) for x, i in [(self.tokenizer.decode([i]), i) for i in range(self.tokenizer.vocab_size)]}\r\n for i in range(self.tokenizer.vocab_size):\r\n decoded = [(self.tokenizer.decode(i), i)]\r\n for x, j in decoded:\r\n self.vocab[x.strip()].append(j)\r\n\r\n def add_special_tokens(self, text: Union[str, List[str]]) -> Union[str, List[str]]:\r\n raise NotImplementedError\r\n \r\n def distribution(self, batch: Iterable) -> torch.Tensor:\r\n raise NotImplementedError\r\n \r\n def topk(self, distribution: torch.Tensor, k: int = 1) -> Tuple:\r\n top_k = distribution.topk(k)\r\n \r\n probs = 
top_k.values.squeeze(1).exp().tolist()\r\n if k == 1:\r\n tokens = self.decode(top_k.indices.squeeze(1))\r\n else:\r\n tokens = [self.decode(x) for x in top_k.indices.squeeze(1)]\r\n \r\n return tokens, probs\r\n\r\n def query(self, distribution: torch.Tensor, queries: List[str]) -> Tuple:\r\n # this will be self.vocab tho\r\n query_ids = [self.vocab[a] for a in queries]\r\n maxlen = max(map(len, query_ids))\r\n query_ids = [q + [self.tokenizer.pad_token_id] * (maxlen - len(q)) if len(q) < maxlen else q for q in query_ids]\r\n current_batch_size = distribution.shape[0]\r\n probs = distribution[torch.arange(current_batch_size)[:, None], query_ids].max(1).values.exp().tolist()\r\n \r\n inv_ranks = distribution.argsort().argsort() + 1\r\n ranks = distribution.shape[1] - inv_ranks + 1\r\n token_ranks = ranks[torch.arange(current_batch_size)[:, None], query_ids].min(1).values.tolist()\r\n \r\n return probs, token_ranks\r\n\r\n def logprobs(self, batch: Iterable, rank: bool = False) -> Union[float, List[float]]:\r\n warnings.warn(\r\n \"logprobs is deprecated, use compute_stats instead\",\r\n DeprecationWarning\r\n )\r\n raise NotImplementedError\r\n\r\n def compute_stats(self, batch: Iterable, rank: bool = False) -> Union[Union[float, int], List[Union[float, int]]]:\r\n raise NotImplementedError\r\n\r\n def prepare_text(self, text: Union[str, List[str]]) -> Union[str, List[str]]:\r\n raise NotImplementedError\r\n\r\n def prime_text(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]]) -> Tuple:\r\n raise NotImplementedError\r\n\r\n def token_score(self, batch: Union[str, List[str]], surprisal: bool = False, prob: bool = False, base_two: bool = False, rank: bool = False) -> Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]:\r\n '''\r\n For every input sentence, returns a list of tuples in the following format:\r\n `(token, score)`,\r\n\r\n where score represents the log-probability (by default) of the token given context. 
Can also return ranks along with scores.\r\n\r\n :param ``Union[str, List[str]]`` batch: a single sentence or a batch of sentences.\r\n :param ``bool`` surprisal: If `True`, returns per-word surprisals instead of log-probabilities.\r\n :param ``bool`` prob: If `True`, returns per-word probabilities instead of log-probabilities.\r\n :param ``bool`` base_two: If `True`, uses log base 2 instead of natural-log (returns bits of values in case of surprisals)\r\n :param ``bool`` rank: If `True`, also returns the rank of each word in context (based on the log-probability value)\r\n\r\n :return: A `List` containing a `Tuple` consisting of the word, its associated score, and optionally, its rank.\r\n :rtype: ``Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]``\r\n '''\r\n raise NotImplementedError\r\n \r\n def score(self, batch: Union[str, List[str]], pool: Callable = torch.mean, *args) -> Union[float, List[float]]:\r\n '''\r\n DEPRECATED as of v 0.1.18. Check out ``sequence_score`` or ``token_score`` instead!\r\n\r\n Pooled estimates of sentence log probabilities, computed by the\r\n language model. 
Pooling is usually done using a function that\r\n is passed to the method.\r\n\r\n :param batch: a list of sentences that will be passed to the\r\n language model to score.\r\n :type batch: Union[str, List[str]]\r\n :param pool: Pooling function, is selected to be\r\n `torch.mean()` by default.\r\n :type pool: Callable\r\n\r\n :return: Float or list of floats specifying the log\r\n probabilities of the input sentence(s).\r\n :rtype: Union[float, List[float]]\r\n '''\r\n warnings.warn(\r\n \"score is deprecated, use sequence_score or token_score instead\",\r\n DeprecationWarning\r\n )\r\n result = self.logprobs(self.prepare_text(batch))\r\n logprob, _ = list(zip(*result))\r\n pooled = list(map(lambda x: pool(x, *args).tolist(), logprob))\r\n \r\n return pooled\r\n \r\n def adapt_score(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]], pool: Callable = torch.mean, *args) -> None:\r\n \"\"\"\r\n DEPRECATED as of v 0.1.18. Check out ``partial_score`` instead!\r\n \"\"\"\r\n warnings.warn(\r\n \"adapt_score is deprecated, use partial_score or token_score instead\",\r\n DeprecationWarning\r\n )\r\n\r\n def partial_score(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]], reduction: Callable = lambda x: x.mean(0).item(), **kwargs) -> List[float]:\r\n '''\r\n Pooled estimates of sequence log probabilities (or some modification of it), given a preamble. Pooling is usually done using a function that is passed to the method.\r\n\r\n :param preamble: a batch of preambles or primes passed to the\r\n language model. 
This is what the sequence is conditioned on, and the model ignores the word probabilities of this part of the input in estimating the overall score.\r\n :type preamble: ``Union[str, List[str]]``\r\n :param stimuli: a batch of sequences (same length as preamble)\r\n that form the main input consisting of the sequence whose\r\n score you want to calculate.\r\n :type stimuli: ``Union[str, List[str]]``\r\n :param reduction: Reduction function, is selected to be\r\n ``lambda x: x.mean(0).item()`` by default, which stands for the avg. log-probability per token for each sequence in the batch.\r\n :type reduction: Callable\r\n :param kwargs: parameters for the ``compute_stats`` call --\r\n\r\n * `prob` (`bool`): Whether the returned value should be a probability (note that the default reduction method will have to be changed to `lambda x: x.prod(0).item()` to get a meaningful return value)\r\n\r\n * `base_two` (`bool`): whether the returned value should be in base 2 (only works when `prob = False`)\r\n\r\n * `surprisal` (`bool`): whether the returned value should be a surprisal (does not work when `prob = True`)\r\n\r\n\r\n :return: List of floats specifying the desired score for the stimuli part of the input, e.g., P(stimuli | preamble).\r\n :rtype: ``List[float]``\r\n '''\r\n result = self.compute_stats(self.prime_text(preamble, stimuli), **kwargs, return_tensors = True)\r\n logprob = result\r\n reduced = list(map(reduction, logprob))\r\n \r\n return reduced\r\n\r\n def encode(self, text: Union[str, List[str]], manual_special: bool = True, return_tensors: Optional[str] = 'pt') -> Dict:\r\n \"\"\"\r\n Encode a batch of sentences using the model's tokenizer.\r\n Equivalent of calling `model.tokenizer(input)`\r\n\r\n :param ``Union[str, List[str]]`` text: Input batch/sentence to\r\n be encoded.\r\n :param manual_special: Specification of whether special tokens\r\n will be manually encoded.\r\n :type manual_special: bool\r\n :param return_tensors: returned tensor format. 
Default `'pt'`\r\n :type manual_special: str\r\n\r\n :return: Encoded batch \r\n :rtype: ``Dict``\r\n \"\"\"\r\n sentences = [text] if isinstance(text, str) else text\r\n\r\n if manual_special:\r\n # manually add special tokens\r\n sentences = self.add_special_tokens(sentences)\r\n if return_tensors:\r\n tokens = self.tokenizer.batch_encode_plus(sentences, add_special_tokens = False, padding = 'longest', return_attention_mask = True, return_tensors = return_tensors)\r\n else:\r\n # mostly for masked LMs\r\n tokens = self.tokenizer.batch_encode_plus(sentences, padding = 'longest', return_attention_mask = True)\r\n\r\n return tokens\r\n \r\n def decode(self, idx: List[int]):\r\n \"\"\"\r\n Decode input ids using the model's tokenizer.\r\n\r\n :param ``List[int]`` idx: List of ids.\r\n\r\n :return: Decoded strings\r\n :rtype: List[str]\r\n \"\"\"\r\n return [self.tokenizer.decode([x]).strip() for x in self.tokenizer.convert_tokens_to_ids(self.tokenizer.convert_ids_to_tokens(idx))]\r\n\r\nclass MaskedLMScorer(LMScorer):\r\n \"\"\"\r\n Class for Masked Langauge Models such as BERT, RoBERTa, etc.\r\n\r\n :param model_name: name of the model, should either be a path\r\n to a model (.pt or .bin file) stored locally, or a\r\n pretrained model stored on the Huggingface Model Hub.\r\n :type model_name: str\r\n :param device: device type that the model should be loaded on,\r\n options: `cpu or cuda:{0, 1, ...}`\r\n :type device: str, optional\r\n \"\"\"\r\n def __init__(self, model_name: str, device: Optional[str] = 'cpu') -> None:\r\n \"\"\"\r\n :param model_name: name of the model, should either be a path\r\n to a model (.pt or .bin file) stored locally, or a\r\n pretrained model stored on the Huggingface Model Hub.\r\n\r\n :type model_name: str\r\n :param device: device type that the model should be loaded on,\r\n options: `cpu or cuda:{0, 1, ...}`\r\n :type device: str, optional\r\n \"\"\"\r\n super(MaskedLMScorer, self).__init__(model_name, device)\r\n \r\n self.model = 
AutoModelForMaskedLM.from_pretrained(model_name, return_dict = True)\r\n self.model.to(self.device)\r\n self.model.eval()\r\n \r\n # define CLS and SEP tokens\r\n self.bos_token_id = self.tokenizer.cls_token_id\r\n self.eos_token_id = self.tokenizer.sep_token_id\r\n self.cls_token_id = self.tokenizer.cls_token_id\r\n self.sep_token_id = self.tokenizer.sep_token_id\r\n self.mask_token_id = self.tokenizer.mask_token_id\r\n self.pad_token_id = self.tokenizer.pad_token_id\r\n \r\n def add_special_tokens(self, text: Union[str, List[str]]) -> List[str]:\r\n \"\"\"\r\n Reformats input text to add special model-dependent tokens.\r\n\r\n :param text: single string or batch of strings to be\r\n modified.\r\n :type text: ``Union[str, List[str]]``\r\n\r\n :return: Modified input, containing special tokens as per \r\n tokenizer specification\r\n :rtype: ``List[str]``\r\n \"\"\"\r\n sentences = [text] if isinstance(text, str) else text\r\n sentences = [self.tokenizer.cls_token + \" \" + sentence + \" \" + self.tokenizer.sep_token for sentence in sentences]\r\n\r\n return sentences\r\n\r\n def mask(self, sentence_words: Union[Tuple[str, str], List[Tuple[str, str]]]) -> Tuple[str, str, int]:\r\n \"\"\"\r\n Processes a list of (sentence, word) into input that has the\r\n word masked out of the sentence. 
\r\n \r\n Note: only works for masked LMs.\r\n\r\n :param ``Union[Tuple[str], List[Tuple[str]]]`` sentence_words:\r\n Input consisting of `[(sentence, word)]`, where sentence\r\n is an input sentence, and word is a word present in the\r\n sentence that will be masked out.\r\n\r\n :return: Tuple `(sentence, word, length)`\r\n \"\"\"\r\n sentence_words = [sentence_words] if isinstance(sentence_words[0], str) else sentence_words\r\n sentences, words = list(zip(*sentence_words))\r\n words = list(words)\r\n length = len(words)\r\n\r\n sentences = [sub(rf'(?<![\\w\\/-])({word})(?=[^\\w\\/-])', self.tokenizer.mask_token, sentence) for sentence, word in sentence_words]\r\n\r\n return (sentences, words, length)\r\n\r\n def cloze(self, sentence_words: Union[Tuple[str, str], List[Tuple[str, str]]]) -> torch.Tensor:\r\n \"\"\"\r\n Runs inference on masked input. \r\n Note: only works for masked LMs.\r\n\r\n :param ``Union[Tuple[str], List[Tuple[str]]]`` sentence_words:\r\n Input consisting of `[(sentence, word)]`, where sentence\r\n is an input sentence, and word is a word present in the\r\n sentence that will be masked out and inferred.\r\n \r\n :return: A tensor with log probabilities for the desired word\r\n in context\r\n \"\"\"\r\n sentences, words, length = self.mask(sentence_words)\r\n\r\n encoded = self.tokenizer(sentences, return_tensors='pt')\r\n encoded = encoded.to(self.device)\r\n\r\n idx = torch.nonzero(encoded['input_ids'] == self.tokenizer.mask_token_id, as_tuple=False)[:,1].unsqueeze(1)\r\n word_idx = self.tokenizer(words, add_special_tokens=False)['input_ids']\r\n with torch.no_grad():\r\n masked_logits = self.model(**encoded).logits[torch.arange(length)[:, None], idx].squeeze().detach()\r\n if len(sentences) > 1:\r\n logprobs = masked_logits - masked_logits.logsumexp(1).unsqueeze(1)\r\n masked_logprobs = logprobs[torch.arange(len(sentences))[:, None], word_idx].exp().squeeze()\r\n else:\r\n logprobs = masked_logits - masked_logits.logsumexp(0)\r\n 
masked_logprobs = logprobs[word_idx].exp().squeeze()\r\n\r\n return masked_logprobs\r\n\r\n\r\n def prepare_text(self, text: Union[str, List[str]]) -> Iterable[Any]:\r\n \"\"\"\r\n Prepares a batch of input text into a format fit to run MLM\r\n scoring on. \r\n\r\n Borrows preprocessing algorithm from Salazar et al. (2020), and\r\n modifies code from the following github repository by simonpri:\r\n https://github.com/simonepri/lm-scorer\r\n \r\n :param text: batch of sentences to be prepared for scoring.\r\n\r\n :return: Batch of formatted input that can be passed to `logprob`\r\n \"\"\"\r\n # converts input text to batch of tensors with every position except the cls and sep token masked\r\n sentences = [text] if isinstance(text, str) else text\r\n \r\n # idea is to tokenize and then create batches of tokenized instances,\r\n # but with each token in the sequence replaced by the mask token. \r\n \r\n encoded = self.encode(sentences, manual_special = False)\r\n\r\n token_idx = encoded['input_ids']\r\n attention_masks = encoded['attention_mask']\r\n\r\n masked_tensors = [] # token ids, attention masks, lengths\r\n\r\n for token_ids, attention_mask in zip(token_idx, attention_masks):\r\n token_ids = torch.tensor(token_ids)\r\n # final_lengths = len(token_ids) - 2\r\n attention_mask = torch.tensor(attention_mask)\r\n \r\n token_ids_masked_list = []\r\n attention_masked_list = []\r\n\r\n effective_token_ids = [token for token in token_ids if token != self.pad_token_id and token != self.cls_token_id and token != self.sep_token_id]\r\n effective_length = len(effective_token_ids)\r\n \r\n\r\n mask_indices = []\r\n mask_indices = [[mask_pos] for mask_pos in range(effective_length+2)]\r\n\r\n # We don't mask the [CLS], [SEP] for now for PLL\r\n mask_indices = mask_indices[1:-1]\r\n\r\n mask_token_id = self.mask_token_id\r\n for mask_set in mask_indices:\r\n token_ids_masked = token_ids.clone()\r\n token_ids_masked[mask_set] = mask_token_id\r\n attention_masked = 
attention_mask.clone()\r\n \r\n attention_masked_list.append(attention_masked)\r\n token_ids_masked_list.append(token_ids_masked)\r\n masked_tensors.append((torch.stack(token_ids_masked_list), torch.stack(attention_masked_list), effective_token_ids, len(mask_indices), 1))\r\n \r\n return masked_tensors\r\n\r\n def prime_text(self, preamble: Union[str, List[str]] , stimuli: Union[str, List[str]]) -> Iterable[Any]:\r\n \"\"\"\r\n Prepares a batch of input text into a format fit to run LM\r\n scoring on. \r\n\r\n Borrows preprocessing algorithm from Salazar et al. (2020), and\r\n modifies code from the following github repository by simonpri:\r\n https://github.com/simonepri/lm-scorer\r\n\r\n :param ``Union[str, List[str]]`` preamble: Batch of prefixes/prime/preambles on which the LM is conditioned.\r\n :param ``Union[str, List[str]]`` stimuli: Batch of continuations that are scored based on the conditioned text (provided in the ``preamble``). The positions of the elements match their counterparts in the ``preamble``.\r\n\r\n :return: Batch of formatted input that can be passed to\r\n ``compute_stats``\r\n \"\"\"\r\n preamble_text = [preamble] if isinstance(preamble, str) else preamble\r\n preamble_encoded = self.encode(preamble_text, False)['input_ids']\r\n preamble_lens = []\r\n for preamble_tokens in preamble_encoded:\r\n preamble_lens.append(len([token for token in preamble_tokens if token != self.pad_token_id and token != self.sep_token_id]))\r\n \r\n sentences = [preamble + \" \" + stimuli] if isinstance(preamble, str) else [p + \" \" + s for p, s in list(zip(preamble, stimuli))]\r\n \r\n # idea is to tokenize and then create batches of tokenized instances,\r\n # but with each token in the sequence replaced by the mask token. 
\r\n\r\n encoded = self.encode(sentences, manual_special = False)\r\n\r\n token_idx = encoded['input_ids']\r\n attention_masks = encoded['attention_mask']\r\n\r\n masked_tensors = [] # token ids, attention masks, lengths\r\n\r\n for i, (token_ids, attention_mask) in enumerate(zip(token_idx, attention_masks)):\r\n token_ids = torch.tensor(token_ids)\r\n # final_lengths = len(token_ids) - 2\r\n attention_mask = torch.tensor(attention_mask)\r\n\r\n token_ids_masked_list = []\r\n attention_masked_list = []\r\n \r\n effective_token_ids = [token for j, token in enumerate(token_ids) if token != self.pad_token_id and token != self.cls_token_id and token != self.sep_token_id and j >= preamble_lens[i]]\r\n effective_length = len(effective_token_ids) + preamble_lens[i]\r\n\r\n\r\n mask_indices = []\r\n mask_indices = [[mask_pos] for mask_pos in range(preamble_lens[i], effective_length+1)]\r\n\r\n # We don't mask the [CLS], [SEP] for now for PLL\r\n mask_indices = mask_indices[:-1]\r\n\r\n mask_token_id = self.mask_token_id\r\n for mask_set in mask_indices:\r\n token_ids_masked = token_ids.clone()\r\n token_ids_masked[mask_set] = mask_token_id\r\n attention_masked = attention_mask.clone()\r\n\r\n attention_masked_list.append(attention_masked)\r\n token_ids_masked_list.append(token_ids_masked)\r\n masked_tensors.append((torch.stack(token_ids_masked_list), torch.stack(attention_masked_list), effective_token_ids, len(mask_indices), preamble_lens[i]))\r\n\r\n return masked_tensors\r\n\r\n def distribution(self, batch: Iterable) -> torch.Tensor:\r\n \"\"\"\r\n Returns a distribution over the vocabulary of the model.\r\n\r\n :param `Iterable` batch: A batch of inputs fit to pass to a\r\n transformer LM.\r\n\r\n :return: Tensor consisting of log probabilies over vocab items.\r\n \"\"\"\r\n # takes in prepared text and returns scores for each sentence in batch\r\n token_ids, attention_masks, effective_token_ids, lengths, offsets = list(zip(*batch))\r\n token_ids = 
torch.cat(token_ids)\r\n attention_masks = torch.cat(attention_masks)\r\n token_ids = token_ids.to(self.device)\r\n attention_masks = attention_masks.to(self.device)\r\n effective_token_ids = torch.cat([torch.tensor(x) for x in effective_token_ids])\r\n\r\n indices = list(chain.from_iterable([list(range(o,o+n)) for n, o in zip(lengths, offsets)]))\r\n with torch.no_grad():\r\n output = self.model(token_ids, attention_mask = attention_masks)\r\n logits = output.logits[torch.arange(sum(lengths)), indices].detach()\r\n\r\n logprob_distribution = logits - logits.logsumexp(1).unsqueeze(1)\r\n\r\n return logprob_distribution\r\n\r\n def cloze_distribution(self, queries: Iterable) -> torch.Tensor:\r\n \r\n '''\r\n Accepts as input batch of [(s_i, bw_i)] where s_i is a prompt with an\r\n abstract token (bw_i) representing a blank word and returns a distribution\r\n over the vocabulary of the model.\r\n\r\n :param `Iterable` queries: A batch of [(s_i, bw_i)] where s_i is a prompt with an abstract token (bw_i) representing a blank word\r\n\r\n :return: Tensor contisting of log probabilities over vocab items.\r\n '''\r\n \r\n queries = [queries] if isinstance(queries[0], str) else queries\r\n prompts, words = list(zip(*queries))\r\n \r\n modified_prompts = self.add_special_tokens(prompts)\r\n splits = [prompt.split(word) for prompt, word in zip(modified_prompts, words)]\r\n splits = [[x.strip() for x in s] for s in splits]\r\n pre, post = list(zip(*splits))\r\n pre_idx = self.tokenizer(list(pre), add_special_tokens = False, padding=False)['input_ids']\r\n mask_idx = [len(item) for item in pre_idx]\r\n masked = [m.replace(w, self.tokenizer.mask_token) for m, w in zip(modified_prompts, words)]\r\n \r\n with torch.no_grad():\r\n encoded = self.tokenizer(masked, add_special_tokens = False, return_tensors='pt', padding = True)\r\n encoded = encoded.to(self.device)\r\n logits = self.model(**encoded)\r\n presoftmax = logits.logits[torch.arange(len(queries)), mask_idx]\r\n if 'cuda' 
in self.device:\r\n presoftmax = presoftmax.detach().cpu()\r\n else:\r\n presoftmax = presoftmax.detach()\r\n \r\n logprobs = presoftmax - presoftmax.logsumexp(1).unsqueeze(1)\r\n \r\n return logprobs \r\n\r\n def logprobs(self, batch: Iterable, rank = False) -> Union[List[Tuple[torch.Tensor, str]], List[Tuple[torch.Tensor, str, int]]]:\r\n \"\"\"\r\n Returns log probabilities\r\n\r\n :param `Iterable` batch: A batch of inputs fit to pass to a\r\n transformer LM.\r\n :param rank: Specifies whether to also return ranks of words.\r\n :type rank: bool\r\n\r\n :return: List of MLM score metrics and tokens.\r\n :rtype: Union[List[Tuple[torch.Tensor, str]], List[Tuple[torch.Tensor, str, int]]]\r\n \"\"\"\r\n warnings.warn(\r\n \"logprobs is deprecated, use compute_stats instead\",\r\n DeprecationWarning\r\n )\r\n token_ids, attention_masks, effective_token_ids, lengths, offsets = list(zip(*batch))\r\n token_ids = torch.cat(token_ids)\r\n attention_masks = torch.cat(attention_masks)\r\n token_ids = token_ids.to(self.device)\r\n attention_masks = attention_masks.to(self.device)\r\n effective_token_ids = torch.cat([torch.tensor(x) for x in effective_token_ids])\r\n\r\n sent_tokens = list(map(lambda x: self.tokenizer.convert_ids_to_tokens(x.tolist()), effective_token_ids.split(lengths)))\r\n \r\n indices = list(chain.from_iterable([list(range(o,o+n)) for n, o in zip(lengths, offsets)]))\r\n with torch.no_grad():\r\n output = self.model(token_ids, attention_mask = attention_masks)\r\n logits = output.logits[torch.arange(sum(lengths)), indices]\r\n if self.device == 'cuda:0' or self.device == \"cuda:1\":\r\n logits.detach()\r\n \r\n sent_log_probs = logits - logits.logsumexp(1).unsqueeze(1)\r\n if rank:\r\n shape = sent_log_probs.shape\r\n # inv_ranks = (sent_log_probs).argsort().argsort() + 1\r\n # ranks = shape[1] - inv_ranks + 1\r\n ranks = (-1.0 * sent_log_probs).argsort().argsort() + 1\r\n word_ranks = ranks[torch.arange(shape[0]), effective_token_ids].split(lengths)\r\n 
sent_log_probs = sent_log_probs[torch.arange(sum(lengths)), effective_token_ids].type(torch.DoubleTensor).split(lengths)\r\n # print(sent_log_probs)\r\n # sentence_scores = list(map(lambda x: x.sum().tolist(), logprobs))\r\n # outputs.append((logprobs, sent_tokens))\r\n if rank:\r\n return list(zip(sent_log_probs, sent_tokens, word_ranks))\r\n \r\n return list(zip(sent_log_probs, sent_tokens))\r\n\r\n def compute_stats(self, batch: Iterable, rank: bool = False, prob = False, base_two: bool = False, return_tensors: bool = False) -> Union[Tuple[List[float], List[float]], List[float]]:\r\n '''\r\n Primary computational method that processes a batch of prepared sentences and returns per-token scores for each sentence. By default, returns log-probabilities.\r\n\r\n :param ``Iterable`` batch: batched input as processed by ``prepare_text`` or ``prime_text``.\r\n :param ``bool`` rank: whether the model should also return ranks per word (based on the conditional log-probability of the word in context).\r\n :param ``bool`` prob: whether the model should return probabilities instead of log-probabilities. Can only be `True` when `base_two` is `False`.\r\n :param ``bool`` base_two: whether the base of the log should be 2 (usually preferred when reporting results in bits). Can only be `True` when `prob` is `False`.\r\n :param ``bool`` return_tensors: whether the model should return scores as a list of tensors instead of a list of lists. 
This is important in some other convenient methods used in the package.\r\n\r\n :return: Either a tuple of lists, each containing probabilities and ranks per token in each sentence passed in the input.\r\n :rtype: ``Union[Tuple[List[float], List[float]], List[float]]``\r\n '''\r\n assert not (base_two and prob), \"cannot both use base (which is for a log), and a probability measure at the same time!\"\r\n\r\n token_ids, attention_masks, effective_token_ids, lengths, offsets = list(zip(*batch))\r\n token_ids = torch.cat(token_ids)\r\n attention_masks = torch.cat(attention_masks)\r\n token_ids = token_ids.to(self.device)\r\n attention_masks = attention_masks.to(self.device)\r\n effective_token_ids = torch.cat([torch.tensor(x) for x in effective_token_ids])\r\n \r\n indices = list(chain.from_iterable([list(range(o,o+n)) for n, o in zip(lengths, offsets)]))\r\n\r\n with torch.no_grad():\r\n output = self.model(token_ids, attention_mask = attention_masks)\r\n logits = output.logits.detach()[torch.arange(sum(lengths)), indices]\r\n\r\n logprob_distribution = logits - logits.logsumexp(1).unsqueeze(1)\r\n\r\n if base_two:\r\n logprob_distribution = logprob_distribution/torch.tensor(2).log()\r\n\r\n if prob:\r\n logprob_distribution = logprob_distribution.exp()\r\n\r\n if rank:\r\n shape = logprob_distribution.shape\r\n '''\r\n Double argsort trick:\r\n first argsort returns idxes of values that would return a sorted tensor,\r\n second argsort returns ranks (0 indexed)\r\n\r\n Proof: https://www.berkayantmen.com/rank.html\r\n\r\n TODO: Try to implement ranking in linear time but across arbitrary dimensions:\r\n https://stackoverflow.com/a/5284703\r\n '''\r\n word_ranks = (-1.0 * logprob_distribution).argsort().argsort() + 1\r\n word_ranks = word_ranks[torch.arange(shape[0]), effective_token_ids].split(lengths)\r\n word_ranks = [wr.tolist() for wr in word_ranks]\r\n\r\n scores = logprob_distribution[torch.arange(sum(lengths)), 
effective_token_ids].type(torch.DoubleTensor).split(lengths)\r\n scores = [s for s in scores]\r\n\r\n if not return_tensors:\r\n scores = [s.tolist() for s in scores]\r\n\r\n if rank:\r\n return scores, word_ranks\r\n else:\r\n return scores\r\n\r\n def sequence_score(self, batch, reduction = lambda x: x.mean(0).item(), base_two = False):\r\n '''\r\n TODO: reduction should be a string, if it's a function, specify what kind of function. --> how to ensure it is always that type?\r\n '''\r\n tokenized = self.prepare_text(batch)\r\n scores = self.compute_stats(tokenized, rank = False, base_two = base_two, return_tensors = True)\r\n reduced = list(map(reduction, scores))\r\n return reduced\r\n\r\n def token_score(self, batch: Union[str, List[str]], surprisal: bool = False, prob: bool = False, base_two: bool = False, rank: bool = False) -> Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]:\r\n '''\r\n For every input sentence, returns a list of tuples in the following format:\r\n `(token, score)`,\r\n\r\n where score represents the log-probability (by default) of the token given context. 
Can also return ranks along with scores.\r\n\r\n :param ``Union[str, List[str]]`` batch: a single sentence or a batch of sentences.\r\n :param ``bool`` surprisal: If `True`, returns per-word surprisals instead of log-probabilities.\r\n :param ``bool`` prob: If `True`, returns per-word probabilities instead of log-probabilities.\r\n :param ``bool`` base_two: If `True`, uses log base 2 instead of natural-log (returns bits of values in case of surprisals)\r\n :param ``bool`` rank: If `True`, also returns the rank of each word in context (based on the log-probability value)\r\n\r\n :return: A `List` containing a `Tuple` consisting of the word, its associated score, and optionally, its rank.\r\n :rtype: ``Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]``\r\n '''\r\n assert not (surprisal and prob), \"cannot both evaluate probability and surprisal at the same time!\"\r\n assert not (base_two and prob), \"cannot both use base (which is for a log), and a probability measure at the same time!\"\r\n\r\n tokenized = self.prepare_text(batch)\r\n if rank:\r\n scores, ranks = self.compute_stats(tokenized, rank = rank, prob = prob, base_two = base_two, return_tensors=True)\r\n else:\r\n scores = self.compute_stats(tokenized, prob = prob, base_two = base_two, return_tensors=True)\r\n\r\n if surprisal:\r\n scores = [-1.0 * s for s in scores]\r\n\r\n scores = [s.tolist() for s in scores]\r\n\r\n indices = [[i.item() for i in indexed if i.item() != self.tokenizer.pad_token_id] for indexed in list(zip(*tokenized))[2]]\r\n tokens = [self.decode(idx) for idx in indices]\r\n\r\n if rank:\r\n assert len(tokens) == len(scores) == len(ranks)\r\n else:\r\n assert len(tokens) == len(scores)\r\n\r\n res = []\r\n if rank:\r\n for t, s, r in zip(tokens, scores, ranks):\r\n res.append(list(zip(t, s, r)))\r\n # return [list(zip(t, s, r)) for t, s, r in zip(tokens, scores, ranks)]\r\n else:\r\n for t, s in zip(tokens, scores):\r\n res.append(list(zip(t, s)))\r\n\r\n return 
res\r\n\r\nclass IncrementalLMScorer(LMScorer):\r\n \"\"\"\r\n Class for Autoregressive or Incremental (or left-to-right) language models such as GPT2, etc.\r\n\r\n :param model_name: name of the model, should either be a path\r\n to a model (.pt or .bin file) stored locally, or a\r\n pretrained model stored on the Huggingface Model Hub.\r\n :type model_name: str\r\n :param device: device type that the model should be loaded on,\r\n options: `cpu or cuda:{0, 1, ...}`\r\n :type device: str, optional\r\n \"\"\"\r\n def __init__(self, model_name: str, device: Optional[str] = 'cpu') -> None:\r\n \"\"\"\r\n :param model_name: name of the model, should either be a path\r\n to a model (.pt or .bin file) stored locally, or a\r\n pretrained model stored on the Huggingface Model Hub.\r\n\r\n :type model_name: str\r\n :param device: device type that the model should be loaded on,\r\n options: `cpu or cuda:{0, 1, ...}`\r\n :type device: str, optional\r\n \"\"\"\r\n super(IncrementalLMScorer, self).__init__(model_name, device)\r\n \r\n self.model = AutoModelForCausalLM.from_pretrained(model_name, return_dict = True)\r\n \r\n # define CLS and SEP tokens\r\n if self.tokenizer.pad_token is None:\r\n self.tokenizer.add_special_tokens({\"additional_special_tokens\": [\"<|pad|>\"]})\r\n self.tokenizer.pad_token = \"<|pad|>\"\r\n\r\n if self.tokenizer.bos_token is None:\r\n self.tokenizer.add_special_tokens({\"additional_special_tokens\": [\"<|bos|>\"]})\r\n self.tokenizer.bos_token = \"<|bos|>\"\r\n\r\n self.model.resize_token_embeddings(len(self.tokenizer))\r\n self.model.to(self.device)\r\n self.model.eval()\r\n \r\n def add_special_tokens(self, text: Union[str, List[str]]) -> Union[str, List[str]]:\r\n \"\"\"\r\n Reformats input text to add special model-dependent tokens.\r\n\r\n :param text: single string or batch of strings to be\r\n modified.\r\n :type text: Union[str, List[str]]\r\n \r\n :return: Modified input, containing special tokens as per \r\n tokenizer specification\r\n 
:rtype: Union[float, List[float]]:\r\n \"\"\"\r\n sentences = [text] if isinstance(text, str) else text\r\n sentences = [self.tokenizer.bos_token + sentence for sentence in sentences]\r\n\r\n return sentences\r\n\r\n def encode(self, text: Union[str, List[str]]) -> dict:\r\n text = [text] if isinstance(text, str) else text\r\n return self.tokenizer(text, return_tensors='pt', padding = True)\r\n \r\n def prepare_text(self, text: Union[str, List[str]]) -> Tuple:\r\n \"\"\"\r\n Prepares a batch of input text into a format fit to run LM\r\n scoring on. \r\n\r\n :param text: batch of sentences to be prepared for scoring.\r\n \r\n :return: Batch of formatted input that can be passed to\r\n ``compute_stats``\r\n \"\"\"\r\n encoded = self.encode(text)\r\n offsets = [0] * len(encoded['input_ids'])\r\n return encoded, offsets\r\n \r\n def prime_text(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]]) -> Tuple:\r\n \"\"\"\r\n Prepares a batch of input text into a format fit to run LM\r\n scoring on. \r\n\r\n :param ``Union[str, List[str]]`` preamble: Batch of prefixes/prime/preambles on which the LM is conditioned.\r\n :param ``Union[str, List[str]]`` stimuli: Batch of continuations that are scored based on the conditioned text (provided in the ``preamble``). 
The positions of the elements match their counterparts in the ``preamble``.\r\n\r\n :return: Batch of formatted input that can be passed to\r\n ``compute_stats``\r\n \"\"\"\r\n preamble_text = [preamble] if isinstance(preamble, str) else preamble\r\n preamble_encoded = self.tokenizer(preamble_text)['input_ids']\r\n preamble_lens = []\r\n for preamble_tokens in preamble_encoded:\r\n preamble_lens.append(len([token for token in preamble_tokens if token != self.tokenizer.pad_token_id and token != self.tokenizer.sep_token_id]) - 1)\r\n \r\n sentences = [preamble + \" \" + stimuli] if isinstance(preamble, str) else [p + \" \" + s for p , s in list(zip(preamble, stimuli))]\r\n \r\n return self.encode(sentences), preamble_lens\r\n \r\n def distribution(self, batch: Iterable) -> torch.Tensor:\r\n \"\"\"\r\n Returns a distribution over the vocabulary of the model.\r\n\r\n :param `Iterable` batch: A batch of inputs fit to pass to a\r\n transformer LM.\r\n\r\n :return: Tensor consisting of log probabilies over vocab items.\r\n \"\"\"\r\n batch, offsets = batch\r\n ids = batch[\"input_ids\"]\r\n ids = ids.to(self.device)\r\n attention_masks = batch[\"attention_mask\"]\r\n attention_masks = attention_masks.to(self.device)\r\n nopad_mask = ids != self.tokenizer.pad_token_id\r\n\r\n with torch.no_grad():\r\n outputs = self.model(ids, attention_mask=attention_masks)\r\n logits = outputs.logits\r\n if self.device == 'cuda:0' or self.device == \"cuda:1\":\r\n logits.detach()\r\n\r\n outputs = []\r\n for sent_index in range(len(ids)):\r\n sent_nopad_mask = nopad_mask[sent_index]\r\n # len(tokens) = len(text[sent_index]) + 1\r\n sent_tokens = [\r\n tok\r\n for i, tok in enumerate(batch.tokens(sent_index))\r\n if sent_nopad_mask[i] and i > offsets[sent_index] + 1\r\n ]\r\n\r\n # sent_ids.shape = [len(text[sent_index]) + 1]\r\n # ignore first token (<|eos|>)\r\n sent_ids = ids[sent_index, sent_nopad_mask][1:]\r\n # logits.shape = [len(text[sent_index]) + 1, vocab_size]\r\n sent_logits = 
logits[sent_index, sent_nopad_mask][:-1, :]\r\n sent_logits[:, self.tokenizer.pad_token_id] = float(\"-inf\")\r\n\r\n outputs.append(sent_logits[-1])\r\n return torch.stack(outputs, 0)\r\n\r\n def next_word_distribution(self, queries: List, surprisal: bool = False):\r\n '''\r\n Returns the log probability distribution of the next word.\r\n '''\r\n encoded = self.encode(queries)\r\n encoded = encoded.to(self.device)\r\n query_ids = [[j for j, i in enumerate(instance) if i != self.tokenizer.pad_token_id][-1] for instance in encoded['input_ids'].tolist()]\r\n\r\n logits = self.model(**encoded).logits.detach()\r\n logits[:, :, self.tokenizer.pad_token_id] = float(\"-inf\")\r\n\r\n logits = logits[torch.arange(len(query_ids)), query_ids]\r\n logprobs = logits - logits.logsumexp(1).unsqueeze(1)\r\n\r\n if surprisal:\r\n logprobs = -1.0 * logprobs\r\n \r\n return logprobs\r\n\r\n def compute_stats(self, batch: Iterable, rank: bool = False, prob: bool = False, base_two: bool = False, return_tensors: bool = False) -> Union[Tuple[List[float], List[float]], List[float]]:\r\n '''\r\n Primary computational method that processes a batch of prepared sentences and returns per-token scores for each sentence. By default, returns log-probabilities.\r\n\r\n :param ``Iterable`` batch: batched input as processed by ``prepare_text`` or ``prime_text``.\r\n :param ``bool`` rank: whether the model should also return ranks per word (based on the conditional log-probability of the word in context).\r\n :param ``bool`` prob: whether the model should return probabilities instead of log-probabilities. Can only be `True` when `base_two` is `False`.\r\n :param ``bool`` base_two: whether the base of the log should be 2 (usually preferred when reporting results in bits). Can only be `True` when `prob` is `False`.\r\n :param ``bool`` return_tensors: whether the model should return scores as a list of tensors instead of a list of lists. 
This is important in some other convenient methods used in the package.\r\n\r\n :return: Either a tuple of lists, each containing probabilities and ranks per token in each sentence passed in the input.\r\n :rtype: ``Union[Tuple[List[float], List[int]], List[float]]``\r\n '''\r\n assert not (base_two and prob), \"cannot both use base (which is for a log), and a probability measure at the same time!\"\r\n\r\n encoded, offsets = batch\r\n encoded = encoded.to(self.device)\r\n \r\n ids = [[i for i in instance if i != self.tokenizer.pad_token_id] for instance in encoded['input_ids'].tolist()]\r\n\r\n ## Ignore the probabilities of the first token.\r\n effective_ids = [id[1:] for id in ids]\r\n\r\n with torch.no_grad():\r\n logits = self.model(**encoded).logits.detach()\r\n\r\n logits[:, :, self.tokenizer.pad_token_id] = float(\"-inf\")\r\n\r\n logits = logits.split([1]*len(offsets))\r\n\r\n ## Set up storage variables\r\n scores = []\r\n if rank:\r\n ranks = []\r\n\r\n for logit, idx, offset in zip(logits, effective_ids, offsets):\r\n length = len(idx)\r\n logit = logit.squeeze(0)[:, :-1][torch.arange(offset, length),]\r\n\r\n logprob_distribution = logit - logit.logsumexp(1).unsqueeze(1)\r\n query_ids = idx[offset:]\r\n if base_two:\r\n '''\r\n Log_2(X) = log_e(X)/log_e(2) (broadcasted)\r\n '''\r\n score = (logprob_distribution[torch.arange(length - offset), query_ids] / torch.tensor(2).log()).tolist()\r\n else:\r\n if prob:\r\n score = logprob_distribution[torch.arange(length - offset), query_ids].exp().tolist()\r\n else:\r\n score = logprob_distribution[torch.arange(length - offset), query_ids].tolist()\r\n\r\n if rank:\r\n # shape = logprob_distribution.shape\r\n '''\r\n Double argsort trick:\r\n first argsort returns idxes of values that would return a sorted tensor,\r\n second argsort returns ranks (0 indexed)\r\n\r\n Proof: https://www.berkayantmen.com/rank.html\r\n\r\n TODO: Try to implement ranking in linear time but across arbitrary dimensions:\r\n 
https://stackoverflow.com/a/5284703\r\n '''\r\n word_ranks = (-1.0 * logprob_distribution).argsort().argsort() + 1\r\n # inv_ranks = logprob_distribution.argsort().argsort() + 1\r\n # word_ranks = shape[1] - inv_ranks + 1\r\n word_ranks = word_ranks[torch.arange(length - offset), query_ids].tolist()\r\n ranks.append(word_ranks)\r\n\r\n scores.append(score)\r\n\r\n if return_tensors:\r\n scores = [torch.tensor(l) for l in scores]\r\n\r\n if rank:\r\n return scores, ranks\r\n else:\r\n return scores\r\n\r\n def sequence_score(self, batch, reduction = lambda x: x.mean(0).item(), base_two = False):\r\n '''\r\n TODO: reduction should be a string, if it's a function, specify what kind of function. --> how to ensure it is always that type?\r\n '''\r\n tokenized = self.prepare_text(batch)\r\n scores = self.compute_stats(tokenized, rank = False, base_two = base_two, return_tensors = True)\r\n reduced = list(map(reduction, scores))\r\n return reduced\r\n\r\n def token_score(self, batch: Union[str, List[str]], surprisal: bool = False, prob: bool = False, base_two: bool = False, rank: bool = False) -> Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]:\r\n '''\r\n For every input sentence, returns a list of tuples in the following format:\r\n `(token, score)`,\r\n\r\n where score represents the log-probability (by default) of the token given context. 
Can also return ranks along with scores.\r\n\r\n :param ``Union[str, List[str]]`` batch: a single sentence or a batch of sentences.\r\n :param ``bool`` surprisal: If `True`, returns per-word surprisals instead of log-probabilities.\r\n :param ``bool`` prob: If `True`, returns per-word probabilities instead of log-probabilities.\r\n :param ``bool`` base_two: If `True`, uses log base 2 instead of natural-log (returns bits of values in case of surprisals)\r\n :param ``bool`` rank: If `True`, also returns the rank of each word in context (based on the log-probability value)\r\n\r\n :return: A `List` containing a `Tuple` consisting of the word, its associated score, and optionally, its rank.\r\n :rtype: ``Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]``\r\n '''\r\n\r\n assert not (surprisal and prob), \"cannot both evaluate probability and surprisal at the same time!\"\r\n assert not (base_two and prob), \"cannot both use base (which is for a log), and a probability measure at the same time!\"\r\n\r\n tokenized = self.prepare_text(batch)\r\n if rank:\r\n scores, ranks = self.compute_stats(tokenized, rank = rank, prob = prob, base_two = base_two, return_tensors=True)\r\n else:\r\n scores = self.compute_stats(tokenized, prob = prob, base_two = base_two, return_tensors=True)\r\n\r\n if surprisal:\r\n scores = [-1.0 * s for s in scores]\r\n\r\n scores = [s.tolist() for s in scores]\r\n\r\n indices = [[i for i in indexed if i != self.tokenizer.pad_token_id] for indexed in tokenized[0]['input_ids'].tolist()]\r\n tokens = [self.decode(idx) for idx in indices]\r\n\r\n if rank:\r\n assert len(tokens) == len(scores) == len(ranks)\r\n else:\r\n assert len(tokens) == len(scores)\r\n\r\n res = []\r\n if rank:\r\n for t, s, r in zip(tokens, scores, ranks):\r\n if len(t) > len(s):\r\n diff = len(t) - len(s)\r\n sc = [0.0]*diff + s\r\n ra = [0]*diff + r\r\n res.append(list(zip(t, sc, ra)))\r\n else:\r\n # no padding needed: zip the raw tokens, scores, and ranks\r\n # (the original referenced the undefined names `sc` and `ra` here)\r\n res.append(list(zip(t, s, r)))\r\n # return [list(zip(t, s, r)) for 
t, s, r in zip(tokens, scores, ranks)]\r\n else:\r\n for t, s in zip(tokens, scores):\r\n if len(t) > len(s):\r\n diff = len(t) - len(s)\r\n sc = [0.0]*diff + s\r\n res.append(list(zip(t, sc)))\r\n else:\r\n # no padding needed: zip the raw tokens and scores\r\n # (the original referenced the undefined name `sc` here)\r\n res.append(list(zip(t, s)))\r\n\r\n return res\r\n\r\n def logprobs(self, batch: Iterable, rank = False) -> Union[float, List[float]]:\r\n \"\"\"\r\n Returns log probabilities\r\n\r\n :param `Iterable` batch: A batch of inputs fit to pass to a\r\n transformer LM.\r\n :param rank: Specifies whether to also return ranks of words.\r\n :type rank: bool\r\n\r\n :return: List of LM score metrics (probability and rank)\r\n and tokens.\r\n :rtype: Union[List[Tuple[torch.Tensor, str]], List[Tuple[torch.Tensor, str, int]]]\r\n \"\"\"\r\n warnings.warn(\r\n \"logprobs is deprecated, use compute_stats instead\",\r\n DeprecationWarning\r\n )\r\n batch, offsets = batch\r\n ids = batch[\"input_ids\"]\r\n ids = ids.to(self.device)\r\n attention_masks = batch[\"attention_mask\"]\r\n attention_masks = attention_masks.to(self.device)\r\n nopad_mask = ids != self.tokenizer.pad_token_id\r\n\r\n with torch.no_grad():\r\n outputs = self.model(ids, attention_mask=attention_masks)\r\n logits = outputs.logits\r\n if self.device == 'cuda:0' or self.device == \"cuda:1\":\r\n # detach returns a new tensor, so the result must be reassigned\r\n logits = logits.detach()\r\n \r\n outputs = []\r\n for sent_index in range(len(ids)):\r\n sent_nopad_mask = nopad_mask[sent_index]\r\n # len(tokens) = len(text[sent_index]) + 1\r\n sent_tokens = [\r\n tok\r\n for i, tok in enumerate(batch.tokens(sent_index))\r\n if sent_nopad_mask[i] and i > offsets[sent_index]\r\n ]\r\n\r\n # sent_ids.shape = [len(text[sent_index]) + 1]\r\n # ignore first token (<|eos|>)\r\n sent_ids = ids[sent_index, sent_nopad_mask][1:]\r\n # logits.shape = [len(text[sent_index]) + 1, vocab_size]\r\n sent_logits = logits[sent_index, sent_nopad_mask][:-1, :]\r\n sent_logits[:, self.tokenizer.pad_token_id] = float(\"-inf\")\r\n # ids_scores.shape = [seq_len + 1]\r\n # select only the ids present in the 
sentence out of all vocab items (as a 2d array)\r\n sent_ids_scores = sent_logits.gather(1, sent_ids.unsqueeze(1)).squeeze(1)\r\n # log_prob.shape = [seq_len + 1]\r\n sent_log_probs = sent_ids_scores - sent_logits.logsumexp(1)\r\n \r\n sent_log_probs = sent_log_probs.type(torch.DoubleTensor)\r\n sent_log_probs = sent_log_probs[offsets[sent_index]:]\r\n lengths = len(sent_log_probs)\r\n if rank:\r\n shape = sent_logits.shape\r\n inv_ranks = (sent_logits).argsort().argsort() + 1\r\n ranks = shape[1] - inv_ranks + 1\r\n word_ranks = ranks[list(range(shape[0]))[offsets[sent_index]:], sent_ids[offsets[sent_index]: ].tolist()].split(lengths)\r\n word_ranks = [x[0] for x in word_ranks]\r\n outputs.append((sent_log_probs, sent_tokens, word_ranks))\r\n else:\r\n outputs.append((sent_log_probs, sent_tokens))\r\n # output = (sent_log_probs.sum(), sent_ids, sent_tokens)\r\n # outputs.append(output)\r\n return outputs\r\n\r\n\r\nclass Seq2SeqScorer(LMScorer):\r\n \"\"\"\r\n Class for sequence-to-sequence (encoder-decoder) language models such as T5, BART, etc.\r\n\r\n :param model_name: name of the model, should either be a path\r\n to a model (.pt or .bin file) stored locally, or a\r\n pretrained model stored on the Huggingface Model Hub.\r\n :type model_name: str\r\n :param device: device type that the model should be loaded on,\r\n options: `cpu or cuda:{0, 1, ...}`\r\n :type device: str, optional\r\n \"\"\"\r\n def __init__(self, model_name: str, device: Optional[str] = 'cpu') -> None:\r\n \"\"\"\r\n :param model_name: name of the model, should either be a path\r\n to a model (.pt or .bin file) stored locally, or a\r\n pretrained model stored on the Huggingface Model Hub.\r\n\r\n :type model_name: str\r\n :param device: device type that the model should be loaded on,\r\n options: `cpu or cuda:{0, 1, ...}`\r\n :type device: str, optional\r\n \"\"\"\r\n super(Seq2SeqScorer, self).__init__(model_name, device)\r\n \r\n self.model = 
AutoModelForSeq2SeqLM.from_pretrained(\r\n model_name, return_dict = True\r\n )\r\n \r\n # define CLS and SEP tokens\r\n if self.tokenizer.pad_token is None:\r\n self.tokenizer.add_special_tokens({\"additional_special_tokens\": [\"<|pad|>\"]})\r\n self.tokenizer.pad_token = \"<|pad|>\"\r\n\r\n if self.tokenizer.bos_token is None:\r\n self.tokenizer.add_special_tokens({\"additional_special_tokens\": [\"<|bos|>\"]})\r\n self.tokenizer.bos_token = \"<|bos|>\"\r\n\r\n self.model.resize_token_embeddings(len(self.tokenizer))\r\n self.model.to(self.device)\r\n self.model.eval()\r\n \r\n def add_special_tokens(self, text: Union[str, List[str]]) -> Union[str, List[str]]:\r\n \"\"\"\r\n Reformats input text to add special model-dependent tokens.\r\n\r\n :param text: single string or batch of strings to be\r\n modified.\r\n :type text: Union[str, List[str]]\r\n \r\n :return: Modified input, containing special tokens as per \r\n tokenizer specification\r\n :rtype: Union[float, List[float]]:\r\n \"\"\"\r\n sentences = [text] if isinstance(text, str) else text\r\n sentences = [self.tokenizer.bos_token + sentence for sentence in sentences]\r\n\r\n return sentences\r\n\r\n def encode(self, text: Union[str, List[str]]) -> dict:\r\n text = [text] if isinstance(text, str) else text\r\n return self.tokenizer(text, return_tensors='pt', padding = True)\r\n \r\n def prepare_text(self, text: Union[str, List[str]]) -> Tuple:\r\n \"\"\"\r\n Prepares a batch of input text into a format fit to run LM\r\n scoring on. \r\n\r\n :param text: batch of sentences to be prepared for scoring.\r\n \r\n :return: Batch of formatted input that can be passed to\r\n ``compute_stats``\r\n \"\"\"\r\n encoded = self.encode(text)\r\n offsets = [0] * len(encoded['input_ids'])\r\n return encoded, offsets\r\n \r\n def prime_text(self, preamble: Union[str, List[str]], stimuli: Union[str, List[str]]) -> Tuple:\r\n \"\"\"\r\n Prepares a batch of input text into a format fit to run LM\r\n scoring on. 
\r\n\r\n :param ``Union[str, List[str]]`` preamble: Batch of prefixes/prime/preambles on which the LM is conditioned.\r\n :param ``Union[str, List[str]]`` stimuli: Batch of continuations that are scored based on the conditioned text (provided in the ``preamble``). The positions of the elements match their counterparts in the ``preamble``.\r\n\r\n :return: Batch of formatted input that can be passed to\r\n ``compute_stats``\r\n \"\"\"\r\n preamble_text = [preamble] if isinstance(preamble, str) else preamble\r\n preamble_encoded = self.tokenizer(preamble_text)['input_ids']\r\n preamble_lens = []\r\n for preamble_tokens in preamble_encoded:\r\n preamble_lens.append(len([token for token in preamble_tokens if token != self.tokenizer.pad_token_id and token != self.tokenizer.sep_token_id]) - 1)\r\n \r\n sentences = [preamble + \" \" + stimuli] if isinstance(preamble, str) else [p + \" \" + s for p , s in list(zip(preamble, stimuli))]\r\n \r\n return self.encode(sentences), preamble_lens\r\n \r\n def distribution(self, batch: Iterable) -> torch.Tensor:\r\n \"\"\"\r\n Returns a distribution over the vocabulary of the model.\r\n\r\n :param `Iterable` batch: A batch of inputs fit to pass to a\r\n transformer LM.\r\n\r\n :return: Tensor consisting of log probabilies over vocab items.\r\n \"\"\"\r\n batch, offsets = batch\r\n ids = batch[\"input_ids\"]\r\n ids = ids.to(self.device)\r\n attention_masks = batch[\"attention_mask\"]\r\n attention_masks = attention_masks.to(self.device)\r\n nopad_mask = ids != self.tokenizer.pad_token_id\r\n\r\n with torch.no_grad():\r\n outputs = self.model(ids, attention_mask=attention_masks)\r\n logits = outputs.logits\r\n if self.device == 'cuda:0' or self.device == \"cuda:1\":\r\n logits.detach()\r\n\r\n outputs = []\r\n for sent_index in range(len(ids)):\r\n sent_nopad_mask = nopad_mask[sent_index]\r\n # len(tokens) = len(text[sent_index]) + 1\r\n sent_tokens = [\r\n tok\r\n for i, tok in enumerate(batch.tokens(sent_index))\r\n if 
sent_nopad_mask[i] and i > offsets[sent_index] + 1\r\n ]\r\n\r\n # sent_ids.shape = [len(text[sent_index]) + 1]\r\n # ignore first token (<|eos|>)\r\n sent_ids = ids[sent_index, sent_nopad_mask][1:]\r\n # logits.shape = [len(text[sent_index]) + 1, vocab_size]\r\n sent_logits = logits[sent_index, sent_nopad_mask][:-1, :]\r\n sent_logits[:, self.tokenizer.pad_token_id] = float(\"-inf\")\r\n\r\n outputs.append(sent_logits[-1])\r\n return torch.stack(outputs, 0)\r\n\r\n def next_word_distribution(self, queries: List, surprisal: bool = False):\r\n '''\r\n Returns the log probability distribution of the next word.\r\n '''\r\n encoded = self.encode(queries)\r\n encoded = encoded.to(self.device)\r\n query_ids = [[j for j, i in enumerate(instance) if i != self.tokenizer.pad_token_id][-1] for instance in encoded['input_ids'].tolist()]\r\n\r\n logits = self.model(**encoded).logits.detach()\r\n logits[:, :, self.tokenizer.pad_token_id] = float(\"-inf\")\r\n\r\n logits = logits[torch.arange(len(query_ids)), query_ids]\r\n logprobs = logits - logits.logsumexp(1).unsqueeze(1)\r\n\r\n if surprisal:\r\n logprobs = -1.0 * logprobs\r\n \r\n return logprobs\r\n\r\n def compute_stats(self, batch: Iterable, source: Iterable, rank: bool = False, prob: bool = False, base_two: bool = False, return_tensors: bool = False) -> Union[Tuple[List[float], List[float]], List[float]]:\r\n '''\r\n Primary computational method that processes a batch of prepared sentences and returns per-token scores for each sentence. By default, returns log-probabilities.\r\n\r\n :param ``Iterable`` batch: batched input as processed by ``prepare_text`` or ``prime_text``.\r\n :param ``bool`` rank: whether the model should also return ranks per word (based on the conditional log-probability of the word in context).\r\n :param ``bool`` prob: whether the model should return probabilities instead of log-probabilities. 
Can only be `True` when `base_two` is `False`.\r\n :param ``bool`` base_two: whether the base of the log should be 2 (usually preferred when reporting results in bits). Can only be `True` when `prob` is `False`.\r\n :param ``bool`` return_tensors: whether the model should return scores as a list of tensors instead of a list of lists. This is important in some other convenience methods used in the package.\r\n\r\n :return: Either a tuple of lists, each containing probabilities and ranks per token in each sentence passed in the input.\r\n :rtype: ``Union[Tuple[List[float], List[int]], List[float]]``\r\n '''\r\n assert not (base_two and prob), \"cannot both use base (which is for a log), and a probability measure at the same time!\"\r\n\r\n source_encoded, source_offsets = source\r\n target_encoded, target_offsets = batch\r\n source_ids = source_encoded['input_ids'].to(self.device)\r\n target_ids = target_encoded['input_ids'].to(self.device)\r\n \r\n source_ids_list = [[i for i in instance if i != self.tokenizer.pad_token_id] for instance in source_encoded['input_ids'].tolist()]\r\n target_ids_list = [[i for i in instance if i != self.tokenizer.pad_token_id] for instance in target_encoded['input_ids'].tolist()]\r\n\r\n ## Ignore the probabilities of the first token.\r\n source_effective_ids = [id[1:] for id in source_ids_list]\r\n target_effective_ids = [id[1:] for id in target_ids_list]\r\n\r\n with torch.no_grad():\r\n logits = self.model(input_ids=source_ids, labels=target_ids).logits.detach()\r\n\r\n logits[:, :, self.tokenizer.pad_token_id] = float(\"-inf\")\r\n\r\n logits = logits.split([1]*len(target_offsets))\r\n\r\n ## Set up storage variables\r\n scores = []\r\n if rank:\r\n ranks = []\r\n\r\n for logit, idx, offset in zip(logits, target_effective_ids, target_offsets):\r\n length = len(idx)\r\n # drop the final time step, matching IncrementalLMScorer.compute_stats\r\n # (the original sliced `[:, -4:-1]`, an apparent debugging leftover that\r\n # keeps only three vocabulary entries)\r\n logit = logit.squeeze(0)[:, :-1][torch.arange(offset, length),]\r\n\r\n logprob_distribution = logit - logit.logsumexp(1).unsqueeze(1)\r\n query_ids = 
idx[offset:]\r\n if base_two:\r\n '''\r\n Log_2(X) = log_e(X)/log_e(2) (broadcasted)\r\n '''\r\n score = (logprob_distribution[torch.arange(length - offset), query_ids] / torch.tensor(2).log()).tolist()\r\n else:\r\n if prob:\r\n score = logprob_distribution[torch.arange(length - offset), query_ids].exp().tolist()\r\n else:\r\n score = logprob_distribution[torch.arange(length - offset), query_ids].tolist()\r\n\r\n if rank:\r\n # shape = logprob_distribution.shape\r\n '''\r\n Double argsort trick:\r\n first argsort returns idxes of values that would return a sorted tensor,\r\n second argsort returns ranks (0 indexed)\r\n\r\n Proof: https://www.berkayantmen.com/rank.html\r\n\r\n TODO: Try to implement ranking in linear time but across arbitrary dimensions:\r\n https://stackoverflow.com/a/5284703\r\n '''\r\n word_ranks = (-1.0 * logprob_distribution).argsort().argsort() + 1\r\n # inv_ranks = logprob_distribution.argsort().argsort() + 1\r\n # word_ranks = shape[1] - inv_ranks + 1\r\n word_ranks = word_ranks[torch.arange(length - offset), query_ids].tolist()\r\n ranks.append(word_ranks)\r\n\r\n scores.append(score)\r\n\r\n if return_tensors:\r\n scores = [torch.tensor(l) for l in scores]\r\n\r\n if rank:\r\n return scores, ranks\r\n else:\r\n return scores\r\n\r\n def sequence_score(self, batch, reduction = lambda x: x.mean(0).item(), base_two = False,\r\n source_format = 'blank', source = None):\r\n '''\r\n TODO: reduction should be a string, if it's a function, specify what kind of function. 
--> how to ensure it is always that type?\r\n '''\r\n if source is not None:\r\n assert len(source) == len(batch)\r\n source_format = \"custom\"\r\n\r\n tokenized = self.prepare_text(batch)\r\n if source_format == 'blank':\r\n source = [\"\"] * len(batch)\r\n elif source_format == 'copy':\r\n source = batch\r\n source = self.prepare_text(source)\r\n\r\n scores = self.compute_stats(tokenized, source, rank = False, base_two = base_two, return_tensors = True)\r\n reduced = list(map(reduction, scores))\r\n return reduced\r\n\r\n def token_score(self, batch: Union[str, List[str]], surprisal: bool = False, prob: bool = False, base_two: bool = False, rank: bool = False, source_format: str = 'blank') -> Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]:\r\n '''\r\n For every input sentence, returns a list of tuples in the following format:\r\n `(token, score)`,\r\n\r\n where score represents the log-probability (by default) of the token given context. Can also return ranks along with scores.\r\n\r\n :param ``Union[str, List[str]]`` batch: a single sentence or a batch of sentences.\r\n :param ``bool`` surprisal: If `True`, returns per-word surprisals instead of log-probabilities.\r\n :param ``bool`` prob: If `True`, returns per-word probabilities instead of log-probabilities.\r\n :param ``bool`` base_two: If `True`, uses log base 2 instead of natural-log (returns bits of values in case of surprisals)\r\n :param ``bool`` rank: If `True`, also returns the rank of each word in context (based on the log-probability value)\r\n\r\n :return: A `List` containing a `Tuple` consisting of the word, its associated score, and optionally, its rank.\r\n :rtype: ``Union[List[Tuple[str, float]], List[Tuple[str, float, int]]]``\r\n '''\r\n\r\n assert not (surprisal and prob), \"cannot both evaluate probability and surprisal at the same time!\"\r\n assert not (base_two and prob), \"cannot both use base (which is for a log), and a probability measure at the same time!\"\r\n\r\n 
tokenized = self.prepare_text(batch)\r\n if source_format == 'blank':\r\n source = [\"\"] * len(batch)\r\n elif source_format == 'copy':\r\n source = batch\r\n source = self.prepare_text(source)\r\n\r\n if rank:\r\n scores, ranks = self.compute_stats(tokenized, source, rank = rank, prob = prob, base_two = base_two, return_tensors=True)\r\n else:\r\n scores = self.compute_stats(tokenized, source, prob = prob, base_two = base_two, return_tensors=True)\r\n\r\n if surprisal:\r\n scores = [-1.0 * s for s in scores]\r\n\r\n scores = [s.tolist() for s in scores]\r\n\r\n indices = [[i for i in indexed if i != self.tokenizer.pad_token_id] for indexed in tokenized[0]['input_ids'].tolist()]\r\n tokens = [self.decode(idx) for idx in indices]\r\n\r\n if rank:\r\n assert len(tokens) == len(scores) == len(ranks)\r\n else:\r\n assert len(tokens) == len(scores)\r\n\r\n res = []\r\n if rank:\r\n for t, s, r in zip(tokens, scores, ranks):\r\n if len(t) > len(s):\r\n diff = len(t) - len(s)\r\n sc = [0.0]*diff + s\r\n ra = [0]*diff + r\r\n res.append(list(zip(t, sc, ra)))\r\n else:\r\n # use the unpadded scores/ranks here; `sc`/`ra` are only defined in the branch above\r\n res.append(list(zip(t, s, r)))\r\n # return [list(zip(t, s, r)) for t, s, r in zip(tokens, scores, ranks)]\r\n else:\r\n for t, s in zip(tokens, scores):\r\n if len(t) > len(s):\r\n diff = len(t) - len(s)\r\n sc = [0.0]*diff + s\r\n res.append(list(zip(t, sc)))\r\n else:\r\n # use the unpadded scores here; `sc` is only defined in the branch above\r\n res.append(list(zip(t, s)))\r\n\r\n return res\r\n\r\n def logprobs(self, batch: Iterable, rank = False, source_format: str = 'blank') -> Union[float, List[float]]:\r\n \"\"\"\r\n Returns log probabilities\r\n\r\n :param `Iterable` batch: A batch of inputs fit to pass to a\r\n transformer LM.\r\n :param rank: Specifies whether to also return ranks of words.\r\n :type rank: bool\r\n\r\n :return: List of LM score metrics (probability and rank)\r\n and tokens.\r\n :rtype: Union[List[Tuple[torch.Tensor, str]], List[Tuple[torch.Tensor, str, int]]]\r\n \"\"\"\r\n warnings.warn(\r\n \"logprobs is deprecated, use compute_stats 
instead\",\r\n DeprecationWarning\r\n )\r\n batch, offsets = batch\r\n ids = batch[\"input_ids\"]\r\n ids = ids.to(self.device)\r\n attention_masks = batch[\"attention_mask\"]\r\n attention_masks = attention_masks.to(self.device)\r\n nopad_mask = ids != self.tokenizer.pad_token_id\r\n\r\n with torch.no_grad():\r\n outputs = self.model(ids, attention_mask=attention_masks)\r\n logits = outputs.logits\r\n if self.device == 'cuda:0' or self.device == \"cuda:1\":\r\n logits.detach()\r\n \r\n outputs = []\r\n for sent_index in range(len(ids)):\r\n sent_nopad_mask = nopad_mask[sent_index]\r\n # len(tokens) = len(text[sent_index]) + 1\r\n sent_tokens = [\r\n tok\r\n for i, tok in enumerate(batch.tokens(sent_index))\r\n if sent_nopad_mask[i] and i > offsets[sent_index]\r\n ]\r\n\r\n # sent_ids.shape = [len(text[sent_index]) + 1]\r\n # ignore first token (<|eos|>)\r\n sent_ids = ids[sent_index, sent_nopad_mask][1:]\r\n # logits.shape = [len(text[sent_index]) + 1, vocab_size]\r\n sent_logits = logits[sent_index, sent_nopad_mask][:-1, :]\r\n sent_logits[:, self.tokenizer.pad_token_id] = float(\"-inf\")\r\n # ids_scores.shape = [seq_len + 1]\r\n # select only the ids present in the sentence out of all vocab items (as a 2d array)\r\n sent_ids_scores = sent_logits.gather(1, sent_ids.unsqueeze(1)).squeeze(1)\r\n # log_prob.shape = [seq_len + 1]\r\n sent_log_probs = sent_ids_scores - sent_logits.logsumexp(1)\r\n \r\n sent_log_probs = sent_log_probs.type(torch.DoubleTensor)\r\n sent_log_probs = sent_log_probs[offsets[sent_index]:]\r\n lengths = len(sent_log_probs)\r\n if rank:\r\n shape = sent_logits.shape\r\n inv_ranks = (sent_logits).argsort().argsort() + 1\r\n ranks = shape[1] - inv_ranks + 1\r\n word_ranks = ranks[list(range(shape[0]))[offsets[sent_index]:], sent_ids[offsets[sent_index]: ].tolist()].split(lengths)\r\n word_ranks = [x[0] for x in word_ranks]\r\n outputs.append((sent_log_probs, sent_tokens, word_ranks))\r\n else:\r\n outputs.append((sent_log_probs, 
sent_tokens))\r\n # output = (sent_log_probs.sum(), sent_ids, sent_tokens)\r\n # outputs.append(output)\r\n return outputs\r\n```\r\n\r\n\r\n```python\r\n\r\n```\r\n", "Hi, I am sorry but this goes outside the scope of `transformers`. I believe the issue was fixed, as the *Chinese text is correctly tokenized* in transformers!\r\n\r\nThe expected behaviour was matched in the example script I provided you with, and I have no idea what post-processing might be done here that prevents the correct prediction. I think the issue should be opened on their repo!" ]
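The double-argsort rank trick used in the quoted `compute_stats` and `logprobs` code above can be illustrated outside of PyTorch. This is a minimal sketch in plain Python (the function name `ranks_desc` is made up for illustration): the first "argsort" yields the indices that would sort the scores, and the second yields each element's position in that order, i.e. its rank, with the highest score receiving rank 1.

```python
def ranks_desc(scores):
    # First "argsort": indices that would sort the scores in descending order
    # (the analogue of (-1.0 * logprob_distribution).argsort() in the quoted code).
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    # Second "argsort": each index's position inside `order` is its rank.
    ranks = [0] * len(scores)
    for position, index in enumerate(order):
        ranks[index] = position + 1  # +1 to make ranks 1-indexed, as in the quoted code
    return ranks

# The highest score gets rank 1, the lowest gets rank N.
print(ranks_desc([0.1, 0.7, 0.2]))  # [3, 1, 2]
```

This runs in O(n log n) per row; the linear-time alternative mentioned in the quoted TODO comment would replace the sort with a single counting pass.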
1,668
1,669
1,669
NONE
null
### System Info

Linux Debian, bert-base-multilingual-cased, bert-base-chinese

### Who can help?

@SaulLu @LysandreJik @ArthurZucker

### Information

- [X] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
from minicons import scorer
import torch
from torch.utils.data import DataLoader
import numpy as np
import json

model = scorer.MaskedLMScorer('bert-base-multilingual-uncased', 'cpu')
sentences = ["我 昨天 下午 我 就 是 直接 买 了 一份 那个 凉菜", "他们 那边 都 是 小 小葱 包括 重庆 那边"]
model.token_score(sentences, surprisal = True, base_two = True)
```

### Expected behavior

Hi everyone,

A package called "minicons" (https://github.com/kanishkamisra/minicons) can extract word representations from contextualized word embeddings and compute word probabilities in context. The package is built on Transformer language models: a model such as "bert-base-multilingual-uncased" can be loaded in minicons to compute token surprisals or probabilities for different languages, given a text in that language as input. This works for English, German, Spanish, and other alphabet-based languages. However, it does not seem to work for Chinese. Chinese input is pre-processed with word segmentation (words separated by spaces). Despite this, when the input Chinese text contains multi-character word segments (two-character, or three-or-more-character combinations), the output still gives the surprisal of each individual Chinese character rather than of each Chinese word. This is very different from the outputs for English, German, Spanish, and other alphabet-based languages.

I have checked the code of the "minicons" package. It uses `self.tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast = True)` to tokenize the input text in a given language. I think this tokenization method is fine, because some Chinese Transformer language models use a similar one, like "bert-base-chinese" (https://huggingface.co/bert-base-chinese?text=%E7%94%9F%E6%B4%BB%E7%9A%84%E7%9C%9F%E8%B0%9B%E6%98%AF%5BMASK%5D%E3%80%82). The following is an example.

```python
from minicons import scorer
import torch
from torch.utils.data import DataLoader
import numpy as np
import json

model = scorer.MaskedLMScorer('bert-base-chinese', 'cpu')
sentences = ["他边 都 是 小 小葱 包括 重庆 那边"]
model.token_score(sentences, surprisal = True, base_two = True)
```

This is the real output:

```python
[[('他', 12.03427505493164), ('边', 15.405355453491211), ('都', 2.9198732376098633), ('是', 0.4283633828163147), ('小', 4.383847236633301), ('小', 8.884271621704102), ('葱', 19.068641662597656), ('包', 0.0020940606482326984), ('括', 8.549302101135254), ('重', 0.4292893409729004), ('庆', 3.0496609210968018), ('那', 3.522364377975464), ('边', 13.743260383605957)]]
```

The following is the desirable result:

```python
[[('他边', 12.03427505493164), ('都', 2.9198732376098633), ('是', 0.4283633828163147), ('小', 4.383847236633301), ('小葱', 19.068641662597656), ('包括', 8.549302101135254), ('重庆', 3.0496609210968018), ('那边', 13.743260383605957)]]
```

This means that the input text does not seem to be tokenized at the word level. What I want is the probability of each word ("小葱", "那边", etc.) rather than of each single character ("小", "葱", "那", "边", etc.). It would be greatly appreciated if you could kindly solve this problem.

Best,
Kevin
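One possible post-processing workaround for the word-level scores requested above (a sketch, not part of minicons or transformers; `merge_char_scores` is a hypothetical helper with illustrative surprisal values): since the input is whitespace-segmented, the per-character surprisals can be grouped back into words. Summing the log-based scores of a word's characters is one convention (it corresponds to multiplying the underlying probabilities); note the desired output shown in the issue instead keeps a single character's score per word.

```python
def merge_char_scores(segmented_sentence, char_scores):
    """Group per-character (token, surprisal) pairs into per-word pairs.

    Assumes one score per character and that concatenating the words of
    `segmented_sentence` reproduces the scored characters in order.
    Surprisals are summed, i.e. probabilities are multiplied.
    """
    result = []
    position = 0
    for word in segmented_sentence.split():
        total = sum(score for _, score in char_scores[position:position + len(word)])
        position += len(word)
        result.append((word, total))
    return result

# Illustrative (not model-produced) surprisal values:
chars = [('他', 12.0), ('边', 15.5), ('都', 2.5), ('是', 0.5),
         ('小', 4.25), ('小', 8.75), ('葱', 19.25)]
print(merge_char_scores("他边 都 是 小 小葱", chars))
# [('他边', 27.5), ('都', 2.5), ('是', 0.5), ('小', 4.25), ('小葱', 28.0)]
```

This only works when the tokenizer emits exactly one token per Chinese character, as bert-base-chinese does; subword tokenizers that merge or split differently would need the tokenizer's offset mapping instead.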
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20285/timeline
completed
null
null