| column | dtype | stats |
| --- | --- | --- |
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
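Per the schema above, the rows below mix plain issues and pull requests: GitHub's issues feed includes PRs, and in this dump they are the rows whose `pull_request` column holds a dict rather than null. A minimal sketch of splitting the two (the record literals are illustrative, shaped like the rows below):

```python
def is_pull_request(record: dict) -> bool:
    # PRs carry a non-null "pull_request" field in the issues feed;
    # plain issues have it set to None/null.
    return record.get("pull_request") is not None

# Illustrative records shaped like the rows in this dump.
issue = {"number": 22594, "state": "closed", "pull_request": None}
pr = {"number": 22593, "state": "closed",
      "pull_request": {"merged_at": 1680716595000}}
print(is_pull_request(issue), is_pull_request(pr))  # False True
```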
https://api.github.com/repos/huggingface/transformers/issues/22594
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22594/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22594/comments
https://api.github.com/repos/huggingface/transformers/issues/22594/events
https://github.com/huggingface/transformers/issues/22594
1,656,001,656
I_kwDOCUB6oc5itJR4
22,594
Minimum set of requirements
{ "login": "markdjwilliams", "id": 25598354, "node_id": "MDQ6VXNlcjI1NTk4MzU0", "avatar_url": "https://avatars.githubusercontent.com/u/25598354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/markdjwilliams", "html_url": "https://github.com/markdjwilliams", "followers_url": "https://api.github.com/users/markdjwilliams/followers", "following_url": "https://api.github.com/users/markdjwilliams/following{/other_user}", "gists_url": "https://api.github.com/users/markdjwilliams/gists{/gist_id}", "starred_url": "https://api.github.com/users/markdjwilliams/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/markdjwilliams/subscriptions", "organizations_url": "https://api.github.com/users/markdjwilliams/orgs", "repos_url": "https://api.github.com/users/markdjwilliams/repos", "events_url": "https://api.github.com/users/markdjwilliams/events{/privacy}", "received_events_url": "https://api.github.com/users/markdjwilliams/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Transformers also makes use of `extras` and the base requirements are limited to what's strictly necessary (see [here](https://github.com/huggingface/transformers/blob/176ceff91f5e5ff15922715e5a4a4d9f66b92d14/setup.py#L412)).", "Thank you. I shouldn't have assumed that the presence of `requirements.txt` meant that `setup.py` was using it.", "Which requirements.txt are you talking about? We only have some for the examples, but there isn't one at the root of the repo." ]
1,680
1,680
1,680
NONE
null
### Feature request Separate base requirements from development dependencies, as the existing `requirements.txt` file conflates them. For example, `accelerate` already does this via setuptools' "extras" [[link](https://github.com/huggingface/accelerate/blob/3cb9d5fd9c78c1da9fbc3127d6e63679a2475c6a/setup.py)] ### Motivation I am a package maintainer who would like to install `transformers` from source. The current `requirements.txt` lists multiple dependencies, covering various use cases: testing, linting, formatting, MLOps, etc. Not all of these would be required for basic usage of the package, such as loading and inference of pre-trained models. Installation could be streamlined by allowing users to only install the dependencies necessary for their workflow. ### Your contribution This would best be tackled by somebody more familiar with the extent of the functionality available in `transformers` and consequences of any changes here.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22594/timeline
completed
null
null
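Issue 22594 above asks for base requirements to be separated from development dependencies via setuptools extras, as the linked `accelerate` and `transformers` setups do. A minimal sketch of that layout, assuming hypothetical package and group names (not transformers' actual dependency lists):

```python
# Base deps: installed by a plain `pip install <package>`.
install_requires = ["numpy", "packaging"]

# Optional groups: installed via e.g. `pip install "<package>[testing]"`.
extras_require = {
    "testing": ["pytest", "pytest-xdist"],
    "quality": ["black", "ruff"],
}
# A catch-all "dev" extra that unions every group.
extras_require["dev"] = sorted(
    {dep for group in ("testing", "quality") for dep in extras_require[group]}
)
print(extras_require["dev"])  # ['black', 'pytest', 'pytest-xdist', 'ruff']
```

Both mappings would then be passed to `setuptools.setup(install_requires=..., extras_require=...)`.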
https://api.github.com/repos/huggingface/transformers/issues/22593
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22593/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22593/comments
https://api.github.com/repos/huggingface/transformers/issues/22593/events
https://github.com/huggingface/transformers/pull/22593
1,655,993,764
PR_kwDOCUB6oc5NsjXl
22,593
Use native TF checkpoints for the BLIP TF tests
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
MEMBER
null
Stop using `from_pt` now that the checkpoints have native TF weights
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22593/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22593/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22593", "html_url": "https://github.com/huggingface/transformers/pull/22593", "diff_url": "https://github.com/huggingface/transformers/pull/22593.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22593.patch", "merged_at": 1680716595000 }
https://api.github.com/repos/huggingface/transformers/issues/22592
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22592/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22592/comments
https://api.github.com/repos/huggingface/transformers/issues/22592/events
https://github.com/huggingface/transformers/issues/22592
1,655,924,617
I_kwDOCUB6oc5is2eJ
22,592
[Model request] Meta's SegmentAnything Model (SAM)
{ "login": "xenova", "id": 26504141, "node_id": "MDQ6VXNlcjI2NTA0MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xenova", "html_url": "https://github.com/xenova", "followers_url": "https://api.github.com/users/xenova/followers", "following_url": "https://api.github.com/users/xenova/following{/other_user}", "gists_url": "https://api.github.com/users/xenova/gists{/gist_id}", "starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xenova/subscriptions", "organizations_url": "https://api.github.com/users/xenova/orgs", "repos_url": "https://api.github.com/users/xenova/repos", "events_url": "https://api.github.com/users/xenova/events{/privacy}", "received_events_url": "https://api.github.com/users/xenova/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @xenova @alaradirik I would like to work on adding this model.", "@xenova I just checked the model website and I don't have the hardware resources to perform the model inference. ", "> @xenova I just checked the model website and I don't have the hardware resources to perform the model inference.\r\n\r\nHave you tried running it locally w/ python?", "I think I can run it locally, I can work on it", "> Have you tried running it locally w/ python?\r\n\r\nno, but i dont have gpu also I recently worked on adding [seaformer model](https://github.com/huggingface/transformers/pull/21819) having 14M params, running it locally on cpu took a few seconds so this one with 632M params will take time and RAM. \r\n\r\n", "> I think I can run it locally, I can work on it\r\n\r\nGreat! How is it going? Let me know if you need any help.", "@xenova I think this week I will finish it", "Hey @Xrenya, I'm pretty across these models and would love to get them into `transformers` so please reach out if I can help you in any way. ", "Hi folks, please ignore this if you're already familiar with transformers but otherwise, you can refer to the [guidelines](https://huggingface.co/docs/transformers/add_new_model) to get started with adding a model. I'd recommend first checking you can run the original repo without any issues though. \r\n\r\nHere are some summarized points that might help with model addition:\r\n- Each model, including different checkpoints of the same model, has it's own repo on the Hub (see [DETR-ResNet-50 repo](https://huggingface.co/facebook/detr-resnet-50) as an example). This is basically a git repo that stores the checkpoint specific configuration, preprocessing configuration and the model weights.\r\n- The code (PR) added to transformers acts as a boilerplate to load different checkpoints - target model trained on different datasets or with different resolution or larger / smaller architecture.\r\n- configuration_sam.py should contain all the hyperparameters, the input image size and architectural details (e.g. number of hidden layers) to initialize the model.\r\n- image_processing_sam.py should contain the ImageProcessor class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.)\r\n- processing_sam.py wraps the CLIPTokenizer used by SAM for prompt encoding and SAMImageProcessor to a single processor class. You can refer to the OWL-ViT model to see how that works.\r\n- modeling_sam.py should contain the model definition.\r\n- The conversion script:\r\n - Loads the pretrained original model and randomly initializes the HF implementation with the corresponding configuration\r\n - Copies the pretrained parameters (weights and biases) of the original model to the corresponding parameters of the randomly initialized HF model (the conversion step)\r\n - Forward propagates an arbitrary input through both the original model and converted HF model and checks if the outputs match\r\n - Uploads the converted HF model to the hub\r\n - Each model and image processor class is tested with scripts under `tests/models/<MODEL_NAME>/ `, you can refer to other test files to see what tests to add.\r\n\r\nOnce you are done, you would need to run the following commands to check the PR passes all CI tests:\r\n```\r\nmake style\r\nmake quality\r\nmake repo-consistency\r\n\r\nRUN_SLOW=TRUE pytest tests/models/sam/test_modeling_sam.py\r\nRUN_SLOW=TRUE pytest tests/models/sam/test_image_processor_sam.py\r\nRUN_SLOW=TRUE pytest tests/models/sam/test_processor_sam.py\r\n```\r\n\r\nWe can do an in-depth review once the PR passes most tests or the configuration, preprocessing and modeling is mostly complete.\r\n\r\nHope this helps!", "PR for this model is available here, sorry for not catching this issue : #22654 ", "@ArthurZucker I see, okay, next time I should push [WIP]" ]
1,680
1,681
1,681
CONTRIBUTOR
null
### Model description Meta Research recently open-sourced their "SegmentAnything Model" (SAM) for image segmentation. It would be great to have it working with this library's `ImageSegmentationPipeline`. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation GitHub repo: https://github.com/facebookresearch/segment-anything Paper: https://ai.facebook.com/research/publications/segment-anything/ Website: https://segment-anything.com/ Demo: https://segment-anything.com/demo Weights: - **`default` or `vit_h`: [ViT-H SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth)** - `vit_l`: [ViT-L SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth) - `vit_b`: [ViT-B SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22592/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22592/timeline
completed
null
null
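The model-addition checklist quoted in issue 22592 describes a conversion script that copies the original checkpoint's parameters into a freshly initialized HF model under renamed keys. With plain dicts standing in for real state dicts (the key names here are made up for illustration), the core of that renaming step can be sketched as:

```python
# Maps original checkpoint keys to their (hypothetical) HF names;
# keys absent from the map keep their original name.
ORIG_TO_HF = {
    "image_encoder.proj.weight": "vision_encoder.projection.weight",
    "mask_decoder.out.bias": "mask_decoder.output.bias",
}

def convert_state_dict(orig_sd: dict) -> dict:
    # Rename each parameter; values (the weights) are carried over as-is.
    return {ORIG_TO_HF.get(key, key): value for key, value in orig_sd.items()}

orig_sd = {"image_encoder.proj.weight": [0.1, 0.2], "ln.weight": [1.0]}
hf_sd = convert_state_dict(orig_sd)
print(sorted(hf_sd))  # ['ln.weight', 'vision_encoder.projection.weight']
```

A real script would then load `hf_sd` into the HF model and forward-propagate the same input through both models to check the outputs match, as the checklist describes.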
https://api.github.com/repos/huggingface/transformers/issues/22591
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22591/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22591/comments
https://api.github.com/repos/huggingface/transformers/issues/22591/events
https://github.com/huggingface/transformers/pull/22591
1,655,896,707
PR_kwDOCUB6oc5NsOsi
22,591
feat(model parallelism): moving the labels to the same device as the logits for gpt2 and bart
{ "login": "kaustubh-s1", "id": 82315953, "node_id": "MDQ6VXNlcjgyMzE1OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/82315953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kaustubh-s1", "html_url": "https://github.com/kaustubh-s1", "followers_url": "https://api.github.com/users/kaustubh-s1/followers", "following_url": "https://api.github.com/users/kaustubh-s1/following{/other_user}", "gists_url": "https://api.github.com/users/kaustubh-s1/gists{/gist_id}", "starred_url": "https://api.github.com/users/kaustubh-s1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kaustubh-s1/subscriptions", "organizations_url": "https://api.github.com/users/kaustubh-s1/orgs", "repos_url": "https://api.github.com/users/kaustubh-s1/repos", "events_url": "https://api.github.com/users/kaustubh-s1/events{/privacy}", "received_events_url": "https://api.github.com/users/kaustubh-s1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks a lot for your PR! Could you apply `make fix-copies` so that the models copied from BART or GPT-2 are auto-updated?", "> Thanks a lot for your PR! Could you apply `make fix-copies` so that the models copied from BART or GPT-2 are auto-updated?\r\n\r\nHi, just did that!", "> Thanks a lot!\r\n\r\nAll good! ✨", "Hi, @kaustubh-s1, does this change will fix model parallel for gpt2? I've just tried but got \r\n\r\n```\r\n File \"/opt/conda/envs/gpt_neox/lib/python3.9/site-packages/torch/nn/functional.py\", line 2515, in layer_norm\r\n return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)\r\n```\r\n\r\nP.S. my setup is almost same like [this](https://github.com/huggingface/transformers/issues/22569#issue-1654189111), only the following differences\r\n\r\n```python\r\ndef get_parallel_model(model_name):\r\n model = AutoModelForCausalLM.from_pretrained(\r\n model_name,\r\n device_map='auto',\r\n torch_dtype=torch.float16,\r\n low_cpu_mem_usage=True\r\n )\r\n\r\n # \r\n # setattr(model, 'model_parallel', True)\r\n # setattr(model, 'is_parallelizable', True)\r\n\r\n setattr(model, 'gradient_checkpointing', True)\r\n return model\r\n```", "> Hi, @kaustubh-s1, does this change will fix model parallel for gpt2? I've just tried but got\r\n> \r\n> ```\r\n> File \"/opt/conda/envs/gpt_neox/lib/python3.9/site-packages/torch/nn/functional.py\", line 2515, in layer_norm\r\n> return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)\r\n> RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)\r\n> ```\r\n> \r\n> P.S. my setup is almost same like [this](https://github.com/huggingface/transformers/issues/22569#issue-1654189111), only the following differences\r\n> \r\n> ```python\r\n> def get_parallel_model(model_name):\r\n> model = AutoModelForCausalLM.from_pretrained(\r\n> model_name,\r\n> device_map='auto',\r\n> torch_dtype=torch.float16,\r\n> low_cpu_mem_usage=True\r\n> )\r\n> \r\n> # \r\n> # setattr(model, 'model_parallel', True)\r\n> # setattr(model, 'is_parallelizable', True)\r\n> \r\n> setattr(model, 'gradient_checkpointing', True)\r\n> return model\r\n> ```\r\n\r\nHi @innat. It should do that ig. But I do not have a multi gpu setup so can't say for sure. I just followed the steps #22535 to move labels to same device as logits. Theoretically speaking, it should work." ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? As suggested in the #22561 moving the labels to the same device as the logits they are compared to for `bart` and `gpt-2` models This action has been referred to from #22535 ``` lm_logits = self.lm_head(outputs[0]) lm_logits = lm_logits + self.final_logits_bias.to(lm_logits.device) masked_lm_loss = None if labels is not None: labels = labels.to(lm_logits.device) loss_fct = CrossEntropyLoss() masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) ``` cc @sgugger could you review this once.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22591/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22591", "html_url": "https://github.com/huggingface/transformers/pull/22591", "diff_url": "https://github.com/huggingface/transformers/pull/22591.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22591.patch", "merged_at": 1680719838000 }
https://api.github.com/repos/huggingface/transformers/issues/22590
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22590/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22590/comments
https://api.github.com/repos/huggingface/transformers/issues/22590/events
https://github.com/huggingface/transformers/issues/22590
1,655,753,527
I_kwDOCUB6oc5isMs3
22,590
Support whisper-timestamped
{ "login": "jozefchutka", "id": 750041, "node_id": "MDQ6VXNlcjc1MDA0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/750041?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jozefchutka", "html_url": "https://github.com/jozefchutka", "followers_url": "https://api.github.com/users/jozefchutka/followers", "following_url": "https://api.github.com/users/jozefchutka/following{/other_user}", "gists_url": "https://api.github.com/users/jozefchutka/gists{/gist_id}", "starred_url": "https://api.github.com/users/jozefchutka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jozefchutka/subscriptions", "organizations_url": "https://api.github.com/users/jozefchutka/orgs", "repos_url": "https://api.github.com/users/jozefchutka/repos", "events_url": "https://api.github.com/users/jozefchutka/events{/privacy}", "received_events_url": "https://api.github.com/users/jozefchutka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We've been thinking about how to add word-level timestamps to Whisper. I still have to look at whisper-timestamped to see exactly what they're doing, but for now I'll reference https://github.com/huggingface/transformers/issues/21412 as the main issue for tracking this.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Probably this one can be closed as work on https://github.com/huggingface/transformers/pull/23205 will deliver the feature", "Closed by https://github.com/huggingface/transformers/pull/23205" ]
1,680
1,687
1,687
NONE
null
### Feature request It would be great if [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped) could be added into Transformers. ### Motivation whisper-timestamped is an extension of the [openai-whisper](https://pypi.org/project/whisper-openai/) Python package and is meant to be compatible with any version of openai-whisper. On top of openai-whisper, it provides word timestamps and gives a more accurate estimation of speech segments when transcribing. This is suitable for karaoke-style subtitles, etc. ### Your contribution Probably unable to help with this at the moment.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22590/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22590/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22589
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22589/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22589/comments
https://api.github.com/repos/huggingface/transformers/issues/22589/events
https://github.com/huggingface/transformers/pull/22589
1,655,743,999
PR_kwDOCUB6oc5NrvDi
22,589
[WIP] 🌐 [i18n-KO] Translated `sequence_classification.mdx` to Korean
{ "login": "0525hhgus", "id": 47289574, "node_id": "MDQ6VXNlcjQ3Mjg5NTc0", "avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0525hhgus", "html_url": "https://github.com/0525hhgus", "followers_url": "https://api.github.com/users/0525hhgus/followers", "following_url": "https://api.github.com/users/0525hhgus/following{/other_user}", "gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions", "organizations_url": "https://api.github.com/users/0525hhgus/orgs", "repos_url": "https://api.github.com/users/0525hhgus/repos", "events_url": "https://api.github.com/users/0525hhgus/events{/privacy}", "received_events_url": "https://api.github.com/users/0525hhgus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "There is a problem with the storage stream, so I will PR it again.. 😢 " ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Translated the `tasks/sequence_classification.mdx` file of the documentation to Korean. - The file name is `sequence_classification.mdx`, but the document name is `text classification`. Thank you in advance for your review:) Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- 제출 전 체크리스트로, 가짜연구소만의 체크리스트도 <details>로 감싸서 만들어두면 더 좋을 것 같아요. --> ## Who can review? <!-- 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22589/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22589", "html_url": "https://github.com/huggingface/transformers/pull/22589", "diff_url": "https://github.com/huggingface/transformers/pull/22589.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22589.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22588
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22588/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22588/comments
https://api.github.com/repos/huggingface/transformers/issues/22588/events
https://github.com/huggingface/transformers/pull/22588
1,655,613,836
PR_kwDOCUB6oc5NrStx
22,588
Fix a typo in one of the BLIP pretrained checkpoint names
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22588). All of your documentation changes will be reflected on that endpoint." ]
1,680
1,680
1,680
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22588/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22588", "html_url": "https://github.com/huggingface/transformers/pull/22588", "diff_url": "https://github.com/huggingface/transformers/pull/22588.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22588.patch", "merged_at": 1680702981000 }
https://api.github.com/repos/huggingface/transformers/issues/22587
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22587/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22587/comments
https://api.github.com/repos/huggingface/transformers/issues/22587/events
https://github.com/huggingface/transformers/pull/22587
1,655,469,534
PR_kwDOCUB6oc5NqzgV
22,587
Move back doctest instructions to setup.cfg
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22587). All of your documentation changes will be reflected on that endpoint." ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? As a result of #22539, the options we have for the doctests are now all ignored. This PR reverts the change for those and puts them back in `setup.cfg`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22587/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22587", "html_url": "https://github.com/huggingface/transformers/pull/22587", "diff_url": "https://github.com/huggingface/transformers/pull/22587.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22587.patch", "merged_at": 1680695599000 }
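PR 22587 above moves pytest's doctest options back into `setup.cfg`, which pytest reads from the `[tool:pytest]` section of that file. A sketch of how such a section parses (the flag values here are illustrative, not necessarily the repo's exact list):

```python
import configparser

# pytest reads its ini-style options from the [tool:pytest] section
# of setup.cfg; doctest_optionflags is one such option.
cfg = configparser.ConfigParser()
cfg.read_string("""
[tool:pytest]
doctest_optionflags = NORMALIZE_WHITESPACE ELLIPSIS
""")
flags = cfg["tool:pytest"]["doctest_optionflags"].split()
print(flags)  # ['NORMALIZE_WHITESPACE', 'ELLIPSIS']
```

The referenced PR's point is that these options are only honored when they live in a file pytest actually consults, hence the revert to `setup.cfg`.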
https://api.github.com/repos/huggingface/transformers/issues/22586
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22586/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22586/comments
https://api.github.com/repos/huggingface/transformers/issues/22586/events
https://github.com/huggingface/transformers/pull/22586
1,655,445,752
PR_kwDOCUB6oc5NquQ0
22,586
Fix PT-TF equivalence test for GPT1
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I may have tagged you slightly too early, please be slower to get to your notifications", "I will do my best to take 72 hours to get to your PR next time you ask for a review ;-)", ":handshake: " ]
1,680
1,680
1,680
MEMBER
null
This PR fixes the hidden states output from `TFOpenAIGPTDoubleHeadsModel` to have the same shapes as the PT version, and re-enables the relevant test.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22586/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22586", "html_url": "https://github.com/huggingface/transformers/pull/22586", "diff_url": "https://github.com/huggingface/transformers/pull/22586.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22586.patch", "merged_at": 1680696960000 }
https://api.github.com/repos/huggingface/transformers/issues/22585
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22585/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22585/comments
https://api.github.com/repos/huggingface/transformers/issues/22585/events
https://github.com/huggingface/transformers/pull/22585
1,655,357,175
PR_kwDOCUB6oc5NqbEw
22,585
Tests: disable `accelerate_tests` mark warnings
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
MEMBER
null
# What does this PR do? Adds the `accelerate_tests` mark to `conftest.py`, so we don't get related warnings at test time. Here's a print screen before the fix: <img width="1512" alt="Screenshot 2023-04-05 at 11 27 32" src="https://user-images.githubusercontent.com/12240844/230054803-cc95b93d-aab4-4133-baa8-7115760cd3ee.png"> And after the fix: <img width="1512" alt="Screenshot 2023-04-05 at 11 27 53" src="https://user-images.githubusercontent.com/12240844/230054894-46a9435a-43be-4ef9-b111-92d664017d13.png">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22585/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22585", "html_url": "https://github.com/huggingface/transformers/pull/22585", "diff_url": "https://github.com/huggingface/transformers/pull/22585.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22585.patch", "merged_at": 1680696807000 }
https://api.github.com/repos/huggingface/transformers/issues/22584
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22584/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22584/comments
https://api.github.com/repos/huggingface/transformers/issues/22584/events
https://github.com/huggingface/transformers/pull/22584
1,655,296,507
PR_kwDOCUB6oc5NqNqI
22,584
Seq2SeqTrainer: use unwrapped model to retrieve the generation config
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stas00 I do not have quick access to a multi-GPU setup. Would you be so kind as to double-check whether this fix solves the issue? 🙏 ", "_The documentation is not available anymore as the PR was closed or merged._", "I confirm that it fixes the first crash with 2+ gpus, the 2nd crash in eval remains.\r\n\r\n```\r\n$ PYTHONPATH=src python examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --do_train --do_eval --source_lang en --target_lang de --source_prefix 'translate English to German: ' --dataset_name stas/wmt14-en-de-pre-processed --output_dir /tmp/tst-translation --num_train_epochs 1 --per_device_train_batch_size=1 --max_train_samples 10 --overwrite_output_dir --seed 1137 --per_device_eval_batch_size 1 --predict_with_generate --fp16 --max_eval_samples 10\r\n[...]\r\n04/05/2023 10:11:48 - INFO - __main__ - *** Evaluate ***\r\n[INFO|trainer.py:3126] 2023-04-05 10:11:48,677 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3128] 2023-04-05 10:11:48,677 >> Num examples = 10\r\n[INFO|trainer.py:3131] 2023-04-05 10:11:48,677 >> Batch size = 2\r\n[INFO|configuration_utils.py:575] 2023-04-05 10:11:48,691 >> Generate config GenerationConfig {\r\n \"_from_model_config\": true,\r\n \"decoder_start_token_id\": 0,\r\n \"eos_token_id\": 1,\r\n \"pad_token_id\": 0,\r\n \"transformers_version\": \"4.28.0.dev0\"\r\n}\r\n\r\nTraceback (most recent call last):\r\n File \"examples/pytorch/translation/run_translation.py\", line 664, in <module>\r\n main()\r\n File \"examples/pytorch/translation/run_translation.py\", line 605, in main\r\n metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix=\"eval\")\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py\", line 159, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py\", line 2990, in evaluate\r\n output = eval_loop(\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py\", line 3278, in evaluation_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))\r\n File \"examples/pytorch/translation/run_translation.py\", line 546, in compute_metrics\r\n decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py\", line 3445, in batch_decode\r\n return [\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py\", line 3446, in <listcomp>\r\n self.decode(\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py\", line 3485, in decode\r\n return self._decode(\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py\", line 549, in _decode\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\nOverflowError: out of range integral type conversion attempted\r\n```\r\n\r\nThis of course can be dealt with in a separate PR since the issue appears to be totally different. In which case please remove `Fixes: ...` so that the original Issue doesn't get closed.", "Thank you for checking @stas00! PR header changed accordingly." ]
1,680
1,680
1,680
MEMBER
null
# What does this PR do? Addresses one of the issues in #22571 As the title indicates, changes the source of the generation config from `model` (wrapped model) to `self.model` (unwrapped model).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22584/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22584", "html_url": "https://github.com/huggingface/transformers/pull/22584", "diff_url": "https://github.com/huggingface/transformers/pull/22584.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22584.patch", "merged_at": 1680784198000 }
https://api.github.com/repos/huggingface/transformers/issues/22583
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22583/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22583/comments
https://api.github.com/repos/huggingface/transformers/issues/22583/events
https://github.com/huggingface/transformers/pull/22583
1,655,236,729
PR_kwDOCUB6oc5NqAy3
22,583
Add thousands separator in training summary
{ "login": "qmeeus", "id": 25608944, "node_id": "MDQ6VXNlcjI1NjA4OTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/25608944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qmeeus", "html_url": "https://github.com/qmeeus", "followers_url": "https://api.github.com/users/qmeeus/followers", "following_url": "https://api.github.com/users/qmeeus/following{/other_user}", "gists_url": "https://api.github.com/users/qmeeus/gists{/gist_id}", "starred_url": "https://api.github.com/users/qmeeus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qmeeus/subscriptions", "organizations_url": "https://api.github.com/users/qmeeus/orgs", "repos_url": "https://api.github.com/users/qmeeus/repos", "events_url": "https://api.github.com/users/qmeeus/events{/privacy}", "received_events_url": "https://api.github.com/users/qmeeus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? The logger prints a summary at the beginning of training that displays some info such as number of examples, number of parameters, total number of steps, etc. Those numbers can be quite large and difficult to read. I added a thousand separator to improve readability for the following: - num_examples - num_train_epochs - per_device_train_batch_size - total_train_batch_size - max_steps - num_trainable_params ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22583/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22583", "html_url": "https://github.com/huggingface/transformers/pull/22583", "diff_url": "https://github.com/huggingface/transformers/pull/22583.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22583.patch", "merged_at": 1680701319000 }
https://api.github.com/repos/huggingface/transformers/issues/22582
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22582/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22582/comments
https://api.github.com/repos/huggingface/transformers/issues/22582/events
https://github.com/huggingface/transformers/pull/22582
1,655,208,178
PR_kwDOCUB6oc5Np6hR
22,582
Adding support for BPE merge creation from scores instead of ids.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22582/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22582", "html_url": "https://github.com/huggingface/transformers/pull/22582", "diff_url": "https://github.com/huggingface/transformers/pull/22582.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22582.patch", "merged_at": 1680703386000 }
https://api.github.com/repos/huggingface/transformers/issues/22581
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22581/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22581/comments
https://api.github.com/repos/huggingface/transformers/issues/22581/events
https://github.com/huggingface/transformers/pull/22581
1,655,145,882
PR_kwDOCUB6oc5NptKu
22,581
docs: ko: complete `_toctree.yml`
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Here's the preview screenshot:\n\n<img src=\"https://user-images.githubusercontent.com/29195190/230030526-fcdde977-3c46-4606-8700-daecba4bd99c.jpg\" width=\"200px\">\n\n\nYellow highlighted items are complete. Hence, they do not have `(번역중)` in front of them. I hope this new approach will help my colleagues. \n\nMay you please merge this, Mr. @sgugger ? Thank you so much for your support 🙏💕", "I think this PR is a good idea to avoid wrong depth mistakes or yaml syntax errors" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? From our first week in localizing the documentation, we came across issues and git conflicts with `_toctree.yml`. This PR updates the `_toctree.yml` file for translations, making it easier for translators to locate the correct files to translate without worrying about the yaml formatting. Each document title now has `(<in translation phrase>)` added in front of it and the "local" key has been changed to `in_translation`. Translators can now use this scaffold by following these steps: 1. Edit the `local` value by copy & pasting directly from the same line number in `en/_toctree.yml`. 2. Edit the `title` value by replacing the `(<in translation phrase>) <english title>` with the translated title of each document. 3. That's it! By using this updated scaffold, translators will be able to easily identify the correct files to translate, minimizing the time spent on formatting and file location issues. We hope that this will streamline the translation process and make it more accessible to our community. Initial language starters can recreate this scaffold by following these steps: 1. Copy the `_toctree.yml` file from the `en` folder. 2. Paste it into the corresponding language folder (e.g., `ko`, `fr`, `de`). 3. Create a temporary `in_translation.mdx` file in the desired language. 4. Find & Replace `local: .*` with `local: in_translation`. 5. Find & Replace `title:` with `title: (<in translation phrase>)` where the phrase is preferrably in the desired language. For example, in Korean the phrase is "번역중" By taking the initial time to create this scaffold for your language, you will greatly reduce fellow language speakers' yaml formatting issues in the long run. Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Team HuggingFace: @sgugger, @ArthurZucker, @eunseojo May you please review this PR? Team PseudoLab: @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22581/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22581/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22581", "html_url": "https://github.com/huggingface/transformers/pull/22581", "diff_url": "https://github.com/huggingface/transformers/pull/22581.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22581.patch", "merged_at": 1680701537000 }
https://api.github.com/repos/huggingface/transformers/issues/22580
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22580/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22580/comments
https://api.github.com/repos/huggingface/transformers/issues/22580/events
https://github.com/huggingface/transformers/issues/22580
1,655,101,639
I_kwDOCUB6oc5iptjH
22,580
resume train
{ "login": "MikeDean2367", "id": 65744560, "node_id": "MDQ6VXNlcjY1NzQ0NTYw", "avatar_url": "https://avatars.githubusercontent.com/u/65744560?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MikeDean2367", "html_url": "https://github.com/MikeDean2367", "followers_url": "https://api.github.com/users/MikeDean2367/followers", "following_url": "https://api.github.com/users/MikeDean2367/following{/other_user}", "gists_url": "https://api.github.com/users/MikeDean2367/gists{/gist_id}", "starred_url": "https://api.github.com/users/MikeDean2367/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MikeDean2367/subscriptions", "organizations_url": "https://api.github.com/users/MikeDean2367/orgs", "repos_url": "https://api.github.com/users/MikeDean2367/repos", "events_url": "https://api.github.com/users/MikeDean2367/events{/privacy}", "received_events_url": "https://api.github.com/users/MikeDean2367/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Resuming training with a different setup than the one that begun it is at your own peril and is definitely not recommended or officially supported.", "Ok, thank you for your reply. I will attempt to manually skip these already trained data." ]
1,680
1,680
1,680
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-4.15.0-175-generic-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Currently, I am using 8 GPUs to train GPT. I used the `Trainer` API provided by huggingface for training. Due to the large amount of data, it is expected to train only one epoch. When I was halfway through my training, I stopped training and increased the number of GPU to 24. When I resumed training, there was no change in the number of steps trained. However, due to the increase in the number of GPUs, the global batch size will also increase, so theoretically, the overall training step should change. ### Expected behavior Is this a displayed bug? In other words, the steps required to complete an epoch during training will be less than the displayed number of steps. If not, how should I skip data that has already been trained?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22580/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22579
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22579/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22579/comments
https://api.github.com/repos/huggingface/transformers/issues/22579/events
https://github.com/huggingface/transformers/issues/22579
1,655,067,428
I_kwDOCUB6oc5iplMk
22,579
Hosted Files Compression
{ "login": "jozefchutka", "id": 750041, "node_id": "MDQ6VXNlcjc1MDA0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/750041?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jozefchutka", "html_url": "https://github.com/jozefchutka", "followers_url": "https://api.github.com/users/jozefchutka/followers", "following_url": "https://api.github.com/users/jozefchutka/following{/other_user}", "gists_url": "https://api.github.com/users/jozefchutka/gists{/gist_id}", "starred_url": "https://api.github.com/users/jozefchutka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jozefchutka/subscriptions", "organizations_url": "https://api.github.com/users/jozefchutka/orgs", "repos_url": "https://api.github.com/users/jozefchutka/repos", "events_url": "https://api.github.com/users/jozefchutka/events{/privacy}", "received_events_url": "https://api.github.com/users/jozefchutka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Answered in https://github.com/huggingface/huggingface_hub/issues/1446#issuecomment-1521893687 on why this will unfortunately not be supported anytime soon :confused: Since we have a generic issue in `huggingface_hub`, I think we can close this one." ]
1,680
1,682
1,682
NONE
null
### Feature request I am very new to hugging face, so I am not sure this is the right place to request, if not please guide me. I was thinking, the hosted files (i.e. models) could use compression like brotli. Considering its all static files this could be done once instead on per request. For example, [decoder_model_merged.onnx](https://huggingface.co/Xenova/transformers.js/blob/main/quantized/openai/whisper-tiny.en/speech2seq-lm-with-past/decoder_model_merged.onnx) has ~50MB but can be compressed ~30MB using brotli: ``` brotli decoder_model_merged.onnx -o decoder_model_merged.onnx.br -Z -f ``` ### Motivation There are many sites and online demos using huggingface cdn, fetching large models. There might be substantial reduction in traffic and waiting times if these files are compressed. Considering it follows the request headers and serves with proper response headers this will be very transparent (no code change needed) for end devs / users. ### Your contribution I am sorry, I am no expert in this field nor I have knowledge of cdn architecture used
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22579/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22579/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/22578
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22578/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22578/comments
https://api.github.com/repos/huggingface/transformers/issues/22578/events
https://github.com/huggingface/transformers/pull/22578
1,654,871,778
PR_kwDOCUB6oc5NozDH
22,578
🌐 [i18n-KO] Translated `tutorial/proprecssing.mdx` to Korean
{ "login": "sim-so", "id": 96299403, "node_id": "U_kgDOBb1piw", "avatar_url": "https://avatars.githubusercontent.com/u/96299403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sim-so", "html_url": "https://github.com/sim-so", "followers_url": "https://api.github.com/users/sim-so/followers", "following_url": "https://api.github.com/users/sim-so/following{/other_user}", "gists_url": "https://api.github.com/users/sim-so/gists{/gist_id}", "starred_url": "https://api.github.com/users/sim-so/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sim-so/subscriptions", "organizations_url": "https://api.github.com/users/sim-so/orgs", "repos_url": "https://api.github.com/users/sim-so/repos", "events_url": "https://api.github.com/users/sim-so/events{/privacy}", "received_events_url": "https://api.github.com/users/sim-so/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sim-so keep it up 👍 :-)\r\n", "Some updates!\r\n- Newly translated two parts of this documentation: `Build Tensors` and `Audio`\r\n- Revised the sentence with the feedback as below:\r\n`토크나이저가 두 개의 특수한 토큰(분류 토큰 CLS와 구분 토큰 SEP)을 문장에 추가했습니다.`\r\n- Translated and revised all `feature extractor` to `특징 추출기` based on TTA.\r\n\r\nI am going to finish it by this Sunday. Thank you all in advance! :smile:", "I translated all of this document.\r\nThank you in advance for your review! 😉 ", "다른 문서들을 참고하여 일부 단어의 번역어를 모두 변경했습니다.\r\n- argument -> 인수\r\n- feature extractor -> 특성 추출기\r\n- method -> 메소드\r\n- separator([SEP]) -> 분할 토큰", "Could you review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,680
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Translated ~partially~ the documentation `tutorial/proprecessing.mdx` to Korean. - [x] Proprecessing 전처리 - [x] Natural Language Processing 자연어처리 - [x] Pad 패딩 - [x] Trancation 생략 - [x] Build Tensor 텐서 만들기 - [x] Audio 오디오 - [x] Computer Vision 컴퓨터 비전 - [x] Pad 패딩 - [x] Multimodal 멀티모달 Thank you in advance for your review! Part of #20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Team PsedoLab, could you review this PR? @wonhyeongseo @0525hhgus @kihoon71 @gabrielwithappy, @HanNayeoniee, @jungnerd <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22578/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22578/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22578", "html_url": "https://github.com/huggingface/transformers/pull/22578", "diff_url": "https://github.com/huggingface/transformers/pull/22578.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22578.patch", "merged_at": 1681471604000 }
https://api.github.com/repos/huggingface/transformers/issues/22577
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22577/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22577/comments
https://api.github.com/repos/huggingface/transformers/issues/22577/events
https://github.com/huggingface/transformers/issues/22577
1,654,848,556
I_kwDOCUB6oc5iovws
22,577
BeitFeatureExtractor no longer works with grayscale images "unsupported number of image dimensions"
{ "login": "grantdelozier", "id": 4543334, "node_id": "MDQ6VXNlcjQ1NDMzMzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4543334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/grantdelozier", "html_url": "https://github.com/grantdelozier", "followers_url": "https://api.github.com/users/grantdelozier/followers", "following_url": "https://api.github.com/users/grantdelozier/following{/other_user}", "gists_url": "https://api.github.com/users/grantdelozier/gists{/gist_id}", "starred_url": "https://api.github.com/users/grantdelozier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/grantdelozier/subscriptions", "organizations_url": "https://api.github.com/users/grantdelozier/orgs", "repos_url": "https://api.github.com/users/grantdelozier/repos", "events_url": "https://api.github.com/users/grantdelozier/events{/privacy}", "received_events_url": "https://api.github.com/users/grantdelozier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "Hi @grantdelozier, thanks for reporting this. \r\n\r\nUnfortunately for some of the image transformations, grayscale images aren't currently compatible. Handling the different input formats in a more robust way is something I'm currently working on. Having these issues reported is really useful to know how to prioritise and which test cases that should pass. \r\n\r\nAt the moment, grayscale images / masks are handled in a bit of a hacky way by adding an axis e.g. [here in mask2former](https://github.com/huggingface/transformers/blob/48706c7178127e7bcd6cccd90d941801e071a4a2/src/transformers/models/mask2former/image_processing_mask2former.py#L611). \r\n\r\nTo understand the previous behaviour, could you share the feature extractor config and version of transformers which was working? The reason I ask is that when testing on commit `83e5a1060` - which added the BeiT model, the feature extractor also failed with a grayscale image input. ", "Hi Amy,\r\n\r\nFirst, thanks for all the awesome work in the transformers project!\r\n\r\nMy last known version transformers where grayscale worked is `transformers==4.24`\r\n\r\nHere is my beit config:\r\n```\r\n\r\n{\r\n \"architectures\": [\r\n \"BeitForMaskedImageModeling\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.0,\r\n \"drop_path_rate\": 0.1,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.0,\r\n \"hidden_size\": 768,\r\n \"image_size\": [512, 512],\r\n \"size\": [512, 512],\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"layer_scale_init_value\": 0.1,\r\n \"model_type\": \"beit\",\r\n \"num_attention_heads\": 12,\r\n \"num_channels\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"patch_size\": 16,\r\n \"semantic_loss_ignore_index\": 255,\r\n \"torch_dtype\": \"float32\",\r\n \"use_absolute_position_embeddings\": true,\r\n \"use_auxiliary_head\": true,\r\n \"use_mask_token\": true,\r\n \"use_mean_pooling\": true,\r\n 
\"use_relative_position_bias\": true,\r\n \"use_shared_relative_position_bias\": false,\r\n \"do_center_crop\": false,\r\n \"vocab_size\": 8192,\r\n \"image_mean\": [0.5],\r\n \"image_std\": [0.5]\r\n}\r\n```\r\n", "@grantdelozier Thanks for sharing the config :) I'll use this as reference grayscale config to make sure everything works as expected in the fixes for accepting grayscale images. ", "Just ran into this issue today as well, with both `ViTFeatureExtractor` and `MobileViTFeatureExtractor` (but I believe this is an issue with the base class anyway).\r\n\r\n@amyeroberts Is there a problem with just expanding the dimensions of the image after converting to a numpy array / tensor?\r\n\r\ne.g.,:\r\n```bash\r\n>>> example['image']\r\n<PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x7F0480FA54B0>\r\n>>> feature_extractor(example['image'], return_tensors='pt')\r\n...\r\nValueError: Unsupported number of image dimensions: 2\r\n>>> feature_extractor(np.expand_dims(np.array(example['image']), 0), return_tensors='pt')\r\n{'pixel_values': tensor([[[[0., 0., 0., ..., 0., 0., 0.],\r\n [0., 0., 0., ..., 0., 0., 0.],\r\n [0., 0., 0., ..., 0., 0., 0.],\r\n ...,\r\n [0., 0., 0., ..., 0., 0., 0.],\r\n [0., 0., 0., ..., 0., 0., 0.],\r\n [0., 0., 0., ..., 0., 0., 0.]]]])}\r\n\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,693
1,693
NONE
null
### System Info ``` Python 3.7.13 (default, Mar 29 2022, 02:18:16) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers >>> transformers.__version__ '4.27.4' ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I recently had to update my transformers version due to a cross dependency issue and my image preprocesssing code stopped working. ``` from PIL import Image from transformers import BeitFeatureExtractor pil_image = Image.open('sample_rgb_image.png').convert('L') #RGB image and convert to grayscale image_feature_extractor = BeitFeatureExtractor.from_pretrained('/opt/ml/configs/beit-config.json') pixel_input_ids = image_feature_extractor(pil_image, return_tensors="pt")['pixel_values'] ``` Produces this error: ``` /opt/conda/lib/python3.7/site-packages/transformers/models/beit/feature_extraction_beit.py:31: FutureWarning: The class BeitFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use BeitImageProcessor instead. 
FutureWarning, Traceback (most recent call last): File "test_beit.py", line 8, in <module> pixel_input_ids = image_feature_extractor(pil_image, return_tensors="pt")['pixel_values'] File "/opt/conda/lib/python3.7/site-packages/transformers/models/beit/image_processing_beit.py", line 359, in __call__ return super().__call__(images, segmentation_maps=segmentation_maps, **kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/image_processing_utils.py", line 458, in __call__ return self.preprocess(images, **kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/models/beit/image_processing_beit.py", line 481, in preprocess for img in images File "/opt/conda/lib/python3.7/site-packages/transformers/models/beit/image_processing_beit.py", line 481, in <listcomp> for img in images File "/opt/conda/lib/python3.7/site-packages/transformers/models/beit/image_processing_beit.py", line 314, in _preprocess_image image_std=image_std, File "/opt/conda/lib/python3.7/site-packages/transformers/models/beit/image_processing_beit.py", line 271, in _preprocess image = self.resize(image=image, size=size, resample=resample) File "/opt/conda/lib/python3.7/site-packages/transformers/models/beit/image_processing_beit.py", line 176, in resize image, size=(size["height"], size["width"]), resample=resample, data_format=data_format, **kwargs File "/opt/conda/lib/python3.7/site-packages/transformers/image_transforms.py", line 290, in resize data_format = infer_channel_dimension_format(image) if data_format is None else data_format File "/opt/conda/lib/python3.7/site-packages/transformers/image_utils.py", line 159, in infer_channel_dimension_format raise ValueError(f"Unsupported number of image dimensions: {image.ndim}") ValueError: Unsupported number of image dimensions: 2 ``` ### Expected behavior In the past I was always able to give a grayscale PIL image as input to feature extractor. Is this input type no longer supported?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22577/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22576
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22576/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22576/comments
https://api.github.com/repos/huggingface/transformers/issues/22576/events
https://github.com/huggingface/transformers/pull/22576
1,654,558,700
PR_kwDOCUB6oc5NnwMP
22,576
Generate: `TextIteratorStreamer` timeout
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
MEMBER
null
# What does this PR do? Learning the hard way: exception in a thread that feeds an iterator = iterator hangs forever. This PR adds a timeout to the queue so that we can protect ourselves from hanging streaming generation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22576/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22576", "html_url": "https://github.com/huggingface/transformers/pull/22576", "diff_url": "https://github.com/huggingface/transformers/pull/22576.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22576.patch", "merged_at": 1680685067000 }
https://api.github.com/repos/huggingface/transformers/issues/22575
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22575/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22575/comments
https://api.github.com/repos/huggingface/transformers/issues/22575/events
https://github.com/huggingface/transformers/pull/22575
1,654,538,854
PR_kwDOCUB6oc5Nnr63
22,575
Add GPTBigCode model (Optimized GPT2 with MQA from Santacoder & BigCode)
{ "login": "jlamypoirier", "id": 18523627, "node_id": "MDQ6VXNlcjE4NTIzNjI3", "avatar_url": "https://avatars.githubusercontent.com/u/18523627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlamypoirier", "html_url": "https://github.com/jlamypoirier", "followers_url": "https://api.github.com/users/jlamypoirier/followers", "following_url": "https://api.github.com/users/jlamypoirier/following{/other_user}", "gists_url": "https://api.github.com/users/jlamypoirier/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlamypoirier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlamypoirier/subscriptions", "organizations_url": "https://api.github.com/users/jlamypoirier/orgs", "repos_url": "https://api.github.com/users/jlamypoirier/repos", "events_url": "https://api.github.com/users/jlamypoirier/events{/privacy}", "received_events_url": "https://api.github.com/users/jlamypoirier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lvwerra @harm-devries\r\n(Replaces #21253)", "Code on the Hub is fine too and we are adding better support for it every day :-)", "Hi @sgugger, the next generation of the model will also support this architecture so there should also be significantly more usage. Discussed this also with @LysandreJik previously, what do you think?", "_The documentation is not available anymore as the PR was closed or merged._", "If you prefer @lvwerra and if the architecture is frozen: we won't be able to accommodate changes after it's merged and released in Transformers (no breaking changes in Transformers), whereas it's easier to quickly experiment with code on the Hub. If you feel the model is mature enough and it's time, I'm not opposed :-)", "Thanks a lot for your feedback! Just addressed them all, \r\nSmall note that the cpu/disk offload seem to not work on the testing suite, but I think it is related to the corner case issues we faced with tiny T5 models, as the test pass for the `GPTBigCodModelTest` but does not pass for the `GPTBigCodeMQAModelTest`.\r\nI will also make sure doctests pass before merging", "Please wait a bit before merging, I'll do a final check for the latest changes", "I did a few minor tweaks, I'm OK for merging if it works for everyone. (Assuming CI passes)", "any updates on supporting flash attention ? or do we have a different PR to track it", "cc @younesbelkada I think this is supported in [BetterTransformers](https://huggingface.co/docs/optimum/bettertransformer/tutorials/convert) no? ", "Indeed this should go into `BetterTransformer` API on optimum library: https://github.com/huggingface/optimum \r\nOnce the feature is added there, you can just call `model.to_bettertransformer()` and benefit from flash-attention backend. @bharadwajymg would you mind opening a ticket there and request for BetterTransformer support for GPTBigCode model ? thanks!" ]
1,680
1,689
1,681
CONTRIBUTOR
null
The GPTBigcode model from BigCode. It is the same model as GPT2, with: * Added support for Multi-Query Attention (https://arxiv.org/abs/1911.02150) * A large number of optimizations, mostly targeting inference but also useful in training. Other than MQA, it's the same model as GPT2, just a new implementation (though it's not numerically equivalent and the checkpoints are not compatible) The optimizations (I might be missing some): * Use `gelu_pytorch_tanh` (see #21344 #21345) * Avoid unnecessary synchronizations (added to GPT2 in #20061, but wasn't in the original santacoder). * Use Linear layers instead of Conv1D (good speedup but makes the checkpoints incompatible). * Merge `_attn` and `_upcast_and_reordered_attn`. Always merge the matmul with scaling. Rename `reorder_and_upcast_attn`->`attention_softmax_in_fp32` * Rename `scale_attn_by_inverse_layer_idx`-> `scale_attention_softmax_in_fp32` and change its behavior to match Megatron-LM (divide by layer_idx in fp16, then multiply in fp32). * Cache the attention mask value to avoid recreating it every time. * Use jit to fuse the attention fp32 casting, masking, softmax, and scaling. * Combine the attention and causal masks into a single one, pre-computed for the whole model instead of every layer. * Merge the key and value caches into one (this changes the format of `layer_past`/ `present`, does it risk creating problems?) * Use the memory layout (self.num_heads, 3, self.head_dim) instead of (3, self.num_heads, self.head_dim) for the QKV tensor with MHA. (prevents an overhead with the merged key and values, but makes the checkpoints incompatible). Excluded from this PR (optional/opt-in features, could be added later): * CPU optimization for inference, aka InferenceRunner (huge speedup for generation with pre-allocated tensors, pre-computed views and support; faster than Deepspeed, but too experimental to add now) * KV cache pre-allocation and padding. 
(Same reason) * MQA with separate Q and KV (MQA2 in bigcode, a bit faster for training , slower for inference) * FlashAttention (planning to add support in near future) * Conversion script for Megatron weights (the MQA part needs the BigCode fork of Megatron) TODO: * Update/fix the tests * Update the docs (should be mostly ok by now) * Address the remaining circleci issues (mostly related to the tests)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22575/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22575/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22575", "html_url": "https://github.com/huggingface/transformers/pull/22575", "diff_url": "https://github.com/huggingface/transformers/pull/22575.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22575.patch", "merged_at": 1681117042000 }
https://api.github.com/repos/huggingface/transformers/issues/22574
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22574/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22574/comments
https://api.github.com/repos/huggingface/transformers/issues/22574/events
https://github.com/huggingface/transformers/pull/22574
1,654,488,617
PR_kwDOCUB6oc5Nng1a
22,574
aml vision benchmark
{ "login": "prathikr", "id": 31260940, "node_id": "MDQ6VXNlcjMxMjYwOTQw", "avatar_url": "https://avatars.githubusercontent.com/u/31260940?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prathikr", "html_url": "https://github.com/prathikr", "followers_url": "https://api.github.com/users/prathikr/followers", "following_url": "https://api.github.com/users/prathikr/following{/other_user}", "gists_url": "https://api.github.com/users/prathikr/gists{/gist_id}", "starred_url": "https://api.github.com/users/prathikr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prathikr/subscriptions", "organizations_url": "https://api.github.com/users/prathikr/orgs", "repos_url": "https://api.github.com/users/prathikr/repos", "events_url": "https://api.github.com/users/prathikr/events{/privacy}", "received_events_url": "https://api.github.com/users/prathikr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your PR, but we are not interested in this modification of this example.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22574). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,685
1,685
CONTRIBUTOR
null
aml vision benchmark
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22574/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22574", "html_url": "https://github.com/huggingface/transformers/pull/22574", "diff_url": "https://github.com/huggingface/transformers/pull/22574.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22574.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22573
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22573/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22573/comments
https://api.github.com/repos/huggingface/transformers/issues/22573/events
https://github.com/huggingface/transformers/issues/22573
1,654,263,853
I_kwDOCUB6oc5imhAt
22,573
Convert T5x "Scalable T5" models to PyTorch
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @agemagician ,\r\n\r\nthanks for pinging! Could you confirm that the new umT5 checkpoints have this new scalable format:\r\n\r\nhttps://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5x/umt5_small/checkpoint_1000000?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false\r\n\r\n(Yeah, they put the wrong link in the [overview table](https://github.com/google-research/t5x/blob/main/docs/models.md#umt5-checkpoints) for umT5, I will also prepare a PR fixing that...)", "Update:\r\n\r\nI wrote the umT5X conversion script - and it conversion seems to work.\r\n\r\nHere's the initial draft:\r\n\r\nhttps://gist.github.com/stefan-it/5d6a4ec89e7ad97181983881434cb4eb\r\n\r\n@agemagician Could you please check if it's working with your checkpoints?\r\n\r\nI placed that file in `/home/stefan/Repositories/transformers/src/transformers/models/t5`.\r\n\r\nAnd installed latest `t5x` version (Git main branch) and latest `jaxlib`.\r\n\r\nI tried it with umT5 Small checkpoints:\r\n\r\n```bash\r\ngsutil -o GSUtil:parallel_composite_upload_threshold=150M -m cp -r gs://t5-data/pretrained_models/t5x/umt5_small/checkpoint_1000000 .\r\n```\r\n\r\nThen you can call the script with:\r\n\r\n```bash\r\npython3 convert_umt5x_checkpoint_to_flax.py --config_name google/mt5-small --t5x_checkpoint_path ~/Dokumente/umt5/checkpoint_1000000 --flax_dump_folder_path ./exported\r\n```\r\n\r\n-> It's important that the `config_name` matches architecture size of the checkpoint.\r\n\r\nCaveats: I will of course do some downstream tasks experiments to see if conversion works. If @agemagician has a working evaluation pipeline it would be great to hear some feedback of the performance! 
I will work on the conversion script later - need some sleep now.", "Hi @stefan-it ,\r\n\r\nThanks a lot for your quick reply.\r\n\r\nI have created a small random model based on the new scalable architecture like umT5X to check the conversion script, which was converted successfully.\r\nHowever, I debugged the code to make sure it was converted correctly, and I think it was converted incorrectly.\r\n\r\nI created a small model based on this configuration:\r\n```\r\n{\r\n \"_name_or_path\": \"./\",\r\n \"architectures\": [\r\n \"T5ForConditionalGeneration\"\r\n ],\r\n \"d_ff\": 16,\r\n \"d_kv\": 6,\r\n \"d_model\": 8,\r\n \"decoder_start_token_id\": 0,\r\n \"dense_act_fn\": \"silu\",\r\n \"dropout_rate\": 0.0,\r\n \"eos_token_id\": 1,\r\n \"feed_forward_proj\": \"gated-silu\",\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"is_gated_act\": true,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"num_decoder_layers\": 3,\r\n \"num_heads\": 4,\r\n \"num_layers\": 3,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_max_distance\": 128,\r\n \"relative_attention_num_buckets\": 64,\r\n \"tie_word_embeddings\": false,\r\n \"torch_dtype\": \"float32\",\r\n \"transformers_version\": \"4.26.0\",\r\n \"use_cache\": true,\r\n \"vocab_size\": 256\r\n}\r\n```\r\n\r\nI also checked the gin file to make sure it is similar :\r\n```\r\nnetwork.T5Config.dropout_rate = %DROPOUT_RATE\r\nnetwork.T5Config.dtype = 'bfloat16'\r\nnetwork.T5Config.emb_dim = 8\r\nnetwork.T5Config.head_dim = 6\r\nnetwork.T5Config.logits_via_embedding = False\r\nnetwork.T5Config.mlp_activations = ('silu', 'linear')\r\nnetwork.T5Config.mlp_dim = 16\r\nnetwork.T5Config.num_decoder_layers = 3\r\nnetwork.T5Config.num_encoder_layers = 3\r\nnetwork.T5Config.num_heads = 4\r\nnetwork.T5Config.remat_policy = 'minimal'\r\nnetwork.T5Config.scan_layers = True\r\nnetwork.T5Config.vocab_size = 256\r\n```\r\n\r\nThen I added the following print statement 
to check the dimensions :\r\n```\r\n config = T5Config.from_pretrained(config_name)\r\n flax_model = FlaxT5ForConditionalGeneration(config=config)\r\n t5x_model = checkpoints.load_t5x_checkpoint(t5x_checkpoint_path)\r\n\r\n print(config.num_layers)\r\n print(len(t5x_model[\"target\"][\"encoder\"][\"encoder\"][\"attention\"][\"key\"][\"kernel\"]))\r\n print(t5x_model[\"target\"][\"encoder\"][\"encoder\"][\"attention\"][\"key\"][\"kernel\"][0].shape)\r\n print(flax_model.params[\"encoder\"][\"block\"][str(0)][\"layer\"][\"0\"][\"SelfAttention\"][\"k\"][\r\n \"kernel\"\r\n ].shape)\r\n```\r\n\r\nThe output is:\r\n```\r\n3\r\n8\r\n(3, 4, 6)\r\n(8, 24)\r\n```\r\nAs you can see, the current conversion tries to copy three layers while the checkpoint shows 8.\r\nAlso, you can see the dimension of the selected layer doesn't match (3, 4, 6) vs (8, 24).\r\n\r\nIt seems the new checkpoints are stored based on the following order:\r\n1. emb_dim - > num_layers -> num_heads -> head_dim\r\n\r\nSo, I think we either need to copy a slice during every iteration or rearrange the parameters.\r\nI think we should also have an assert check to make sure both destination and original parameters have the same size.\r\n", "To make it easy for you to debug it, I have created a repo contains this small model which should accelerate testing and debugging :\r\n[agemagician/scalable_t5x_tiny_test](https://huggingface.co/agemagician/scalable_t5x_tiny_test/tree/main)", "Ah, yeah length/shape checks would be the next thing that I would have tried! Many thanks for your feedback, I will work on it today! Also thanks for uploading your checkpoints!!", "Great, I will wait for your updated version.\r\n\r\nMaybe at the end, we could meet and celebrate since we both live in Munich 😉 ", "Hey @agemagician ,\r\n\r\nyeah really good idea :hugs: \r\n\r\nI read through the code and compared the \"normal\" t5x `layer.py` vs. 
scaled t5x `layer.py`.\r\n\r\nAs you already noticed in the (4, 6) vs (24) notation: old t5x used a `joined_kv`, whereas scaled t5 uses `heads` and `kv` in separate variables. This joining stuff is also \"explained\" in the [readme](https://github.com/google-research/t5x/blob/main/docs/usage/partitioning.md#canonical-logical-axis-names) - markdown in that table is broken, here's a better overview:\r\n\r\n```bash\r\nFor \"heads * kv\" fused dimension of attention matrices,\r\nwhen the kernel is reshaped such that \"heads\" and \"kv\"\r\nare packed in the same dimension. \r\n```\r\n\r\nSo I will try to reshape it to get a `joined_kv`!\r\n\r\n**Update**: yes! Transposing and reshaping yields the correct shape now!!", "Hi @agemagician ,\r\n\r\nI updated the gist: https://gist.github.com/stefan-it/5d6a4ec89e7ad97181983881434cb4eb\r\n\r\nConversion script has now a shape check (it compares the shape of the init. FLAX model with the shape of read T5X checkpoint model).\r\n\r\nI will do some more tests after I got some sleep -> hopefully on downstream tasks to test the performance.\r\n\r\nPlease also test the new version of the script :hugs: ", "Amazing work @stefan-it 👍 \r\n\r\nI went through the code and tested it, and I believe it should lead to a correct conversion.\r\n\r\nIt was a smart idea to use transpose to correct the order and then reshape. The only drawback is that we have to store 0.5x additional memory of the model, either encoder or decoder, during the weights copying process. So, this might be a bit problematic with very large models. However, given this is readable code, I think we should stick with it :)\r\n\r\nYes, I agree that the next step should be a downstream task test, before merge this script to HF.", "I think we should definitely support this is the t5x -> pytorch conversion script! 
Maybe @ArthurZucker here as well ", "Great, thanks a lot @patrickvonplaten for joining forces 😄 \r\n\r\n@stefan-it , I have created a small Colab example to test the model output vs mt5, which should give somehow a similar output:\r\nhttps://colab.research.google.com/drive/1QrqxNdIK7ugQ3FC8tqxUqZZwP0zdvYE4?usp=sharing\r\n\r\nHowever, the output from the umt5 model is garbage compared to mt5 for the following input:\r\n`\"Wikipedia is a <extra_id_0>\"`\r\numt5:\r\n`<pad>xictoarelor nhaulated辙ktör betroffen syntet Undesagrado硼颤oplasm betroffen nhau痍剖 المختلفةrieks`\r\nmt5:\r\n`<pad> <extra_id_0> political encyclopedia</s>`\r\n\r\nI have checked the paper in case there is something different, and indeed, there is a difference in the architecture:\r\n```\r\nC ADDITIONAL TRAINING DETAILS\r\nThe model architectures used in this study are the same as mT5 models, except that relative position embeddings are not shared across layers. In all of our models, the vocabulary size is 256,000\r\nsubwords, and byte-level fallback is enabled, so unknown tokens are broken down into UTF-8 bytes.\r\nWe use the T5X library (Roberts et al., 2022) to train the models using Google Cloud TPUs. For\r\npretraining, we use Adafactor optimizer (Shazeer & Stern, 2018) with a constant learning rate of\r\n0.01 in the first 10,000 steps and inverse square root decay afterwards. For finetuning, we use\r\nAdafactor with a constant learning rate of 5e−5. Unlike mT5, we do not use loss normalization\r\nfactor. Instead we use the number of real target tokens as the effective loss normalization.\r\nFinally, we do not factorize the second moment of the Adafactor states and we also use momentum,\r\nneither of which are used in T5 and mT5 studies.\r\n```\r\n\r\nSo we can't simply use the current HF mt5 model architecture as it is.\r\n\r\n@patrickvonplaten, any thoughts on how not to share relative position embeddings across layers on mt5 model script ?", "Many thanks for that Notebook! 
Make things a bit easier - I've also converted the model to PyTorch incl. vocab and uploaded it on the hub:\r\n\r\nhttps://huggingface.co/stefan-it/umt5-small/tree/main\r\n\r\nI noticed one difference - I think it was in `t5x_model[\"target\"][\"decoder\"][\"decoder\"][\"relpos_bias\"][\"rel_embedding\"]` and yeah... it corresponds to the relative position embeddings, oh no!", "Looking more into \"network.py\" for both t5 and scalable_t5, I found it is true what is mentioned in the paper.\r\n\r\nOn t5, they define the relative embedding once, then they call it on each encoder layer:\r\nhttps://github.com/google-research/t5x/blob/main/t5x/examples/t5/network.py#L56\r\n\r\nOn scalable_t5, they define the relative embedding on each encoder layer separately:\r\nhttps://github.com/google-research/t5x/blob/main/t5x/examples/scalable_t5/network.py#L64\r\n\r\nThe same goes for the decoder.\r\n\r\nSo the current implementation of mt5 at huggingface can't work directly with the new umt5 because at mt5 we only have a single shared relative bias, while on umt5 we have a separate relative bias for each layer.", "Yeah, this architecture breaking change is really annoying! 
It means a lot of copying of code from T5 I guess...", "But this issue is a good pointer where to perform some modifications (in a new umT5 model implementation):\r\n\r\nhttps://github.com/huggingface/transformers/issues/13397", "> But this issue is a good pointer where to perform some modifications (in a new umT5 model implementation):\r\n> \r\n> #13397\r\n\r\nyes, I am already working on a solution for that :)\r\nI will make a PR today that allows umt5 to work with separate relative bias using mt5 code base without the need of a new model.", "I have made the pull request :\r\nhttps://github.com/huggingface/transformers/pull/22613\r\n\r\nAll we need is to set the following parameter in the config :\r\nshare_relative_attention_bias = False", "Hi @agemagician ,\r\n\r\ndo you see the `relative_attention_bias` in all layers? I'm using the PR and it shows:\r\n\r\n```\r\nT5Config {\r\n \"_name_or_path\": \"./\",\r\n \"architectures\": [\r\n \"T5ForConditionalGeneration\"\r\n ],\r\n \"d_ff\": 1024,\r\n \"d_kv\": 64,\r\n \"d_model\": 512,\r\n \"decoder_start_token_id\": 0,\r\n \"dense_act_fn\": \"gelu_new\",\r\n \"dropout_rate\": 0.1,\r\n \"eos_token_id\": 1,\r\n \"feed_forward_proj\": \"gated-gelu\",\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"is_gated_act\": true,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"num_decoder_layers\": 8,\r\n \"num_heads\": 6,\r\n \"num_layers\": 8,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_max_distance\": 128,\r\n \"relative_attention_num_buckets\": 32,\r\n \"share_relative_attention_bias\": false,\r\n \"tie_word_embeddings\": false,\r\n \"tokenizer_class\": \"T5Tokenizer\",\r\n \"torch_dtype\": \"float32\",\r\n \"transformers_version\": \"4.28.0.dev0\",\r\n \"use_cache\": true,\r\n \"vocab_size\": 256384\r\n}\r\n\r\nNo GPU/TPU found, falling back to CPU. 
(Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)\r\nLayer: 0\r\ndict_keys(['k', 'o', 'q', 'relative_attention_bias', 'v'])\r\nLayer: 1\r\ndict_keys(['k', 'o', 'q', 'v'])\r\nLayer: 2\r\ndict_keys(['k', 'o', 'q', 'v'])\r\nLayer: 3\r\ndict_keys(['k', 'o', 'q', 'v'])\r\nLayer: 4\r\ndict_keys(['k', 'o', 'q', 'v'])\r\nLayer: 5\r\ndict_keys(['k', 'o', 'q', 'v'])\r\nLayer: 6\r\ndict_keys(['k', 'o', 'q', 'v'])\r\nLayer: 7\r\ndict_keys(['k', 'o', 'q', 'v'])\r\n```", "But it is there when I'm using PyTorch! I can see a difference between `config.share_relative_attention_bias` = `True` or `False`, but not with Flax implementation at the moment!", "> share_relative_attention_bias\r\n\r\nhmmm, checking ..", "Hmm, so @sgugger asked to create a separate model for that.\r\nI will do it and share a new PR with u.", "@agemagician please let me know if you need some help with that :hugs: ", "Closed by #24477 " ]
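The `joined_kv` fix discussed in this thread — packing the separate `heads` and `kv` axes of the scalable_t5 checkpoint back into one fused axis — comes down to a reshape, with a transpose first when the checkpoint stores the axes in a different order. A minimal hypothetical NumPy sketch (the dimensions are the umt5-small values from the config above, `d_model=512`, `num_heads=6`, `d_kv=64`; the real script reads these arrays from the T5X checkpoint):

```python
import numpy as np

embed, heads, kv = 512, 6, 64  # d_model, num_heads, d_kv

# scalable_t5 keeps "heads" and "kv" as separate axes: (embed, heads, kv)
separate = np.arange(embed * heads * kv, dtype=np.float32).reshape(embed, heads, kv)

# the conversion target expects the fused "joined_kv" axis: (embed, heads * kv)
joined = separate.reshape(embed, heads * kv)
assert joined.shape == (512, 384)

# if a checkpoint stored (heads, embed, kv) instead, transpose back first so
# the fused axis still concatenates the per-head blocks in the right order
alt = separate.transpose(1, 0, 2)                      # (heads, embed, kv)
joined_again = alt.transpose(1, 0, 2).reshape(embed, heads * kv)
assert np.array_equal(joined, joined_again)
```

The shape check added to the gist then just compares each fused array's shape against the initialized FLAX model's parameter of the same name.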
1,680
1,688
1,683
CONTRIBUTOR
null
### Feature request The Google T5X library has a new model architecture called "scalable_t5" based on T5; the main difference is that it supports giant-model training. It supports Jax Scan and Rematerialization / Checkpointing, allowing it to load and train giant models on TPU or GPU. Links: https://github.com/google-research/t5x/tree/main/t5x/examples/scalable_t5 It has the same architecture as T5, but the checkpoints are stored differently: rather than storing each layer in a separate folder, it stores all layers of the encoder or the decoder in a single folder. This makes the current T5 converters fail. ### Motivation Training large models like PaLM on TPU pods requires this specific architecture. Unfortunately, models trained with this architecture are not yet convertible to Hugging Face format, which means the community can't use such models at Hugging Face. Pinging: @patrickvonplaten @stefan-it @bastings @ArthurZucker ### Your contribution None
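The layout difference described above — all layers stored together rather than one sub-folder per layer — follows, to my understanding, from T5X's use of `jax.lax.scan` over layers, which stacks every layer's parameters along a leading axis. A hypothetical NumPy sketch of un-stacking such a parameter back into the per-layer tensors the old converters expect (array shapes are illustrative only):

```python
import numpy as np

# In a scan-based ("scalable_t5") checkpoint, one encoder parameter holds
# every layer, stacked along a leading "layers" axis.
num_layers, d_model, joined_kv = 8, 512, 384
stacked = np.zeros((num_layers, d_model, joined_kv))

# Converting to the per-layer layout used by the old T5 converters is
# just indexing that leading axis, one slice per layer:
per_layer = {f"layers_{i}": stacked[i] for i in range(num_layers)}
assert per_layer["layers_0"].shape == (d_model, joined_kv)
```

A conversion script would then map each `layers_{i}` slice onto the corresponding PyTorch layer's state dict.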
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22573/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22572
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22572/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22572/comments
https://api.github.com/repos/huggingface/transformers/issues/22572/events
https://github.com/huggingface/transformers/issues/22572
1,654,248,719
I_kwDOCUB6oc5imdUP
22,572
Informer not working on basic example
{ "login": "SlimakSlimak", "id": 65352677, "node_id": "MDQ6VXNlcjY1MzUyNjc3", "avatar_url": "https://avatars.githubusercontent.com/u/65352677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SlimakSlimak", "html_url": "https://github.com/SlimakSlimak", "followers_url": "https://api.github.com/users/SlimakSlimak/followers", "following_url": "https://api.github.com/users/SlimakSlimak/following{/other_user}", "gists_url": "https://api.github.com/users/SlimakSlimak/gists{/gist_id}", "starred_url": "https://api.github.com/users/SlimakSlimak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SlimakSlimak/subscriptions", "organizations_url": "https://api.github.com/users/SlimakSlimak/orgs", "repos_url": "https://api.github.com/users/SlimakSlimak/repos", "events_url": "https://api.github.com/users/SlimakSlimak/events{/privacy}", "received_events_url": "https://api.github.com/users/SlimakSlimak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @kashif ", "thanks @sgugger having a look!", "@SlimakSlimak just to test: the model works if you comment out the `static_real_features` argument to the `model`?\r\n", "Hi, yes that works, thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### System Info I am trying the minimal Informer example code from the Hugging Face website: https://huggingface.co/docs/transformers/model_doc/informer however I get this error when running it: ``` File "C:\Users\User\anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (1536x23 and 22x32) ``` I am using version 4.27.4 of the transformers library. The code used from the website: ``` from huggingface_hub import hf_hub_download import torch from transformers import InformerModel file = hf_hub_download( repo_id="kashif/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset" ) batch = torch.load(file) model = InformerModel.from_pretrained("huggingface/informer-tourism-monthly") # during training, one provides both past and future values # as well as possible additional features outputs = model( past_values=batch["past_values"], past_time_features=batch["past_time_features"], past_observed_mask=batch["past_observed_mask"], static_categorical_features=batch["static_categorical_features"], static_real_features=batch["static_real_features"], future_values=batch["future_values"], future_time_features=batch["future_time_features"], ) last_hidden_state = outputs.last_hidden_state ``` The full traceback: ``` outputs = model( File "C:\Users\User\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\User\anaconda3\lib\site-packages\transformers\models\informer\modeling_informer.py", line 1870, in forward outputs = self.model( File "C:\Users\User\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\User\anaconda3\lib\site-packages\transformers\models\informer\modeling_informer.py", line 1704, in forward encoder_outputs = self.encoder( File 
"C:\Users\User\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\User\anaconda3\lib\site-packages\transformers\models\informer\modeling_informer.py", line 1178, in forward hidden_states = self.value_embedding(inputs_embeds) File "C:\Users\User\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\User\anaconda3\lib\site-packages\transformers\models\informer\modeling_informer.py", line 305, in forward return self.value_projection(x) File "C:\Users\User\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\User\anaconda3\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (1536x23 and 22x32) ``` ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction ``` from huggingface_hub import hf_hub_download import torch from transformers import InformerModel file = hf_hub_download( repo_id="kashif/tourism-monthly-batch", filename="train-batch.pt", repo_type="dataset" ) batch = torch.load(file) model = InformerModel.from_pretrained("huggingface/informer-tourism-monthly") # during training, one provides both past and future values # as well as possible additional features outputs = model( past_values=batch["past_values"], past_time_features=batch["past_time_features"], past_observed_mask=batch["past_observed_mask"], static_categorical_features=batch["static_categorical_features"], static_real_features=batch["static_real_features"], future_values=batch["future_values"], future_time_features=batch["future_time_features"], ) last_hidden_state = outputs.last_hidden_state ``` ### Expected behavior no error, `outputs` contains model output
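The reported `RuntimeError` is a plain feature-width mismatch: the encoder's value-projection layer was built for 22 input features, while the batch assembled with `static_real_features` included carries 23 per time step — which is why dropping that argument, as suggested in the comments, makes the call succeed. A NumPy-only illustration of the mismatch (not the actual Informer code; the shapes are taken from the traceback):

```python
import numpy as np

# The traceback reports: mat1 is (1536 x 23), mat2 is (22 x 32).
batch_features = np.zeros((1536, 23))   # features incl. static_real_features
value_projection = np.zeros((22, 32))   # layer built for 22 features

mismatch_caught = False
try:
    batch_features @ value_projection   # 23 != 22 -> fails
except ValueError:
    mismatch_caught = True              # NumPy's analogue of the RuntimeError

# Dropping the one extra static real feature restores the expected width,
# mirroring the workaround of omitting `static_real_features`:
out = batch_features[:, :22] @ value_projection
print(out.shape)  # (1536, 32)
```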
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22572/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22571
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22571/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22571/comments
https://api.github.com/repos/huggingface/transformers/issues/22571/events
https://github.com/huggingface/transformers/issues/22571
1,654,209,185
I_kwDOCUB6oc5imTqh
22,571
seq2seq examples can't handle DataParallel
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No, it is a wrapping problem.\r\n@gante The Seq2SeqTrainer might need to do something to use the unwrapped model (which is the `self.model` attribute) instead of the model. I think just changing line 280 to use `self.model.config` instead of `model.config` will be enough.", "and a new test please! Thank you!", "This was caught by the last scheduled test, looking at the reports right now. It's just that Yih-Dar is off so didn't ping anyone on it :-)", "oh, then all is perfect testing-wise!\r\n\r\nIn the interim perhaps before merging Trainer-related PRs those slow trainer-only tests could be run locally - would require 2 gpus I think.\r\n\r\n------------------\r\n\r\nand I'd imagine the subsequent crash was not detected by the test and it's not wrapping related it seems. (part 2 of my Issue)", "I split off the 2nd issue into its own Issue https://github.com/huggingface/transformers/issues/22634 as they aren't really related\r\n\r\nSo closing this one as the first part has been resolved here https://github.com/huggingface/transformers/pull/22584" ]
1,680
1,680
1,680
CONTRIBUTOR
null
### System Info main ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This was originally reported here: https://github.com/pytorch/pytorch/issues/98102#issuecomment-1496173632 with 2+ gpus: ``` PYTHONPATH=src python examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-base --do_train --do_eval --source_lang en \ --target_lang de --source_prefix 'translate English to German: ' \ --dataset_name stas/wmt14-en-de-pre-processed --output_dir \ /tmp/tst-translation --num_train_epochs 1 --per_device_train_batch_size=1 \ --max_train_samples 10 --overwrite_output_dir --seed 1137 \ --per_device_eval_batch_size 1 --predict_with_generate --fp16 \ --max_eval_samples 10 ``` crashes: ``` [INFO|configuration_utils.py:575] 2023-04-04 09:20:48,136 >> Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.28.0.dev0" } Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 664, in <module> main() File "examples/pytorch/translation/run_translation.py", line 605, in main metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval") File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py", line 159, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2990, in evaluate output = eval_loop( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 3171, in evaluation_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, 
ignore_keys=ignore_keys) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py", line 280, in prediction_step gen_config = model.generation_config File "/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'DataParallel' object has no attribute 'generation_config' ``` Using a workaround `CUDA_VISIBLE_DEVICES=0` overcomes this problem - so we aren't dealing with wrapping properly here. But then it fails again inside eval: ``` [INFO|trainer.py:3126] 2023-04-04 09:28:07,548 >> ***** Running Evaluation ***** [INFO|trainer.py:3128] 2023-04-04 09:28:07,548 >> Num examples = 10 [INFO|trainer.py:3131] 2023-04-04 09:28:07,548 >> Batch size = 1 [INFO|configuration_utils.py:575] 2023-04-04 09:28:07,552 >> Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.28.0.dev0" } 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 3.72it/s]Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 664, in <module> main() File "examples/pytorch/translation/run_translation.py", line 605, in main metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval") File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py", line 159, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2990, in evaluate output = eval_loop( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 3278, in evaluation_loop metrics = 
self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "examples/pytorch/translation/run_translation.py", line 546, in compute_metrics decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3445, in batch_decode return [ File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3446, in <listcomp> self.decode( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3485, in decode return self._decode( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) OverflowError: out of range integral type conversion attempted ```
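The fix referenced in the comments — reading `generation_config` from the unwrapped `self.model` attribute rather than from the possibly `DataParallel`-wrapped `model` — follows the standard unwrapping pattern: torch wrappers expose the original module as `.module` but do not forward arbitrary attributes. A self-contained sketch with stand-in classes (not the actual Trainer code):

```python
class FakeModel:
    """Stand-in for a model carrying a generation_config."""
    generation_config = {"max_length": 20}

class FakeDataParallel:
    """Stand-in for torch.nn.DataParallel: wraps the model as `.module`
    and forwards no other attributes, hence the AttributeError above."""
    def __init__(self, module):
        self.module = module

def unwrap(model):
    # Recursively strip wrapper layers until the bare model is reached.
    return unwrap(model.module) if hasattr(model, "module") else model

wrapped = FakeDataParallel(FakeModel())
# wrapped.generation_config would raise AttributeError, exactly like the
# DataParallel crash in the report; unwrapping first avoids it.
assert unwrap(wrapped).generation_config["max_length"] == 20
```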
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22571/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22570
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22570/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22570/comments
https://api.github.com/repos/huggingface/transformers/issues/22570/events
https://github.com/huggingface/transformers/issues/22570
1,654,195,076
I_kwDOCUB6oc5imQOE
22,570
Add MobileViT v2
{ "login": "SunHaozhe", "id": 26926814, "node_id": "MDQ6VXNlcjI2OTI2ODE0", "avatar_url": "https://avatars.githubusercontent.com/u/26926814?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunHaozhe", "html_url": "https://github.com/SunHaozhe", "followers_url": "https://api.github.com/users/SunHaozhe/followers", "following_url": "https://api.github.com/users/SunHaozhe/following{/other_user}", "gists_url": "https://api.github.com/users/SunHaozhe/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunHaozhe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunHaozhe/subscriptions", "organizations_url": "https://api.github.com/users/SunHaozhe/orgs", "repos_url": "https://api.github.com/users/SunHaozhe/repos", "events_url": "https://api.github.com/users/SunHaozhe/events{/privacy}", "received_events_url": "https://api.github.com/users/SunHaozhe/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @SunHaozhe , I would like to work on implementing this model." ]
1,680
1,681
null
CONTRIBUTOR
null
### Model description [MobileViT](https://openreview.net/forum?id=vh-0sUt8HlG) is a computer vision model that combines CNNs with transformers and has already been added to Transformers. [MobileViT v2](https://arxiv.org/abs/2206.02680) is the second version; it is constructed by replacing the multi-headed self-attention in MobileViT v1 with the proposed separable self-attention. Does Hugging Face plan to add MobileViT v2 to Transformers? ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation The official implementation is from Apple at this link: [https://github.com/apple/ml-cvnets](https://github.com/apple/ml-cvnets) The timm library also implemented it and has pre-trained weights at this link: [https://github.com/huggingface/pytorch-image-models/blob/82cb47bcf360e1974c00c35c2aa9e242e6b5b565/timm/models/mobilevit.py](https://github.com/huggingface/pytorch-image-models/blob/82cb47bcf360e1974c00c35c2aa9e242e6b5b565/timm/models/mobilevit.py)
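For orientation, a rough NumPy sketch of separable self-attention as described in the MobileViT v2 paper — per-token context scores via a single projection, a score-weighted context vector, then a broadcast onto the gated values. This is my reading of the paper, not the reference CVNets or timm implementation; the weight shapes and ReLU gating are assumptions:

```python
import numpy as np

def separable_self_attention(x, w_i, w_k, w_v, w_o):
    """O(L) attention sketch: x is (L, d); w_i is (d, 1); others are (d, d)."""
    # context scores: one scalar per token, softmax over the sequence axis
    scores = x @ w_i                          # (L, 1)
    scores = np.exp(scores - scores.max())
    scores = scores / scores.sum()
    # context vector: score-weighted sum of the projected keys
    context = (scores * (x @ w_k)).sum(axis=0, keepdims=True)   # (1, d)
    # broadcast the context back onto the (ReLU-gated) values
    out = np.maximum(x @ w_v, 0.0) * context                    # (L, d)
    return out @ w_o

L, d = 4, 8
rng = np.random.default_rng(0)
y = separable_self_attention(
    rng.normal(size=(L, d)), rng.normal(size=(d, 1)),
    rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(d, d)))
assert y.shape == (L, d)
```

The point of the design is that no L×L attention matrix is ever materialized, which is what makes the block mobile-friendly.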
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22570/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22569
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22569/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22569/comments
https://api.github.com/repos/huggingface/transformers/issues/22569/events
https://github.com/huggingface/transformers/issues/22569
1,654,189,111
I_kwDOCUB6oc5imOw3
22,569
AttributeError: 'GPTJModel' object has no attribute 'first_device'
{ "login": "innat", "id": 17668390, "node_id": "MDQ6VXNlcjE3NjY4Mzkw", "avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4", "gravatar_id": "", "url": "https://api.github.com/users/innat", "html_url": "https://github.com/innat", "followers_url": "https://api.github.com/users/innat/followers", "following_url": "https://api.github.com/users/innat/following{/other_user}", "gists_url": "https://api.github.com/users/innat/gists{/gist_id}", "starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/innat/subscriptions", "organizations_url": "https://api.github.com/users/innat/orgs", "repos_url": "https://api.github.com/users/innat/repos", "events_url": "https://api.github.com/users/innat/events{/privacy}", "received_events_url": "https://api.github.com/users/innat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This comes from your hack of setting the attributes of the model in cell 6. This then makes the model want to try to use the old model parallel API which crashes since you didn't really use it ;-)", "Ah, I see. Sorry, it was bit confusing. As mentioned, model `fb/opt` worked. Also `abeja/gpt-neox-japanese-2.7b` worked either. Is there any easy fix for the newer API?", "It will work with any model that does not implement the `parallelize` API. As for fixes, the issue you originally psoted on will fix the models with head if needed, and the Trainer has been fixed as @younesbelkada mentioned, so you shouldn't need this hack anymore.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### System Info ``` transformers.__version__ # 4.28.0.dev0 torch.__version__ # 2.0.0+cu117 python # Python 3.7.12 ``` ### Who can help? @sgugger @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The following gist uses `model_name = 'facebook/opt-2.7b'`, which seems to work as expected. [Gist-model-parallel](https://gist.github.com/innat/e6c4826382641f640cc91def95026ad3) But for models like `'EleutherAI/gpt-j-6b'` or `gpt2`, it gives an error. ``` AttributeError: 'GPTJModel' object has no attribute 'first_device' ``` Full logs ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[14], line 20 12 trainer = Trainer( 13 model=model, 14 args=training_args, 15 data_collator=data_collator, 16 train_dataset=train_dataset, 17 ) 19 model.config.use_cache = False ---> 20 trainer.train() File /opt/conda/envs/gpt_neox/lib/python3.9/site-packages/transformers/trainer.py:1639, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1634 self.model_wrapped = self.model 1636 inner_training_loop = find_executable_batch_size( 1637 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1638 ) -> 1639 return inner_training_loop( 1640 args=args, 1641 resume_from_checkpoint=resume_from_checkpoint, 1642 trial=trial, 1643 ignore_keys_for_eval=ignore_keys_for_eval, 1644 ) File /opt/conda/envs/gpt_neox/lib/python3.9/site-packages/transformers/trainer.py:1906, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1904 tr_loss_step = self.training_step(model, inputs) 1905 else: -> 1906 tr_loss_step = self.training_step(model, inputs) 1908 if ( 1909 args.logging_nan_inf_filter 1910 and not 
is_torch_tpu_available() 1911 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)) 1912 ): 1913 # if loss is nan or inf simply add the average of previous logged losses 1914 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged) File /opt/conda/envs/gpt_neox/lib/python3.9/site-packages/transformers/trainer.py:2652, in Trainer.training_step(self, model, inputs) 2649 return loss_mb.reduce_mean().detach().to(self.args.device) 2651 with self.compute_loss_context_manager(): -> 2652 loss = self.compute_loss(model, inputs) 2654 if self.args.n_gpu > 1: 2655 loss = loss.mean() # mean() to average on multi-gpu parallel training File /opt/conda/envs/gpt_neox/lib/python3.9/site-packages/transformers/trainer.py:2684, in Trainer.compute_loss(self, model, inputs, return_outputs) 2682 else: 2683 labels = None -> 2684 outputs = model(**inputs) 2685 # Save past state if it exists 2686 # TODO: this needs to be fixed and made cleaner later. 2687 if self.args.past_index >= 0: File /opt/conda/envs/gpt_neox/lib/python3.9/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/envs/gpt_neox/lib/python3.9/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs) 163 output = old_forward(*args, **kwargs) 164 else: --> 165 output = old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File /opt/conda/envs/gpt_neox/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py:869, in GPTJForCausalLM.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 867 # Set device for model parallelism 868 if self.model_parallel: --> 869 torch.cuda.set_device(self.transformer.first_device) 870 hidden_states = hidden_states.to(self.lm_head.weight.device) 872 # make sure sampling in fp16 works correctly and 873 # compute loss in fp32 to match with mesh-tf version 874 # https://github.com/EleutherAI/gpt-neo/blob/89ce74164da2fb16179106f54e2269b5da8db333/models/gpt2/gpt2.py#L179 File /opt/conda/envs/gpt_neox/lib/python3.9/site-packages/torch/nn/modules/module.py:1614, in Module.__getattr__(self, name) 1612 if name in modules: 1613 return modules[name] -> 1614 raise AttributeError("'{}' object has no attribute '{}'".format( 1615 type(self).__name__, name)) AttributeError: 'GPTJModel' object has no attribute 'first_device' ``` ### Expected behavior Couldn't interpret the problem (`no attribute 'first_device'`), otherwise, it's expected to work same as other model.
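As the comments explain, the crash comes from hand-setting attributes that belong to the (deprecated) `parallelize` API: `first_device` only exists after `parallelize()` has run, so flipping `model_parallel = True` manually leaves the forward pass reaching for an attribute that was never created. A stand-in sketch of that interaction (hypothetical classes, not the real GPT-J code):

```python
class FakeGPTJModel:
    """Stand-in mimicking how the model-parallel flags interact."""
    def __init__(self):
        self.model_parallel = False

    def parallelize(self, device_map=None):
        # The real method also spreads layers across GPUs; the key point
        # here is that it is the only place `first_device` gets set.
        self.model_parallel = True
        self.first_device = "cuda:0"

hacked = FakeGPTJModel()
hacked.model_parallel = True                 # the notebook's manual hack
assert not hasattr(hacked, "first_device")   # -> AttributeError at forward time

proper = FakeGPTJModel()
proper.parallelize()
assert proper.first_device == "cuda:0"       # set by parallelize(), no crash
```

Models that never implemented the `parallelize` API (like the OPT checkpoint that worked) have no such branch in their forward pass, which matches the observed behavior.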
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22569/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22568
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22568/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22568/comments
https://api.github.com/repos/huggingface/transformers/issues/22568/events
https://github.com/huggingface/transformers/issues/22568
1,654,182,898
I_kwDOCUB6oc5imNPy
22,568
junk results for int8 for Flan-xl/xxl
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @i-am-neo \r\nYou should upgrade your `transformers` version and re-run your inference script as the recent releases contain a fix for T5 family models for fp16 and in8 inference\r\n\r\nhttps://github.com/huggingface/transformers/pull/20683\r\n#20760", "Thanks @younesbelkada . Still junky. Using your notebook and t5-3b-sharded, compare:\r\n```\r\ntext = \"Summarize: Hello my name is Younes and I am a Machine Learning Engineer at Hugging Face\" # outputs \"s.:s. Summarize: Hello my name is Younes.\"\r\ntext = \"summarize: Hello my name is Younes and I am a Machine Learning Engineer at Hugging Face\" # outputs \"Younes is a Machine Learning Engineer at Hugging Face.\"\r\n```\r\n\r\n", "```\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nimport torch\r\n\r\nmodel_name = \"t5-3b-sharded\"\r\n# T5-3b and T5-11B are supported!\r\n# We need sharded weights otherwise we get CPU OOM errors\r\nmodel_id=f\"ybelkada/{model_name}\"\r\n\r\n#model_id='google/flan-t5-xl'\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nmodel_8bit = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map=\"auto\", load_in_8bit=True)\r\n```\r\n\r\n\r\n```\r\n- `transformers` version: 4.29.2\r\n- Platform: Linux-5.15.107+-x86_64-with-glibc2.31\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.14.1\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 2.0.1+cu118 (True)\r\n- Tensorflow version (GPU?): 2.12.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.6.9 (gpu)\r\n- Jax version: 0.4.8\r\n- JaxLib version: 0.4.7\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: 
<fill in>\r\n```\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (gpu) - Jax version: 0.4.7 - JaxLib version: 0.4.7 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada and maybe @philschmid ? ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. made a copy of notebook [HuggingFace_bnb_int8_T5](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing) 2. set runtime hardware accelerator to GPU, standard 3. > from transformers import AutoModelForSeq2SeqLM, AutoTokenizer > import torch > > model_name = "t5-3b-sharded" # NB. T5-11B does not fit into a GPU in Colab > # T5-3b and T5-11B are supported! > # We need sharded weights otherwise we get CPU OOM errors > model_id=f"ybelkada/{model_name}" > > tokenizer = AutoTokenizer.from_pretrained(model_id) > model_8bit = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="cuda", load_in_8bit=True) 4. > model_8bit.get_memory_footprint() 5. > max_new_tokens = 400 > > text = """ > Summarize: Whether out at a restaurant or buying tickets to a concert, modern life counts on the convenience of a credit card to make daily purchases. It saves us from carrying large amounts of cash and also can advance a full purchase that can be paid over time. How do card issuers know we’ll pay back what we charge? That’s a complex problem with many existing solutions—and even more potential improvements, to be explored in this competition. 
> > Credit default prediction is central to managing risk in a consumer lending business. Credit default prediction allows lenders to optimize lending decisions, which leads to a better customer experience and sound business economics. Current models exist to help manage risk. But it's possible to create better models that can outperform those currently in use. > > American Express is a globally integrated payments company. The largest payment card issuer in the world, they provide customers with access to products, insights, and experiences that enrich lives and build business success. > > In this competition, you’ll apply your machine learning skills to predict credit default. Specifically, you will leverage an industrial scale data set to build a machine learning model that challenges the current model in production. Training, validation, and testing datasets include time-series behavioral data and anonymized customer profile information. You're free to explore any technique to create the most powerful model, from creating features to using the data in a more organic way within a model. > """ > > > input_ids = tokenizer( > text, return_tensors="pt" > ).input_ids > > if torch.cuda.is_available(): > input_ids = input_ids.to('cuda') > > outputs = model_8bit.generate(input_ids, max_new_tokens=max_new_tokens) > print(tokenizer.decode(outputs[0], skip_special_tokens=True)) Resulting output (note the series of blanks at the beginning of the result between the periods). I also tried other prompts and the results were poor/unexpected. My goal was to check that the int8 model _reliably_ produces at least similar results as the non-int8, in order to potentially use the int8 for inference. Please see comparison of results in next section from using the Hosted Inference API or spaces API. What am I missing? > . . You can also use a combination of techniques to create a model that can outperform the current model in production. 
The goal is to create a model that can outperform the current model in production. The goal is to create a model that can outperform. The ### Expected behavior something akin to: a) > ['Challenge your machine learning skills to predict credit default.'] or b) > Challenge your machine learning skills to predict credit default. a) is the result from trying a space API > response = requests.post("https://awacke1-google-flan-t5-xl.hf.space/run/predict", json={ > > "data": [ > text, > ], > "max_length": 500, > }).json() > > data = response["data"] > print(data) b) is the result from your Hosted inference API Hope you can shed light.
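One detail surfaced in the comments on this issue: the T5 checkpoint was sensitive to the exact task prefix (`summarize:` produced a sensible summary where `Summarize:` produced junk). A tiny helper that normalizes the prefix before tokenization could rule that variable out when comparing int8 against fp16 outputs. This is a hypothetical helper for illustration, not part of `transformers`:

```python
def normalize_t5_prefix(text, prefix="summarize:"):
    """Ensure the prompt starts with the lowercase task prefix T5 was trained on."""
    stripped = text.lstrip()
    head = stripped.split(":", 1)[0].strip().lower()
    if head == prefix.rstrip(":"):
        # Replace whatever capitalization the caller used with the canonical prefix.
        return prefix + stripped.split(":", 1)[1]
    # No recognizable prefix at all: prepend the canonical one.
    return f"{prefix} {stripped}"

print(normalize_t5_prefix("Summarize: Hello my name is Younes"))
# → summarize: Hello my name is Younes
```

Feeding the normalized string to the tokenizer removes prompt-casing as a confounder before attributing output differences to quantization.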
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22568/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22567
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22567/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22567/comments
https://api.github.com/repos/huggingface/transformers/issues/22567/events
https://github.com/huggingface/transformers/issues/22567
1,654,115,824
I_kwDOCUB6oc5il83w
22,567
Unable to import VGG16 model transformers
{ "login": "NagaVenkataSaiM", "id": 87435205, "node_id": "MDQ6VXNlcjg3NDM1MjA1", "avatar_url": "https://avatars.githubusercontent.com/u/87435205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NagaVenkataSaiM", "html_url": "https://github.com/NagaVenkataSaiM", "followers_url": "https://api.github.com/users/NagaVenkataSaiM/followers", "following_url": "https://api.github.com/users/NagaVenkataSaiM/following{/other_user}", "gists_url": "https://api.github.com/users/NagaVenkataSaiM/gists{/gist_id}", "starred_url": "https://api.github.com/users/NagaVenkataSaiM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NagaVenkataSaiM/subscriptions", "organizations_url": "https://api.github.com/users/NagaVenkataSaiM/orgs", "repos_url": "https://api.github.com/users/NagaVenkataSaiM/repos", "events_url": "https://api.github.com/users/NagaVenkataSaiM/events{/privacy}", "received_events_url": "https://api.github.com/users/NagaVenkataSaiM/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "That code sample is plain wrong, there is no VGG16 in Transformers.", "@sgugger Hi, can you please suggest me how to use my vgg16 model using transformers?Also i am new to transformers model i am sorry if i made any mistake in uploading my model.Could please redirect me to some helpful resources for vgg16 with transformers?\r\n\r\n" ]
1,680
1,680
null
NONE
null
### Model description I have recently uploaded my trained VGG16 model to Hugging Face. After uploading, I received a prompt with instructions for using my model. Although I followed the prompt, I got errors. [https://huggingface.co/Nvsai/DeviceClassification](url) >>> from transformers import VGG16 Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'VGG16' from 'transformers' (/mnt/mydrive/ubantu/programming/openvino/lib/python3.9/site-packages/transformers/__init__.py) >>> >>> model = VGG16.from_pretrained("Nvsai/DeviceClassification") ![Screenshot from 2023-04-04 20-45-24](https://user-images.githubusercontent.com/87435205/229841900-e12cee0f-69a1-4dd5-9332-2f65f177e8cf.png) ![Screenshot from 2023-04-04 20-45-56](https://user-images.githubusercontent.com/87435205/229841929-812f7eb6-58e1-4919-aff6-35200aee426c.png) ### Open source status - [ ] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22567/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22566
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22566/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22566/comments
https://api.github.com/repos/huggingface/transformers/issues/22566/events
https://github.com/huggingface/transformers/issues/22566
1,654,108,574
I_kwDOCUB6oc5il7Ge
22,566
Support Streaming to Other Locations Besides STDOUT
{ "login": "sam-h-bean", "id": 43734688, "node_id": "MDQ6VXNlcjQzNzM0Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/43734688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sam-h-bean", "html_url": "https://github.com/sam-h-bean", "followers_url": "https://api.github.com/users/sam-h-bean/followers", "following_url": "https://api.github.com/users/sam-h-bean/following{/other_user}", "gists_url": "https://api.github.com/users/sam-h-bean/gists{/gist_id}", "starred_url": "https://api.github.com/users/sam-h-bean/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sam-h-bean/subscriptions", "organizations_url": "https://api.github.com/users/sam-h-bean/orgs", "repos_url": "https://api.github.com/users/sam-h-bean/repos", "events_url": "https://api.github.com/users/sam-h-bean/events{/privacy}", "received_events_url": "https://api.github.com/users/sam-h-bean/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "@sam-h-bean an iterator class was merged yesterday, but I haven't communicated about it :)\r\n\r\nYou can check its implementation [here](https://github.com/huggingface/transformers/blob/fc5b7419d4c8121d8f1fa915504bcc353422559e/src/transformers/generation/streamers.py#L125). This would be what you are looking for, correct?\r\n\r\nEDIT: communicated [here](https://twitter.com/joao_gante/status/1643330507093196800)", "@gante Is there going to be an option for using this with the pipelines API? I would like to incorporate this feature into langchain but that currently only supports the pipeline API.", "@sam-h-bean yes, it is in the works! :D ", "@gante What about dynamic batching combined with streaming? If I wanted to support dynamic batching for an LLM because I expected a high amount of throughput but I wanted to stream tokens back to each client individually how would I accomplish that?", "@sam-h-bean for now only the [text-generation-inference](https://github.com/huggingface/text-generation-inference) supports it. \r\n\r\nI'd like to add it to `transformers` sometime in the future, but it definitely won't happen in the next months.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
CONTRIBUTOR
null
### Feature request I would like to build a token streaming model that sends the tokens to a web socket or SSE connection. Today I would need to redirect stdout to the other location, which is a pain. Instead I would like to receive a raw Python generator from the TextStreamer object that I can iterate over in any way I need. ### Motivation I'd like to emulate something like https://github.com/hyperonym/basaran but in native HF code. ### Your contribution TBD
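The generator-style consumption requested above can be sketched without any model at all: a producer thread pushes text chunks into a queue and the consumer simply iterates. The class below is a minimal stand-in illustrating the pattern, not the actual `transformers` iterator-streamer implementation:

```python
import queue
import threading

class IteratorStreamer:
    """Minimal stand-in for an iterator streamer: the producer calls
    put()/end(), the consumer just iterates over the object."""
    _END = object()

    def __init__(self):
        self._queue = queue.Queue()

    def put(self, text):
        self._queue.put(text)

    def end(self):
        self._queue.put(self._END)

    def __iter__(self):
        while True:
            item = self._queue.get()
            if item is self._END:
                return
            yield item

def fake_generate(streamer):
    # Stands in for model.generate(..., streamer=streamer): emits tokens
    # one at a time, then signals the end of generation.
    for token in ["Hello", " ", "world"]:
        streamer.put(token)
    streamer.end()

streamer = IteratorStreamer()
thread = threading.Thread(target=fake_generate, args=(streamer,))
thread.start()
chunks = [chunk for chunk in streamer]  # forward each chunk to a websocket/SSE here
thread.join()
print("".join(chunks))  # → Hello world
```

In a real setup the producer would be `model.generate(...)` running in a background thread, and each chunk in the consumer loop would be written to the websocket or SSE response instead of collected into a list.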
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22566/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22565
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22565/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22565/comments
https://api.github.com/repos/huggingface/transformers/issues/22565/events
https://github.com/huggingface/transformers/issues/22565
1,654,074,074
I_kwDOCUB6oc5ilyra
22,565
VisionEncoderDecoderModel ONNX Conversion - TrOCR
{ "login": "RichardRivaldo", "id": 60037073, "node_id": "MDQ6VXNlcjYwMDM3MDcz", "avatar_url": "https://avatars.githubusercontent.com/u/60037073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RichardRivaldo", "html_url": "https://github.com/RichardRivaldo", "followers_url": "https://api.github.com/users/RichardRivaldo/followers", "following_url": "https://api.github.com/users/RichardRivaldo/following{/other_user}", "gists_url": "https://api.github.com/users/RichardRivaldo/gists{/gist_id}", "starred_url": "https://api.github.com/users/RichardRivaldo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RichardRivaldo/subscriptions", "organizations_url": "https://api.github.com/users/RichardRivaldo/orgs", "repos_url": "https://api.github.com/users/RichardRivaldo/repos", "events_url": "https://api.github.com/users/RichardRivaldo/events{/privacy}", "received_events_url": "https://api.github.com/users/RichardRivaldo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Rocketknight1 maybe?", "@sgugger pinging since there's no response", "It looks like this bug is arising in ONNX export of a PyTorch model, which I don't know too much about!", "I'm quite confused on this one. Any other workarounds on this? I did read some ways like JIT or using the export function of Torch, but not quite sure on how to do it, especially the input part.", "Would need help referring this issue to others @Rocketknight1 @sgugger, appreciate it! :D", "@sgugger @Rocketknight1 I'm also facing this same issue. Any help would be much appreciated. Thanks! ", "@NielsRogge @michaelbenayoun ", "Hi,\r\nCould you try with [Optimum](https://github.com/huggingface/optimum)?\r\n\r\n```\r\noptimum-cli export onnx -m trocr/base/ --task vision2seq-lm onnx/ --atol 1e-3\r\n```\r\nTrying to pinpoint if it comes from the exporting tool or really from some information lacking in the `preprocessor_config.json` file.", "Hi @michaelbenayoun, thank you for the response. Yes, I retried using Optimum and it works. I then continued my conversion to TF and TFLite with these commands.\r\n\r\n```\r\noptimum-cli export onnx --model base/ onnx/ --task vision2seq-lm\r\n\r\nonnx-tf convert -i onnx/encoder_model.onnx -o encoder/\r\nonnx-tf convert -i onnx/decoder_model.onnx -o decoder/\r\n\r\ntflite_convert --saved_model_dir=encoder/ --output_file=encoder.tflite\r\ntflite_convert --saved_model_dir=decoder/ --output_file=decoder.tflite\r\n```\r\n\r\nWhen I check the encoder input shape to use it for inference, I got the following:\r\n```\r\n[{'name': 'serving_default_pixel_values:0',\r\n 'index': 0,\r\n 'shape': array([1, 1, 1, 1], dtype=int32),\r\n 'shape_signature': array([-1, -1, -1, -1], dtype=int32),\r\n 'dtype': numpy.float32,\r\n 'quantization': (0.0, 0),\r\n 'quantization_parameters': {'scales': array([], dtype=float32),\r\n 'zero_points': array([], dtype=int32),\r\n 'quantized_dimension': 0},\r\n 'sparsity_parameters': {}}]\r\n```\r\n\r\nAny idea on how to fix this? 
It can't be the correct expected shape right?", "We support also the export to TFLIte directly in Optimum, but not for TrOCR yet, just letting you know.\r\n\r\nAbout your issue, if I understand correctly you convert the ONNX models to a TensorFlow SavedModels. \r\n\r\nOnce you have done that, I would suggest convert those SavedModels to TFLite programatically, for each SavedModel try:\r\n\r\n1. Load the SavedModel\r\n2. Create a `tf.function` with the proper input signature from it: \r\n```python\r\nfunc = tf.function(loaded_model, input_signature=[tf.TensorSpec([shape here], dtype=torch.float32)])\r\n```\r\n4. Create a concrete function from `func`:\r\n```python\r\nconcrete_func = func.get_concrete_function()\r\n```\r\n5. Convert the concrete function to TFLite following this [example](https://www.tensorflow.org/lite/models/convert/convert_models?hl=fr#convert_concrete_functions_)\r\n\r\nTell me if it works!", "Wow, thank you for the heads-up @michaelbenayoun, that Optimum feature is surely awaited! \r\n\r\nAnyway, I tried your suggestion. Currently:\r\n```\r\nmodel = tf.saved_model.load(\"converted/tf/encoder/\")\r\nfunc = tf.function(model, input_signature=[tf.TensorSpec([1, 384, 384, 3], dtype=tf.float32)])\r\nconcrete_func = func.get_concrete_function()\r\n```\r\n\r\nHowever, I got this error from the concrete function getter:\r\n```\r\n ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got:\r\n Positional arguments (1 total):\r\n * <tf.Tensor 'None_0:0' shape=(1, 384, 384, 3) dtype=float32>\r\n Keyword arguments: {}\r\n \r\n Expected these arguments to match one of the following 1 option(s):\r\n \r\n Option 1:\r\n Positional arguments (0 total):\r\n * \r\n Keyword arguments: {'pixel_values': TensorSpec(shape=(None, None, None, None), dtype=tf.float32, name='pixel_values')}\r\n```\r\n\r\nFrom my research I think this is because the shape is incorrect, but I don't know how to reshape the input. 
Any other suggestion on this? TIA! :D", "I think it's because it does not recognize the input signature. \r\n\r\nCould you try:\r\n```python\r\nfunc = tf.function(model, input_signature=[tf.TensorSpec([1, 384, 384, 3], dtype=tf.float32, name=\"pixel_values\")])\r\n```", "Nope, still got the same error with that. ", "any updates on this,\r\nI am also facing this issue @RichardRivaldo @michaelbenayoun ", "no @textyash20 have you found the solution for this?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Does anyone found the solution?", "Open file preprocessor_config.json in pretrained model on HuggingFace, you will see \"feature_extractor_type\" \r\nExample in trocr-small-handwritten \"feature_extractor_type\": \"DeiTFeatureExtractor\"\r\nOpen and paste it into your preprocessor_config.json" ]
1,680
1,693
1,686
NONE
null
I want to convert my TrOCR model into TFLite version. To do that, based on my understanding, I need to convert it first to ONNX, then to TF, and lastly to TFLite. I stumbled upon [#19604](https://github.com/huggingface/transformers/pull/19254). However, it's a bit different. In my case, I used the `trainer.save` function to save my finetuned TrOCR model. As a result, I got the checkpoint files and also these files: ``` config.json generation_config.json preprocessor_config.json pytorch_model.bin training_args.bin ``` Command I used: ``` python -m transformers.onnx --model=trocr/base/ --feature=vision2seq-lm onnx/ --atol 1e-3 ``` Error that I still got: ``` ValueError: Unrecognized feature extractor in base/. Should have a `feature_extractor_type` key in its preprocessor_config.json of config.json, or one of the following `model_type` keys in its config.json: audio-spectrogram-transformer, beit, chinese_clip, clap, clip, clipseg, conditional_detr, convnext, cvt, data2vec-audio, data2vec-vision, deformable_detr, deit, detr, dinat, donut-swin, dpt, flava, glpn, groupvit, hubert, imagegpt, layoutlmv2, layoutlmv3, levit, maskformer, mctct, mobilenet_v1, mobilenet_v2, mobilevit, nat, owlvit, perceiver, poolformer, regnet, resnet, segformer, sew, sew-d, speech_to_text, speecht5, swin, swinv2, table-transformer, timesformer, tvlt, unispeech, unispeech-sat, van, videomae, vilt, vit, vit_mae, vit_msn, wav2vec2, wav2vec2-conformer, wavlm, whisper, xclip, yolos ``` In the `config.json`, I have both `trocr` and `vision-encoder-decoder` as the model type, which is not included in the list given by the error. Any other way to do this?
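The `ValueError` above says the exporter cannot find a `feature_extractor_type` key in `preprocessor_config.json`; one workaround reported later in this thread is to add that key, copying the value the pretrained TrOCR checkpoints declare (`DeiTFeatureExtractor`). A minimal sketch of patching the config dict — the file contents shown here are hypothetical, and the right class name depends on the base checkpoint you finetuned from:

```python
import json

# Hypothetical contents of the finetuned checkpoint's preprocessor_config.json,
# which lacks the key the ONNX exporter looks up.
cfg = {"do_resize": True, "size": 384, "image_mean": [0.5, 0.5, 0.5]}

# Pretrained TrOCR checkpoints on the Hub set this to DeiTFeatureExtractor;
# copy the value that matches the base model you finetuned from.
cfg.setdefault("feature_extractor_type", "DeiTFeatureExtractor")

print(json.dumps(cfg, indent=2))
```

After writing the patched dict back to `preprocessor_config.json`, the export command should be able to resolve the feature extractor class.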
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22565/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22564
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22564/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22564/comments
https://api.github.com/repos/huggingface/transformers/issues/22564/events
https://github.com/huggingface/transformers/pull/22564
1,654,047,179
PR_kwDOCUB6oc5NmBZY
22,564
a possible bug in function find_mismatched_keys
{ "login": "Yangr116", "id": 73805072, "node_id": "MDQ6VXNlcjczODA1MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/73805072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yangr116", "html_url": "https://github.com/Yangr116", "followers_url": "https://api.github.com/users/Yangr116/followers", "following_url": "https://api.github.com/users/Yangr116/following{/other_user}", "gists_url": "https://api.github.com/users/Yangr116/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yangr116/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yangr116/subscriptions", "organizations_url": "https://api.github.com/users/Yangr116/orgs", "repos_url": "https://api.github.com/users/Yangr116/repos", "events_url": "https://api.github.com/users/Yangr116/events{/privacy}", "received_events_url": "https://api.github.com/users/Yangr116/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22564). All of your documentation changes will be reflected on that endpoint.", "cc @younesbelkada \r\nThis is not the right fix as `loaded_keys` should only contain keys that are in the `state_dict` and we are looping other that in this piece of code.", "In line [3054-3061](https://github.com/huggingface/transformers/blob/11fd2c773b11c3fcfe0fa25aa4b92db03c83636c/src/transformers/modeling_utils.py#L3054-L3061), the ```original_loaded_keys``` is input to the inner function ```_find_mismatched_keys```.\r\n\r\nBy using ```python -m pdb xxx.py```, I found that some keys in ```original_loaded_keys ``` are not in the ```state_dict``` . \r\nSo, in the line [2977](https://github.com/huggingface/transformers/blob/11fd2c773b11c3fcfe0fa25aa4b92db03c83636c/src/transformers/modeling_utils.py#L2977) , a ```KeyError``` is raised.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ### Issue When setting ```ignore_mismatched_sizes=True``` in ```Blip2ForConditionalGeneration```, a KeyError is raised. ### Reproduce * blip2-flan-t5-xl ``` from transformers import Blip2ForConditionalGeneration model_name = "Salesforce/blip2-flan-t5-xl" model = Blip2ForConditionalGeneration.from_pretrained(model_name, ignore_mismatched_sizes=True) ``` the output is ```KeyError: 'language_model.decoder.block.0.layer.2.DenseReluDense.wi_1.weight'``` * blip2-opt-2.7b ``` from transformers import Blip2ForConditionalGeneration model_name = "Salesforce/blip2-opt-2.7b" model = Blip2ForConditionalGeneration.from_pretrained(model_name, ignore_mismatched_sizes=True) ``` the output is ```KeyError: 'language_model.lm_head.weight'``` Fixes #22563 (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? 
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [√ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22564/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22564", "html_url": "https://github.com/huggingface/transformers/pull/22564", "diff_url": "https://github.com/huggingface/transformers/pull/22564.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22564.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22563
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22563/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22563/comments
https://api.github.com/repos/huggingface/transformers/issues/22563/events
https://github.com/huggingface/transformers/issues/22563
1,654,044,354
I_kwDOCUB6oc5ilrbC
22,563
KeyError when setting ignore_mismatched_sizes=True in Blip2ForConditionalGeneration
{ "login": "Yangr116", "id": 73805072, "node_id": "MDQ6VXNlcjczODA1MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/73805072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yangr116", "html_url": "https://github.com/Yangr116", "followers_url": "https://api.github.com/users/Yangr116/followers", "following_url": "https://api.github.com/users/Yangr116/following{/other_user}", "gists_url": "https://api.github.com/users/Yangr116/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yangr116/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yangr116/subscriptions", "organizations_url": "https://api.github.com/users/Yangr116/orgs", "repos_url": "https://api.github.com/users/Yangr116/repos", "events_url": "https://api.github.com/users/Yangr116/events{/privacy}", "received_events_url": "https://api.github.com/users/Yangr116/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I fixed this possible issue in #22564, and I would like to know whether there are any other reasons.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### System Info ### Issue When setting ```ignore_mismatched_sizes=True``` in ```Blip2ForConditionalGeneration```, a KeyError is raised. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ### Env ``` transformers==4.28.0.dev0 ``` ### Reproduction * blip2-flan-t5-xl ``` from transformers import Blip2ForConditionalGeneration model_name = "Salesforce/blip2-flan-t5-xl" model = Blip2ForConditionalGeneration.from_pretrained(model_name, ignore_mismatched_sizes=True) ``` The output is ```KeyError: 'language_model.decoder.block.0.layer.2.DenseReluDense.wi_1.weight'``` * blip2-opt-2.7b ``` from transformers import Blip2ForConditionalGeneration model_name = "Salesforce/blip2-opt-2.7b" model = Blip2ForConditionalGeneration.from_pretrained(model_name, ignore_mismatched_sizes=True) ``` The output is ```KeyError: 'language_model.lm_head.weight'``` ### Expected behavior I would like to revise the input resolution, which requires setting ```ignore_mismatched_sizes=True```.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22563/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22562
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22562/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22562/comments
https://api.github.com/repos/huggingface/transformers/issues/22562/events
https://github.com/huggingface/transformers/issues/22562
1,654,001,518
I_kwDOCUB6oc5ilg9u
22,562
A potential bug in get_class_in_module by using subprocess to copy files among temp dir.
{ "login": "maofagui", "id": 9445799, "node_id": "MDQ6VXNlcjk0NDU3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/9445799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maofagui", "html_url": "https://github.com/maofagui", "followers_url": "https://api.github.com/users/maofagui/followers", "following_url": "https://api.github.com/users/maofagui/following{/other_user}", "gists_url": "https://api.github.com/users/maofagui/gists{/gist_id}", "starred_url": "https://api.github.com/users/maofagui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maofagui/subscriptions", "organizations_url": "https://api.github.com/users/maofagui/orgs", "repos_url": "https://api.github.com/users/maofagui/repos", "events_url": "https://api.github.com/users/maofagui/events{/privacy}", "received_events_url": "https://api.github.com/users/maofagui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This has been fixed by #22537 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.16 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no need - Using distributed or parallel set-up in script?: no need ### Who can help? @ydshieh @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The function in `dynamic_module_utils.py` as below will cause a `No such file or directory` error in parallel env (namely multiple process). ``` def get_class_in_module(class_name, module_path): """ Import a module on the cache directory for modules and extract a class from it. """ with tempfile.TemporaryDirectory() as tmp_dir: module_dir = Path(HF_MODULES_CACHE) / os.path.dirname(module_path) module_file_name = module_path.split(os.path.sep)[-1] + ".py" # Copy to a temporary directory. We need to do this in another process to avoid strange and flaky error # `ModuleNotFoundError: No module named 'transformers_modules.[module_dir_name].modeling'` shutil.copy(f"{module_dir}/{module_file_name}", tmp_dir) # On Windows, we need this character `r` before the path argument of `os.remove` cmd = f'import os; os.remove(r"{module_dir}{os.path.sep}{module_file_name}")' # We don't know which python binary file exists in an environment. For example, if `python3` exists but not # `python`, the call `subprocess.run(["python", ...])` gives `FileNotFoundError` (about python binary). Notice # that, if the file to be removed is not found, we also have `FileNotFoundError`, but it is not raised to the # caller's process. 
try: subprocess.run(["python", "-c", cmd]) except FileNotFoundError: try: subprocess.run(["python3", "-c", cmd]) except FileNotFoundError: pass # copy back the file that we want to import shutil.copyfile(f"{tmp_dir}/{module_file_name}", f"{module_dir}/{module_file_name}") # import the module module_path = module_path.replace(os.path.sep, ".") module = importlib.import_module(module_path) return getattr(module, class_name) ``` The below error can be reproduced by the same code fragment in [issue22555](https://github.com/huggingface/transformers/issues/22555). ``` Process p4: /var/folders/pv/nyl4rqb54tq1bslm06h34m840000gp/T/tmpebghdmvd/configuration_glm.py Traceback (most recent call last): File "/opt/miniconda3/envs/py38_torch/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/opt/miniconda3/envs/py38_torch/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Users/mfg/Code/torch_submit/mp_load_model/main.py", line 9, in func model = AutoModel.from_pretrained(local_dir, trust_remote_code=True) File "/Users/mfg/Code/transformers/src/transformers/models/auto/auto_factory.py", line 441, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/Users/mfg/Code/transformers/src/transformers/models/auto/configuration_auto.py", line 923, in from_pretrained config_class = get_class_from_dynamic_module( File "/Users/mfg/Code/transformers/src/transformers/dynamic_module_utils.py", line 400, in get_class_from_dynamic_module return get_class_in_module(class_name, final_module.replace(".py", "")) File "/Users/mfg/Code/transformers/src/transformers/dynamic_module_utils.py", line 178, in get_class_in_module module = importlib.import_module(module_path) File "/opt/miniconda3/envs/py38_torch/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File 
"<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 839, in exec_module File "<frozen importlib._bootstrap_external>", line 975, in get_code File "<frozen importlib._bootstrap_external>", line 1032, in get_data FileNotFoundError: [Errno 2] No such file or directory: '/Users/mfg/.cache/huggingface/modules/transformers_modules/glm-10b/configuration_glm.py' ``` This error only occurs in single machine for the reason of race condition on a same file or dir. So I think it is a good way to solve it by using FileLock as below: ``` import fcntl import os class FileLock(object): def __init__(self, file_path): self.file_path = file_path self.fd = None def __enter__(self): while True: try: self.fd = os.open(self.file_path, os.O_RDWR | os.O_CREAT) fcntl.lockf(self.fd, fcntl.LOCK_EX) return except: pass def __exit__(self, exc_type, exc_val, exc_tb): fcntl.lockf(self.fd, fcntl.LOCK_UN) os.close(self.fd) ``` Usage: ``` def get_class_in_module(class_name, module_path): """ Import a module on the cache directory for modules and extract a class from it. """ with tempfile.TemporaryDirectory() as tmp_dir: module_dir = Path(HF_MODULES_CACHE) / os.path.dirname(module_path) module_file_name = module_path.split(os.path.sep)[-1] + ".py" lock_file = f"./transformers/{module_file_name}_lockfile" lock = FileLock(lock_file) with lock: # Copy to a temporary directory. We need to do this in another process to avoid strange and flaky error # `ModuleNotFoundError: No module named 'transformers_modules.[module_dir_name].modeling'` shutil.copy(f"{module_dir}/{module_file_name}", tmp_dir) ... ``` I have test this solution in my local machine and meet no error any more. Any comments are welcome~ ### Expected behavior Can work in multiple process.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22562/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22561
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22561/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22561/comments
https://api.github.com/repos/huggingface/transformers/issues/22561/events
https://github.com/huggingface/transformers/issues/22561
1,653,950,092
I_kwDOCUB6oc5ilUaM
22,561
Make all Transformer models compatible with model parallelism
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "I think I can help with this Issue :) ", "I would like to work on this issue - BART model :)", "Hi, I can take this up 🙌🏻\r\n", "Indeed, this fix is required for BLOOM. https://github.com/huggingface/transformers/compare/main...zsc:transformers:main (my fix is hacky and not PR-ready. Just FYI)", "Just to make sure does `LlamaForCausalLM` supports this feature already?(https://github.com/huggingface/transformers/issues/22546 ) it seems that, still there are some errors when using `device_map=\"auto\"` for this task.", "Hi, I'd like to pick up the GPT-2 model!", "Hi! I am taking this up for `LlamaForSequenceClassification`. ", "> Just to make sure does `LlamaForCausalLM` supports this feature already?(#22546 ) it seems that, still there are some errors when using `device_map=\"auto\"` for this task.\r\n\r\nIt does (#22329). I have started seeing similar errors to #22546, but only after updating my drivers from 525 to 530, similar to https://github.com/huggingface/transformers/issues/22546#issuecomment-1498348442\r\n\r\n(which is good news to me, I had no idea why that gpu started disappearing occasionally. It seems it can happen when that gpu is under any load, not just during training)\r\n\r\nEdit: seems like the errors I was getting were actually caused by GPU sag. I haven't yet reproduced that exact error, but it has been reported elsewhere. It is certainly not consistent though.", "@younesbelkada @sgugger \r\nDoes this fix (moving label/logit to same device) supposed to work (model parallelism) for all models (listed above)? Or, a crucial step toward it? Also, this design fix is only for pytorch model and not for jax or tf?", "I think it is supposed to work for all models listed above, as long as you are loading your model with `device_map=xxx`. 
And yes this should be for Pytorch only, though I am not really aware of how model parallelism work on TF & Jax", "> I think it is supposed to work for all models listed above, as long as you are loading your model with device_map=xxx\r\n\r\nI tried with such fix here https://github.com/huggingface/transformers/pull/22591#issuecomment-1498013324 but sadly it didn't work out. Any catch?", "@sgugger \r\nAs the goal of this ticket is to enable model parallelism with easy fix, have the merged PR(s) checked on multi-gpu? I couldn't find any test script here https://github.com/huggingface/transformers/pull/22663/ regarding that .", "I would love to work with BridgeTower", "Hi. I would like to try with \"Whisper\"", "I'd like to claim OPT model if no one else has picked it up.", "Taking this up for the remaining GPT models", "Hello, I just completed the GPT-J code. Just filling in the PR now.", "Hello! I'd like to work in Whisper model", "Hi, is there any model on which I can work, please? Thanks.", "Is there any remaining model on which I can work ? Thanks .", "@sgugger Hello, can I work on the JukeBox?", "Hello @sgugger , I'd like to work on `m2m100`", "@sgugger I would love to work on CodeGen if it is unclaimed", "Hi @sgugger I can work on `Luke` if it has not been taken", "@sgugger I would like to work on SwitchTransformer, if not taken.", "@sgugger I think all transformers are covered, I have checked for others also...for example, switch transformers have parallelism implemented already. i think we can close this issue. The only pending models are clip,jukebox,owlvit, and Nllb , may be model parallelism is not applicable for some of there models\r\n", "Indeed, all models have been covered. Thanks a lot everyone!" ]
1,680
1,682
1,682
COLLABORATOR
null
Accelerate makes it easy to load a model on multiple GPUs with `device_map="auto"`. This in turn allows users to train models with naive model parallelism if they have several GPUs. A problem that happens in Transformers with models with heads (so not XxxModel but, for instance, XxxModelForSequenceClassification) is that the labels end up on a different device than the logits, and there is a device mismatch error. Thankfully, there is an easy fix for that! #22535 shows how to fix this for T5 by just moving the labels to the same device as the logits they are compared to. This is a no-op when the devices are the same, and fixes the issue if the devices are different. We would like help from the community to extend this to all models that support model parallelism, which are: - [x] BART - [x] BigBirdPegasus - [x] BLIP2 - [x] BLOOM - [x] BridgeTower - [x] CamemBERT - [x] CLIP - [x] CLIPSeg - [x] CodeGen - [x] Data2Vec Text - [x] Deit - [x] ESM - [x] GPT-2 - [x] GPT-Neo - [x] GPT-NeoX - [x] GPT-NeoX Japanese - [x] GPT-J - [x] GPT-San - [x] JukeBox - [x] Lilt - [x] LLaMA (`LlamaForSequenceClassification` only) - [x] Longformer - [x] LongT5 - [x] Luke - [x] M2M100 - [x] mBART - [x] mT5 - [x] NLLB - [x] OPT - [x] Owl-ViT - [x] Pix2Struct - [x] PLBART - [x] RoBERTa - [x] RoBERTa PreLayerNorm - [x] SwitchTransformer - [x] T5 - [x] Vilt - [x] ViT - [x] ViT-Hybrid - [x] Whisper - [x] XLM-RoBERTa If you would like to grab one of those models and apply the same fix as #22535 to all the models with heads, please leave a comment here!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22561/reactions", "total_count": 10, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 5, "rocket": 3, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/22561/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22560
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22560/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22560/comments
https://api.github.com/repos/huggingface/transformers/issues/22560/events
https://github.com/huggingface/transformers/pull/22560
1,653,931,978
PR_kwDOCUB6oc5Nlofw
22,560
[WIP]🌐 [i18n-KO] Translated `tasks/translation.mdx` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Due to WSL compatibility issues causing unreachable commits, I have opened another PR on a different branch.", "Closing in favor of https://github.com/huggingface/transformers/pull/22678" ]
1,680
1,681
1,681
CONTRIBUTOR
null
<!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" --> # What does this PR do? Partially translated the `tasks/translation.mdx` file of the documentation to Korean. I will finish off the rest by Thursday if possible. PseudoLab team members will review the quality of the translation by then. Thank you in advance for your review. ❤️ Part of https://github.com/huggingface/transformers/issues/20179 <!-- This leaves a record in the main issue! Please remove this comment when practicing with the PseudoLab repo! :smile: --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- This is the pre-submission checklist; it might be even better to wrap a PseudoLab-specific checklist in <details> as well. --> ## Who can review? <!-- Please expose the comment below, which requests a review from Hugging Face staff, only after the review by the PseudoLab team members is complete! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22560/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22560", "html_url": "https://github.com/huggingface/transformers/pull/22560", "diff_url": "https://github.com/huggingface/transformers/pull/22560.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22560.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22559
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22559/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22559/comments
https://api.github.com/repos/huggingface/transformers/issues/22559/events
https://github.com/huggingface/transformers/pull/22559
1,653,749,415
PR_kwDOCUB6oc5NlBAl
22,559
fixing a bug about gradient accumulation in codeparrot_training
{ "login": "ArmelRandy", "id": 76953833, "node_id": "MDQ6VXNlcjc2OTUzODMz", "avatar_url": "https://avatars.githubusercontent.com/u/76953833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArmelRandy", "html_url": "https://github.com/ArmelRandy", "followers_url": "https://api.github.com/users/ArmelRandy/followers", "following_url": "https://api.github.com/users/ArmelRandy/following{/other_user}", "gists_url": "https://api.github.com/users/ArmelRandy/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArmelRandy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArmelRandy/subscriptions", "organizations_url": "https://api.github.com/users/ArmelRandy/orgs", "repos_url": "https://api.github.com/users/ArmelRandy/repos", "events_url": "https://api.github.com/users/ArmelRandy/events{/privacy}", "received_events_url": "https://api.github.com/users/ArmelRandy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22559). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
# What does this PR do? The gradient accumulation was not pausing. To fix this issue, I modified the training loop to make better use of `Accelerator` for handling gradient accumulation. I also modified the declaration of the accelerator to include the argument `gradient_accumulation_steps`. To be tested. Fixes #22541 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue #22541 ? - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @lvwerra @loubnabnl
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22559/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22559", "html_url": "https://github.com/huggingface/transformers/pull/22559", "diff_url": "https://github.com/huggingface/transformers/pull/22559.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22559.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22558
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22558/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22558/comments
https://api.github.com/repos/huggingface/transformers/issues/22558/events
https://github.com/huggingface/transformers/pull/22558
1,653,703,377
PR_kwDOCUB6oc5Nk3IT
22,558
Add id2label and label2id to model's config in run_xnli
{ "login": "maziyarpanahi", "id": 5762953, "node_id": "MDQ6VXNlcjU3NjI5NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/5762953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maziyarpanahi", "html_url": "https://github.com/maziyarpanahi", "followers_url": "https://api.github.com/users/maziyarpanahi/followers", "following_url": "https://api.github.com/users/maziyarpanahi/following{/other_user}", "gists_url": "https://api.github.com/users/maziyarpanahi/gists{/gist_id}", "starred_url": "https://api.github.com/users/maziyarpanahi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maziyarpanahi/subscriptions", "organizations_url": "https://api.github.com/users/maziyarpanahi/orgs", "repos_url": "https://api.github.com/users/maziyarpanahi/repos", "events_url": "https://api.github.com/users/maziyarpanahi/events{/privacy}", "received_events_url": "https://api.github.com/users/maziyarpanahi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Models fine-tuned via the `run_xnli.py` script don't have any labels in their `id2label` and `label2id` fields in the config. They are just placeholders like LABEL_0, etc. This is similar to this issue #2487 and is based on this PR #2945 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22558/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22558", "html_url": "https://github.com/huggingface/transformers/pull/22558", "diff_url": "https://github.com/huggingface/transformers/pull/22558.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22558.patch", "merged_at": 1680614938000 }
https://api.github.com/repos/huggingface/transformers/issues/22557
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22557/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22557/comments
https://api.github.com/repos/huggingface/transformers/issues/22557/events
https://github.com/huggingface/transformers/pull/22557
1,653,670,726
PR_kwDOCUB6oc5Nkv-h
22,557
corrected the code comment for the output of find_pruneable_heads_and_indices
{ "login": "SunHaozhe", "id": 26926814, "node_id": "MDQ6VXNlcjI2OTI2ODE0", "avatar_url": "https://avatars.githubusercontent.com/u/26926814?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunHaozhe", "html_url": "https://github.com/SunHaozhe", "followers_url": "https://api.github.com/users/SunHaozhe/followers", "following_url": "https://api.github.com/users/SunHaozhe/following{/other_user}", "gists_url": "https://api.github.com/users/SunHaozhe/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunHaozhe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunHaozhe/subscriptions", "organizations_url": "https://api.github.com/users/SunHaozhe/orgs", "repos_url": "https://api.github.com/users/SunHaozhe/repos", "events_url": "https://api.github.com/users/SunHaozhe/events{/privacy}", "received_events_url": "https://api.github.com/users/SunHaozhe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? This PR improves the docs: the code comment of `find_pruneable_heads_and_indices` was incorrect. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @anmolsjoshi @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22557/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22557", "html_url": "https://github.com/huggingface/transformers/pull/22557", "diff_url": "https://github.com/huggingface/transformers/pull/22557.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22557.patch", "merged_at": 1680622183000 }
https://api.github.com/repos/huggingface/transformers/issues/22556
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22556/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22556/comments
https://api.github.com/repos/huggingface/transformers/issues/22556/events
https://github.com/huggingface/transformers/pull/22556
1,653,644,247
PR_kwDOCUB6oc5NkqN5
22,556
[`bnb`] Fix typo
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Fixes a small typo; the correct argument name is in fact `llm_int8_enable_fp32_cpu_offload`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22556/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22556", "html_url": "https://github.com/huggingface/transformers/pull/22556", "diff_url": "https://github.com/huggingface/transformers/pull/22556.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22556.patch", "merged_at": 1680614806000 }
https://api.github.com/repos/huggingface/transformers/issues/22555
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22555/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22555/comments
https://api.github.com/repos/huggingface/transformers/issues/22555/events
https://github.com/huggingface/transformers/issues/22555
1,653,553,617
I_kwDOCUB6oc5ijznR
22,555
get_class_from_dynamic_module may throw exception in multiple process
{ "login": "maofagui", "id": 9445799, "node_id": "MDQ6VXNlcjk0NDU3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/9445799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maofagui", "html_url": "https://github.com/maofagui", "followers_url": "https://api.github.com/users/maofagui/followers", "following_url": "https://api.github.com/users/maofagui/following{/other_user}", "gists_url": "https://api.github.com/users/maofagui/gists{/gist_id}", "starred_url": "https://api.github.com/users/maofagui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maofagui/subscriptions", "organizations_url": "https://api.github.com/users/maofagui/orgs", "repos_url": "https://api.github.com/users/maofagui/repos", "events_url": "https://api.github.com/users/maofagui/events{/privacy}", "received_events_url": "https://api.github.com/users/maofagui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I will try to fix this bug and push a commit soon.", "Awesome thanks for reporting this! ", "This will be fixed by #22537", "> This will be fixed by #22537\r\n\r\nOK~", "\r\nI met the same `AttributeError` issue. But the problem still exist after I update the master branch code with this fix https://github.com/huggingface/transformers/pull/22537. I'm wondering if that pr really fix this issue?\r\n\r\n<img width=\"1140\" alt=\"image\" src=\"https://user-images.githubusercontent.com/17028350/231663726-b80068a8-10f3-44d8-94aa-9a96102aed08.png\">\r\n\r\nSome information:\r\n\r\nI saved glm-10b model files on NFS storage, launch 8 process one node. Add debug code as blow:\r\n```python\r\n# /opt/conda/lib/python3.8/site-packages/transformers/dynamic_module_utils.py\r\ndef get_class_in_module(class_name, module_path):\r\n \"\"\"\r\n Import a module on the cache directory for modules and extract a class from it.\r\n \"\"\"\r\n module_path = module_path.replace(os.path.sep, \".\")\r\n module = importlib.import_module(module_path)\r\n try:\r\n return getattr(module, class_name)\r\n except:\r\n with open('/root/.cache/huggingface/modules/transformers_modules/glm-10b-chinese/configuration_glm.py', 'r') as f:\r\n print('print /root/.cache configuration_glm.py')\r\n print(f.read())\r\n raise\r\n```\r\n\r\nWhen the `AttributeError` exception happened, I found this code print `configuration_glm.py` file in `/root/.cache` is empty.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
### System Info - `transformers` version: 4.22.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no need - Using distributed or parallel set-up in script?: no need ### Who can help? @ArthurZucker @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction To simplify the problem, I design a simple code fragment to reproduce the bug: ``` import multiprocessing as mp def func(): from transformers import AutoTokenizer, AutoModel local_dir = "/Users/mfg/Code/huggingface/glm-10b" # change to your local dir tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True) model = AutoModel.from_pretrained(local_dir, trust_remote_code=True) print(tokenizer) if __name__ == '__main__': procs = [] for i in range(10): p = mp.Process(target=func) p.start() procs.append(p) for p in procs: p.join() print("done") ``` All files in dir "/Users/mfg/Code/huggingface/glm-10b" can be found in https://huggingface.co/THUDM/glm-10b/tree/main . 
(no need to download the large file [pytorch_model.bin](https://huggingface.co/THUDM/glm-10b/blob/main/pytorch_model.bin) for that the exception happens before loading model) After you run, you may meet the exception as below: ``` Process Process-6: Process Process-5: Traceback (most recent call last): File "/opt/miniconda3/envs/py38_torch/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/opt/miniconda3/envs/py38_torch/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Users/mfg/Code/torch_submit/mp_load_model/main.py", line 7, in func tokenizer = AutoTokenizer.from_pretrained("/Users/mfg/Code/huggingface/glm-10b", trust_remote_code=True) File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 594, in from_pretrained tokenizer_class = get_class_from_dynamic_module( File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 375, in get_class_from_dynamic_module return get_class_in_module(class_name, final_module.replace(".py", "")) File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 148, in get_class_in_module return getattr(module, class_name) AttributeError: module 'transformers_modules.local.tokenization_glm' has no attribute 'GLMChineseTokenizer' Traceback (most recent call last): copy /Users/mfg/Code/huggingface/glm-10b/tokenization_glm.py /Users/mfg/.cache/huggingface/modules/transformers_modules/local/tokenization_glm.py File "/opt/miniconda3/envs/py38_torch/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/opt/miniconda3/envs/py38_torch/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Users/mfg/Code/torch_submit/mp_load_model/main.py", line 7, in func tokenizer = 
AutoTokenizer.from_pretrained("/Users/mfg/Code/huggingface/glm-10b", trust_remote_code=True) File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 594, in from_pretrained tokenizer_class = get_class_from_dynamic_module( File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 375, in get_class_from_dynamic_module return get_class_in_module(class_name, final_module.replace(".py", "")) Process Process-10: File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 148, in get_class_in_module return getattr(module, class_name) Traceback (most recent call last): File "/opt/miniconda3/envs/py38_torch/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/opt/miniconda3/envs/py38_torch/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Users/mfg/Code/torch_submit/mp_load_model/main.py", line 7, in func tokenizer = AutoTokenizer.from_pretrained("/Users/mfg/Code/huggingface/glm-10b", trust_remote_code=True) File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 594, in from_pretrained tokenizer_class = get_class_from_dynamic_module( File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 375, in get_class_from_dynamic_module return get_class_in_module(class_name, final_module.replace(".py", "")) File "/opt/miniconda3/envs/py38_torch/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 148, in get_class_in_module return getattr(module, class_name) AttributeError: module 'transformers_modules.local.tokenization_glm' has no attribute 'GLMChineseTokenizer' AttributeError: module 'transformers_modules.local.tokenization_glm' has no attribute 'GLMChineseTokenizer' ``` In my opinion, the bug is 
caused by running `shutil.copy` and `importlib.import_module(module_path)` concurrently. ``` # lib/python3.8/site-packages/transformers/dynamic_module_utils.py #get_cached_module_file if submodule == "local": # We always copy local files (we could hash the file to see if there was a change, and give them the name of # that hash, to only copy when there is a modification but it seems overkill for now). # The only reason we do the copy is to avoid putting too many folders in sys.path. shutil.copy(resolved_module_file, submodule_path / module_file) for module_needed in modules_needed: module_needed = f"{module_needed}.py" shutil.copy(os.path.join(pretrained_model_name_or_path, module_needed), submodule_path / module_needed) ``` ``` # lib/python3.8/site-packages/transformers/dynamic_module_utils.py def get_class_in_module(class_name, module_path): """ Import a module on the cache directory for modules and extract a class from it. """ module_path = module_path.replace(os.path.sep, ".") module = importlib.import_module(module_path) return getattr(module, class_name) ``` Looking forward to your reply. Thanks a lot. ### Expected behavior no exception
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22555/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22554
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22554/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22554/comments
https://api.github.com/repos/huggingface/transformers/issues/22554/events
https://github.com/huggingface/transformers/pull/22554
1,653,546,635
PR_kwDOCUB6oc5NkU_k
22,554
Add `torch_dtype` attribute
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The `torch_dtype` in the config can't be used for that? ", "I don't think so because sometimes (and very often) you load `fp32` models from the Hub, and not all models on the Hub have the `torch_dtype` attribute", "Also there is this : \r\n```python \r\n @property\r\n def dtype(self) -> torch.dtype:\r\n \"\"\"\r\n `torch.dtype`: The dtype of the module (assuming that all the module parameters have the same dtype).\r\n \"\"\"\r\n return get_parameter_dtype(self)\r\n```\r\nbut maybe all the parameters do not have the same dtype? \r\n", "Ah that works! Thanks for the pointer! I should have digged further :D", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Disclaimer: Maybe there is a more canonical way to retrieve the `torch_dtype` of a loaded model! I propose to add the attribute `torch_dtype` inside `PreTrainedModel` so that it can be conveniently retrieved. Useful for example for `peft`, where I see this as one of the possible solutions to fix forward pass issues in half-precision for `PrefixTuning` models. To provide more context, the prefix tuning models feed the base model [new `past_key_values`](https://github.com/huggingface/peft/blob/main/src/peft/tuners/prefix_tuning.py#L103-L109). Those are computed by default in `float32` (and should always stay in `float32`). However, if the base model is in half-precision, the forward pass would fail (`dtype` mismatch errors). This PR would make retrieving the base model's `dtype` super easy, thus handling this error. cc @sgugger @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22554/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22554", "html_url": "https://github.com/huggingface/transformers/pull/22554", "diff_url": "https://github.com/huggingface/transformers/pull/22554.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22554.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22553
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22553/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22553/comments
https://api.github.com/repos/huggingface/transformers/issues/22553/events
https://github.com/huggingface/transformers/issues/22553
1,653,460,031
I_kwDOCUB6oc5ijcw_
22,553
ValueError in finetuning NLLB
{ "login": "molokanov50", "id": 85157008, "node_id": "MDQ6VXNlcjg1MTU3MDA4", "avatar_url": "https://avatars.githubusercontent.com/u/85157008?v=4", "gravatar_id": "", "url": "https://api.github.com/users/molokanov50", "html_url": "https://github.com/molokanov50", "followers_url": "https://api.github.com/users/molokanov50/followers", "following_url": "https://api.github.com/users/molokanov50/following{/other_user}", "gists_url": "https://api.github.com/users/molokanov50/gists{/gist_id}", "starred_url": "https://api.github.com/users/molokanov50/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/molokanov50/subscriptions", "organizations_url": "https://api.github.com/users/molokanov50/orgs", "repos_url": "https://api.github.com/users/molokanov50/repos", "events_url": "https://api.github.com/users/molokanov50/events{/privacy}", "received_events_url": "https://api.github.com/users/molokanov50/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should use the [forums](https://discuss.huggingface.co/) to help debug your training code.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "There still was no help from the forums. See https://discuss.huggingface.co/t/valueerror-in-finetuning-nllb/35533" ]
1,680
1,684
1,683
NONE
null
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.10.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It is surprising that there is still no example of finetuning any of the NLLB models (at least, the smallest one) in a huggingface transformers environment. So I have followed [this](https://huggingface.co/docs/transformers/tasks/translation) guide and adapted the code to my case, namely, `nllb-200-distilled-600M`. The custom train and eval datasets I want to finetune `nllb-200-distilled-600M` on consist of 2 entries each, see my attached code. Running this code gives me `ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds`. 
``` from transformers import AutoModelForSeq2SeqLM, NllbTokenizer, Seq2SeqTrainingArguments, Seq2SeqTrainer, DataCollatorForSeq2Seq from datasets import Dataset import numpy as np import evaluate trainPart = [] evalPart = [] def buildDataset(): trainPart.append({'id': 0, 'translation': { 'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.', 'ru': 'Но это высокое плато имело размер всего в несколько саженей, и вскоре мы снова оказались в своей стихии.'}}) trainPart.append({'id': 1, 'translation': { 'en': 'What awakened us was a sound which sent chills of fear down my spine: the howling of the monsters\' sirens, and the reverberations of distant explosions.', 'ru': 'Разбудили нас звуки, от которых у меня по спине побежали мурашки страха, - завывания сирен чудовищ и эхо отдаленных взрывов.'}}) evalPart.append({'id': 0, 'translation': { 'en': 'It could be coming from reverberations, deeper caverns caught in currents.', 'ru': 'Это, наверное, от ревербераций в глубинных полостях, вызванных течениями.'}}) evalPart.append({'id': 1, 'translation': { 'en': 'There’s a four to five second reverberation.', 'ru': 'Реверберация длится от четырех до пяти секунд.'}}) def postprocess_text(preds, labels): preds = [pred.strip() for pred in preds] labels = [[label.strip()] for label in labels] return preds, labels def run(): modelName = "nllb-200-distilled-600M" model = AutoModelForSeq2SeqLM.from_pretrained(modelName, use_auth_token=True) tokenizer = NllbTokenizer.from_pretrained( modelName, src_lang='eng_Latn', tgt_lang='rus_Cyrl' ) trainSet = Dataset.from_list(trainPart) evalSet = Dataset.from_list(evalPart) def preprocess_function(examples): inputs = [example['en'] for example in examples["translation"]] targets = [example['ru'] for example in examples["translation"]] model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True) return model_inputs def compute_metrics(eval_preds): preds, labels = eval_preds if 
isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) result = metric.compute(predictions=decoded_preds, references=decoded_labels) result = {"bleu": result["score"]} prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] result["gen_len"] = np.mean(prediction_lens) result = {k: round(v, 4) for k, v in result.items()} return result tokenized_trainset = trainSet.map(preprocess_function, batched=True) tokenized_evalset = evalSet.map(preprocess_function, batched=True) data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model) # or modelName? metric = evaluate.load("sacrebleu") training_args = Seq2SeqTrainingArguments( output_dir="test_ft", evaluation_strategy="epoch", learning_rate=2e-5, per_device_train_batch_size=1, per_device_eval_batch_size=1, weight_decay=0.01, save_total_limit=3, num_train_epochs=2, predict_with_generate=True, fp16=True, push_to_hub=False, ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_trainset, eval_dataset=tokenized_evalset, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) trainer.train() buildDataset() run() ``` ### Expected behavior A set of the finetuned model's files in my `output_dir`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22553/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22552
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22552/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22552/comments
https://api.github.com/repos/huggingface/transformers/issues/22552/events
https://github.com/huggingface/transformers/pull/22552
1,653,376,468
PR_kwDOCUB6oc5NjwkT
22,552
fix bug because of implicit attention_mask argument in generation
{ "login": "Soonhwan-Kwon", "id": 7395166, "node_id": "MDQ6VXNlcjczOTUxNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/7395166?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Soonhwan-Kwon", "html_url": "https://github.com/Soonhwan-Kwon", "followers_url": "https://api.github.com/users/Soonhwan-Kwon/followers", "following_url": "https://api.github.com/users/Soonhwan-Kwon/following{/other_user}", "gists_url": "https://api.github.com/users/Soonhwan-Kwon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Soonhwan-Kwon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Soonhwan-Kwon/subscriptions", "organizations_url": "https://api.github.com/users/Soonhwan-Kwon/orgs", "repos_url": "https://api.github.com/users/Soonhwan-Kwon/repos", "events_url": "https://api.github.com/users/Soonhwan-Kwon/events{/privacy}", "received_events_url": "https://api.github.com/users/Soonhwan-Kwon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey @Soonhwan-Kwon 👋 \r\n\r\nI am unable to reproduce the issue you describe (see script below). Can you share a reproducer for the exception you're seeing?\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\ntok = AutoTokenizer.from_pretrained(\"gpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\n\r\ninputs = tok([\"This cat is\"], return_tensors=\"pt\")\r\ngen_out = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, do_sample=True)\r\nprint(tok.decode(gen_out[0]))\r\n```", "> Hey @Soonhwan-Kwon 👋\r\n> \r\n> I am unable to reproduce the issue you describe (see script below). Can you share a reproducer for the exception you're seeing?\r\n> \r\n> ```python\r\n> from transformers import AutoModelForCausalLM, AutoTokenizer\r\n> \r\n> tok = AutoTokenizer.from_pretrained(\"gpt2\")\r\n> model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\n> \r\n> inputs = tok([\"This cat is\"], return_tensors=\"pt\")\r\n> gen_out = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, do_sample=True)\r\n> print(tok.decode(gen_out[0]))\r\n> ```\r\n\r\nSure, it occurs in GPT2LMHeadModel and below is the reproduction code.\r\n```\r\nfrom transformers import AutoModelWithLMHead, AutoTokenizer\r\ntok = AutoTokenizer.from_pretrained(\"gpt2\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"gpt2\")\r\n# GPT2LMHeadModel\r\ninputs = tok([\"This cat is\"], return_tensors=\"pt\")\r\ngen_out = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask, do_sample=True)\r\nprint(tok.decode(gen_out[0]))\r\n```", "@Soonhwan-Kwon I can't reproduce the issue with the script you shared. What version of transformers are you using?" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Fixes # (issue) gpt2 prepares its generation inputs with the attention mask in an implicit way, as shown below, and this conflicts with `self._validate_model_kwargs`, which checks for unused arguments and raises an error. ``` /usr/local/lib/python3.9/site-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) 116 117 return decorate_context /usr/local/lib/python3.9/site-packages/transformers/generation/utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, streamer, **kwargs) 1229 model_kwargs = generation_config.update(**kwargs) # All unused kwargs must be model kwargs 1230 generation_config.validate() -> 1231 self._validate_model_kwargs(model_kwargs.copy()) 1232 1233 # 2. Set generation parameters if not already defined /usr/local/lib/python3.9/site-packages/transformers/generation/utils.py in _validate_model_kwargs(self, model_kwargs) 1107 1108 if unused_model_args: -> 1109 raise ValueError( 1110 f"The following `model_kwargs` are not used by the model: {unused_model_args} (note: typos in the" 1111 " generate arguments will also show up in this list)" ValueError: The following `model_kwargs` are not used by the model: ['attention_mask'] (note: typos in the generate arguments will also show up in this list) ``` Below is gpt2 (line 1007 of https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py): ``` def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs): token_type_ids = kwargs.get("token_type_ids", None) # only last token for inputs_ids if past is defined in kwargs if past_key_values: input_ids = input_ids[:, -1].unsqueeze(-1) if token_type_ids is not None: token_type_ids = token_type_ids[:, -1].unsqueeze(-1) attention_mask = kwargs.get("attention_mask", None) position_ids = 
kwargs.get("position_ids", None) ``` ## Before submitting - [v] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [v] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [v] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [v] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22552/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22552", "html_url": "https://github.com/huggingface/transformers/pull/22552", "diff_url": "https://github.com/huggingface/transformers/pull/22552.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22552.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22551
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22551/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22551/comments
https://api.github.com/repos/huggingface/transformers/issues/22551/events
https://github.com/huggingface/transformers/pull/22551
1,653,267,788
PR_kwDOCUB6oc5NjZiJ
22,551
Extend Transformers Trainer Class to Enable XPU
{ "login": "mingxiaoh", "id": 31092310, "node_id": "MDQ6VXNlcjMxMDkyMzEw", "avatar_url": "https://avatars.githubusercontent.com/u/31092310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mingxiaoh", "html_url": "https://github.com/mingxiaoh", "followers_url": "https://api.github.com/users/mingxiaoh/followers", "following_url": "https://api.github.com/users/mingxiaoh/following{/other_user}", "gists_url": "https://api.github.com/users/mingxiaoh/gists{/gist_id}", "starred_url": "https://api.github.com/users/mingxiaoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mingxiaoh/subscriptions", "organizations_url": "https://api.github.com/users/mingxiaoh/orgs", "repos_url": "https://api.github.com/users/mingxiaoh/repos", "events_url": "https://api.github.com/users/mingxiaoh/events{/privacy}", "received_events_url": "https://api.github.com/users/mingxiaoh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22551). All of your documentation changes will be reflected on that endpoint.", "Thanks for your PR, but we are not going to rename core utils like this.", "> Thanks for your PR, but we are not going to rename core utils like this.\r\n\r\nthanks for the review, if so, I just rolled back the rename. ", "@sgugger May you please help review it again? thanks.", "Thanks for your PR. We can leave this branch as is for people who want to use XPUs but:\r\n1. We do not have the ability to test them, so cannot commit to maintain this yet.\r\n2. The Trainer will be rewritten to use Accelerate very soon, so the support should be added in Accelerate (I believe there is already a PR open) and then will come in the Trainer for free.", "> Thanks for your PR. We can leave this branch as is for people who want to use XPUs but:\r\n> \r\n> 1. We do not have the ability to test them, so cannot commit to maintain this yet.\r\n> 2. The Trainer will be rewritten to use Accelerate very soon, so the support should be added in Accelerate (I believe there is already a PR open) and then will come in the Trainer for free.\r\n\r\nYes, my colleague is working with me preparing the PR for accelerate now. But, we still would like to merge this change into transformer since intel-extension-for-pytorch cpu backend is already in transformer, and we expect user can use it directly even without accelerate.", "The Trainer will require a dependency on Accelerate in roughly a month, so that point is moot.", "@sgugger thanks for the info. May you please help explain how transformer depend on Accelerate? 
Will it look like below?\r\nfirst, use accelerate to wrap model,optimizer, data\r\n model, optimizer, data = accelerator.prepare(model, optimizer, input)\r\nsecond, pass the model/optimizer/data to Trainer of transformer \r\n Trainer(model,args, ...)\r\nBesides, accelerate seems only cover training path, how about inference path when Trainer has dependency on Accelerate?\r\n\r\n", "@sgugger sorry for being pushy, but your reply is quite important to us, thanks in advance.", "@sgugger we are investigating how to provide solution for xpu now, another PR is in https://github.com/huggingface/accelerate/pull/1118, detailed info about how transformer depend on accelerate is important to us, may you please take time to explain it to us? Thanks in advance.", "@mingxiaoh I would appreciate that you stop pinging me on this PR repeatedly. I said migrating the Trainer to use Accelerate for all the boilerplate code is a work in progress that will take roughly a month, so I would appreciate your patience on this and let us do the actual work.\r\nOnce the migration is done and the PR on the Accelerate side is merged, there will be no need for this PR or any other kind of PR, XPU will just work out-of-the-box with the Trainer.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "hello, @pacman100 \r\nMay you please take some time to help explain this issue? Thanks in advance.\r\nI found that transformers Trainer is using Accelerate now, is it the final solution? 
Why I ask so is because I found that, currently, if a model is wrapped by transformers trainer(e.g., IPEX CPU), it won't use accelerate to prepare model(see https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1734-L1737), which means, for IPEX cpu, it won't use accelerate to prepare model, for IPEX xpu, it will use accelerate to prepare model, this might make users confused. Besides, for accelerate, it could not tell whether it is inference mode or training mode, currently, in accelerate, IPEX xpu always assumes it is training mode and wrap model into training mode, in this case, we could not run transformers in inference mode for IPEX xpu, any comments for this issue?\r\n", "@pacman100 my colleague told me that you are the owner of integration of trainer & accelerate, if it is not too trouble, may you please take time to help explain it a little bit for us? Thanks.", "Hello @mingxiaoh, we are still in the process of migration. The next steps would involve shifting to Accelerate for ipex and adding the functionality in Accelerate if it isn't available there yet. Gentle ping @muellerzr who will be looking into this. ", "@pacman100 hello, may I know the process of migration? Intel would like to extend it to xpu asap for customers' usage. Thanks in advance.", "@mingxiaoh XPU support is currently happening in Accelerate due to efforts of @abhilash1910 and @sywangyi. It was in the last Accelerate release, so from here we need to look at the specific DDP efforts the Trainer has that need to be handled with XPU specific support that *isn't* already done via the Accelerator object. (aka add an arg to configure the `state` properly if needed, otherwise most of the trainer should work OOTB with the XPU). (v 0.20.0). 
Related PR here: https://github.com/huggingface/accelerate/pull/1118\r\n\r\nAs the Trainer is now using Accelerate for all of it's device/compute configuration and specialized code", "@muellerzr Thanks, I know https://github.com/huggingface/accelerate/pull/1118 and xpu is supported in accelerate, we were working together for it before. But the issue is, we would like to know how the process of migration, in accelerate currently, inference is not supported on xpu, but for transformers, we must consider inference case. So, may I know the process of migration? we were told there is some problem about one month ago." ]
1,680
1,688
1,684
NONE
null
Rename GPU utils to CUDA; add XPU backend; doc on XPU backend
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22551/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22551", "html_url": "https://github.com/huggingface/transformers/pull/22551", "diff_url": "https://github.com/huggingface/transformers/pull/22551.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22551.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22550
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22550/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22550/comments
https://api.github.com/repos/huggingface/transformers/issues/22550/events
https://github.com/huggingface/transformers/issues/22550
1,653,242,036
I_kwDOCUB6oc5iini0
22,550
OverflowError with device="mps" using dedicated GPU
{ "login": "cmdrf", "id": 12779694, "node_id": "MDQ6VXNlcjEyNzc5Njk0", "avatar_url": "https://avatars.githubusercontent.com/u/12779694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cmdrf", "html_url": "https://github.com/cmdrf", "followers_url": "https://api.github.com/users/cmdrf/followers", "following_url": "https://api.github.com/users/cmdrf/following{/other_user}", "gists_url": "https://api.github.com/users/cmdrf/gists{/gist_id}", "starred_url": "https://api.github.com/users/cmdrf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cmdrf/subscriptions", "organizations_url": "https://api.github.com/users/cmdrf/orgs", "repos_url": "https://api.github.com/users/cmdrf/repos", "events_url": "https://api.github.com/users/cmdrf/events{/privacy}", "received_events_url": "https://api.github.com/users/cmdrf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This looks similar to #22529 and this is not a bug in Transformers but in PyTorch, so you will have to wait for them to release a fix.", "Thanks for the quick answer!\r\n\r\nNot holding my breath for a fix though. It's one out of 10K+ open issues in pytorch...", "> Thanks for the quick answer!\r\n> \r\n> Not holding my breath for a fix though. It's one out of 10K+ open issues in pytorch...\r\n\r\nYeah that's the same issue. It just got marked high priority a few minutes ago so they're definitely looking at it.\r\n\r\nIn the meantime you can get it working if you make some manual fixes to your local copy of transformers. Not pretty, but it works. \r\n\r\nIn brief, I worked around it locally by searching `<python-install>/lib/python3.X/site-packages/transformers` for all references to `argmax`, and changing all relevant references such that `X.argmax(...)` is changed to `X.max(...).indices`. I think I changed it in 5 or 6 files total. Which references are relevant will depend on what you're doing. There's a ton of references under `models/` but you'd only need to change the ones you might actually need. I'm currently only looking at Llama models and there were no calls to `argmax` under `models/llama` so I didn't change any files under `models/`.\r\n\r\nIf you want to try that I can send you a list of files I had to changed, relative to `4.28.0.dev0`\r\n\r\nThen you'd also need check your client code to see if it's making any of its own calls to `torch.argmax`, and change those too.\r\n\r\nFinally, if you're using an Intel system with AMD GPU, then due to separate issue https://github.com/pytorch/pytorch/issues/92752 you also need to check for calls to `torch.multinomial` and rewrite those. There weren't any in transformers that affected me, but there was one in the client code I was using. I described how I changed that here: https://github.com/jankais3r/LLaMA_MPS/issues/14#issuecomment-1494959026 . 
Apparently Silicon systems aren't affected by this bug.\r\n\r\nIt's a bit of a mess at the moment due to those MPS bugs - but it is possible to get it working if you're willing to hack transformers and check your client code.", "> It just got marked high priority a few minutes ago so they're definitely looking at it.\r\n\r\nI pinged the PyTorch team on it ;-)", "Much appreciated!", "Actually running LLaMa was my goal, I was just trying something simpler first.\r\n\r\nNow I tried LLaMa using the following:\r\n```python\r\nfrom transformers import AutoTokenizer, LlamaForCausalLM, pipeline\r\n\r\nmodel = LlamaForCausalLM.from_pretrained(\"/path/to/models/llama-7b/\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"/path/to/models/llama-7b/\")\r\npipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, device=\"mps\")\r\npipe(\"In this course, we will teach you how to\")\r\n```\r\n\r\nResult:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/text_generation.py\", line 209, in __call__\r\n return super().__call__(text_inputs, **kwargs)\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1109, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1117, in run_single\r\n outputs = self.postprocess(model_outputs, **postprocess_params)\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/transformers/pipelines/text_generation.py\", line 270, in postprocess\r\n text = self.tokenizer.decode(\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/transformers/tokenization_utils_base.py\", line 3485, in 
decode\r\n return self._decode(\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/transformers/tokenization_utils.py\", line 931, in _decode\r\n filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/transformers/tokenization_utils.py\", line 912, in convert_ids_to_tokens\r\n tokens.append(self._convert_id_to_token(index))\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama.py\", line 119, in _convert_id_to_token\r\n token = self.sp_model.IdToPiece(index)\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/sentencepiece/__init__.py\", line 1045, in _batched_func\r\n return _func(self, arg)\r\n File \"/usr/local/Caskroom/miniconda/base/envs/textgen/lib/python3.10/site-packages/sentencepiece/__init__.py\", line 1038, in _func\r\n raise IndexError('piece id is out of range.')\r\nIndexError: piece id is out of range.\r\n```\r\n Which sounds like \"minus nine trillion something\" indices happening somewhere again. I didn't find \"multinomial\" or \"argmax\" under models/llama, but it's possible of course that those functions are called somewhere else.", "> Which sounds like \"minus nine trillion something\" indices happening somewhere again. I didn't find \"multinomial\" or \"argmax\" under models/llama, but it's possible of course that those functions are called somewhere else.\r\n\r\nYes, it is not referenced anywhere under `models/llama` but is referenced multiple other places throughout `transformers`. 
In my earlier reply I described the process I followed to change those.\r\n\r\nThat test code works for me with my locally hacked copy of `transformers`.\r\n\r\nCode:\r\n```python\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM, pipeline\r\n\r\nmodel = LlamaForCausalLM.from_pretrained(\"/Users/tomj/src/llama.cpp/models/llama-7b-HF\")\r\ntokenizer = LlamaTokenizer.from_pretrained(\"/Users/tomj/src/llama.cpp/models/llama-7b-HF\")\r\npipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, device=\"mps\")\r\nprint(pipe(\"In this course, we will teach you how to\"))\r\n```\r\n\r\nOutput:\r\n```\r\ntomj@Eddie ~/src $ ~/anaconda3/envs/torch21/bin/python ./test_llama.py\r\nLoading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:20<00:00, 1.61it/s]\r\nThe tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.\r\nThe tokenizer class you load from this checkpoint is 'LLaMATokenizer'.\r\nThe class this function is called from is 'LlamaTokenizer'.\r\n/Users/tomj/anaconda3/envs/torch21/lib/python3.10/site-packages/transformers/generation/utils.py:1219: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)\r\n warnings.warn(\r\n/Users/tomj/anaconda3/envs/torch21/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. 
This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\r\n warnings.warn(\r\n[{'generated_text': 'In this course, we will teach you how to use the most popular and powerful tools in the industry'}]\r\n```", "Same error with torch nightly version: 2.1.0.dev20230428 and\r\n 'MPS' on a 2020 iMac 27\" with an AMD Radeon 5700 XT gpu in \r\n\r\nhttps://github.com/andreamad8/FSB", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,685
1,685
NONE
null
### System Info - 2019 Mac Pro - AMD Radeon Pro W5700X 16 GB - macOS Ventura 13.3 `transformers-cli env`: - `transformers` version: 4.27.4 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.1.0.dev20230403 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Shell: ``` conda create -n transformerstest conda activate transformerstest conda install -c huggingface transformers conda install pytorch torchvision torchaudio -c pytorch-nightly ``` Python: ``` from transformers import pipeline generator = pipeline("text-generation", device="mps") generator("In this course, we will teach you how to") ``` The system is then compiling Metal shaders and doing something on the GPU, but the result is: ``` Traceback (most recent call last): File "/Users/fabian/devel/transformers-course/test.py", line 4, in <module> generator("In this course, we will teach you how to") File "/usr/local/Caskroom/miniconda/base/envs/transformerstest/lib/python3.9/site-packages/transformers/pipelines/text_generation.py", line 209, in __call__ return super().__call__(text_inputs, **kwargs) File "/usr/local/Caskroom/miniconda/base/envs/transformerstest/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1109, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/usr/local/Caskroom/miniconda/base/envs/transformerstest/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1117, in 
run_single outputs = self.postprocess(model_outputs, **postprocess_params) File "/usr/local/Caskroom/miniconda/base/envs/transformerstest/lib/python3.9/site-packages/transformers/pipelines/text_generation.py", line 270, in postprocess text = self.tokenizer.decode( File "/usr/local/Caskroom/miniconda/base/envs/transformerstest/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 3476, in decode return self._decode( File "/usr/local/Caskroom/miniconda/base/envs/transformerstest/lib/python3.9/site-packages/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) OverflowError: out of range integral type conversion attempted ``` ### Expected behavior Generating output. This works on a MacBook Pro M1 with `device="mps"` (utilizing the GPU AFAICT) or on the Mac Pro without it (not utilizing GPU). Thanks for your support!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22550/timeline
completed
null
null
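The MPS workaround described in the comments of the record above (replacing `X.argmax(...)` with `X.max(...).indices` to dodge the buggy MPS `argmax` kernel) can be sketched as follows. This is a minimal illustration with made-up tensor values; the actual fix required editing several call sites inside a local copy of transformers.

```python
import torch

# Illustrative logits; on an affected MPS device, `argmax` on such a tensor
# could return out-of-range indices ("minus nine trillion something").
logits = torch.tensor([[0.1, 2.5, 0.3], [1.7, 0.2, 0.9]])

# Original call that misbehaves on affected MPS backends:
# next_tokens = logits.argmax(dim=-1)

# Equivalent replacement suggested in the thread: torch.max with a dim
# argument returns a (values, indices) namedtuple, so .indices stands in
# for argmax without invoking the buggy kernel.
next_tokens = logits.max(dim=-1).indices
print(next_tokens.tolist())  # [1, 0]
```

On CPU both calls agree, which is what makes the substitution safe to apply mechanically across the affected files.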
https://api.github.com/repos/huggingface/transformers/issues/22549
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22549/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22549/comments
https://api.github.com/repos/huggingface/transformers/issues/22549/events
https://github.com/huggingface/transformers/pull/22549
1,653,129,955
PR_kwDOCUB6oc5Ni77u
22,549
[i18n-KO] fix: docs: ko: sagemaker anchors and `_toctree.yml`
{ "login": "jungnerd", "id": 46880056, "node_id": "MDQ6VXNlcjQ2ODgwMDU2", "avatar_url": "https://avatars.githubusercontent.com/u/46880056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungnerd", "html_url": "https://github.com/jungnerd", "followers_url": "https://api.github.com/users/jungnerd/followers", "following_url": "https://api.github.com/users/jungnerd/following{/other_user}", "gists_url": "https://api.github.com/users/jungnerd/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungnerd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungnerd/subscriptions", "organizations_url": "https://api.github.com/users/jungnerd/orgs", "repos_url": "https://api.github.com/users/jungnerd/repos", "events_url": "https://api.github.com/users/jungnerd/events{/privacy}", "received_events_url": "https://api.github.com/users/jungnerd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<img src=\"https://user-images.githubusercontent.com/29195190/229707009-e1fca054-e464-4b98-af53-809e4c0778db.jpg\" width=\"200px\">\r\n\r\nWe are getting a 500 error (probably due to spacing issues in `_toctree.yml`). Please do not merge until the problem is resolved.", "> I think the error in the doc preview comes from bad yaml syntax in the modified toctree.\r\n\r\nYes, I agree @sgugger . There were two titles for a section on L13-19. Editing `_toctree.yml` like so fixed the issue for me locally.\r\n![image](https://user-images.githubusercontent.com/29195190/229802966-94eabdb3-0fff-4453-a2ca-caf188dee1f9.png)\r\n", "Also when squashing the commits, please fix the typo on `Co-auth*e*red-by` to `Co-auth*o*red-by` and add arrow brackets (`<>`) to the email.\r\nYou may also use Github Desktop to ease the process. Thank you for your PR @jungnerd and feel free to ask me any questions.", "Great work, @jungnerd !\nYou solved the issue; now please squash the commits into one. You can use\n- chat-gpt for the commit message and \n- Github Desktop to add co-authors\n\nif you want. :raised_hands: Good night!", "@jungnerd we can remove the `_toctree.yml` change completely as we updated it in the upstream `ko: complete toctree` commit. After that and rebasing, this PR should be good to go! Let's try to do this on Thursday.", "May you please review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,680
1,681
1,681
CONTRIBUTOR
null
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com> # What does this PR do? I fixed the anchors and `_toctree.yml` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Part of #20179 (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> Please review this PR: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22549/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22549/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22549", "html_url": "https://github.com/huggingface/transformers/pull/22549", "diff_url": "https://github.com/huggingface/transformers/pull/22549.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22549.patch", "merged_at": 1681731713000 }
https://api.github.com/repos/huggingface/transformers/issues/22548
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22548/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22548/comments
https://api.github.com/repos/huggingface/transformers/issues/22548/events
https://github.com/huggingface/transformers/issues/22548
1,653,110,466
I_kwDOCUB6oc5iiHbC
22,548
[HOW TO FINETUNE CLIP OPENAI LAION2B MODELS FOR IMAGE CLASSIFICATION]
{ "login": "lamnt2008", "id": 124332581, "node_id": "U_kgDOB2kqJQ", "avatar_url": "https://avatars.githubusercontent.com/u/124332581?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lamnt2008", "html_url": "https://github.com/lamnt2008", "followers_url": "https://api.github.com/users/lamnt2008/followers", "following_url": "https://api.github.com/users/lamnt2008/following{/other_user}", "gists_url": "https://api.github.com/users/lamnt2008/gists{/gist_id}", "starred_url": "https://api.github.com/users/lamnt2008/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lamnt2008/subscriptions", "organizations_url": "https://api.github.com/users/lamnt2008/orgs", "repos_url": "https://api.github.com/users/lamnt2008/repos", "events_url": "https://api.github.com/users/lamnt2008/events{/privacy}", "received_events_url": "https://api.github.com/users/lamnt2008/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "There is no need to yell at us in all caps. No one can do anything without seeing the code you run." ]
1,680
1,680
null
NONE
null
I am trying to finetune a CLIP model using the pretrained checkpoint: https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K But I hit a bug: ![image](https://user-images.githubusercontent.com/124332581/229679584-34db65ad-5a43-423e-bcd9-c54902fe7d6b.png) Help me! Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22548/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22548/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22547
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22547/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22547/comments
https://api.github.com/repos/huggingface/transformers/issues/22547/events
https://github.com/huggingface/transformers/issues/22547
1,653,083,495
I_kwDOCUB6oc5iiA1n
22,547
Compared OneFormer in transformers with the original GitHub code; it works badly. Why?
{ "login": "onefish51", "id": 21029719, "node_id": "MDQ6VXNlcjIxMDI5NzE5", "avatar_url": "https://avatars.githubusercontent.com/u/21029719?v=4", "gravatar_id": "", "url": "https://api.github.com/users/onefish51", "html_url": "https://github.com/onefish51", "followers_url": "https://api.github.com/users/onefish51/followers", "following_url": "https://api.github.com/users/onefish51/following{/other_user}", "gists_url": "https://api.github.com/users/onefish51/gists{/gist_id}", "starred_url": "https://api.github.com/users/onefish51/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/onefish51/subscriptions", "organizations_url": "https://api.github.com/users/onefish51/orgs", "repos_url": "https://api.github.com/users/onefish51/repos", "events_url": "https://api.github.com/users/onefish51/events{/privacy}", "received_events_url": "https://api.github.com/users/onefish51/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @alaradirik and @amyeroberts ", "Hi @onefish51, thanks for reporting this issue! It's definitely not the desired behaviour, and I'm digging into it now. \r\n\r\nAs a first pass, it seems this is related to specific checkpoints and inputs. For example, for the checkpoint you provided, `shi-labs/oneformer_coco_dinat_large` I also see poor segmentation for the example image: \r\n![image](https://user-images.githubusercontent.com/22614925/230117033-ecbb3585-be95-4146-a13e-bc6720d83fa1.png)\r\n\r\nBut good segmentation for a different input image:\r\n![image](https://user-images.githubusercontent.com/22614925/230115145-c1269b8d-d97a-40db-8801-b44a8456904e.png)\r\n\r\nAnd a different oneformer checkpoint `shi-labs/oneformer_coco_swin_large` outputs a reasonable segmentation map:\r\n![image](https://user-images.githubusercontent.com/22614925/230115821-3980230e-471a-4335-b983-784fe1084055.png)\r\n\r\nThis indicates to me that the differences are more likely to be coming from the model outputs than the pre/post processing steps, TBC.\r\n\r\nTo help narrow down the effects, could you share some more information about the environment you're using (run `transformers-cli env` to get the info) and the device the model is being run on e.g. `\"cpu\"`?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "cc @praeclarumjj3", "> Hi @onefish51, thanks for reporting this issue! It's definitely not the desired behaviour, and I'm digging into it now.\r\n> \r\n> As a first pass, it seems this is related to specific checkpoints and inputs. 
For example, for the checkpoint you provided, `shi-labs/oneformer_coco_dinat_large` I also see poor segmentation for the example image: ![image](https://user-images.githubusercontent.com/22614925/230117033-ecbb3585-be95-4146-a13e-bc6720d83fa1.png)\r\n> \r\n> But good segmentation for a different input image: ![image](https://user-images.githubusercontent.com/22614925/230115145-c1269b8d-d97a-40db-8801-b44a8456904e.png)\r\n> \r\n> And a different oneformer checkpoint `shi-labs/oneformer_coco_swin_large` outputs a reasonable segmentation map: ![image](https://user-images.githubusercontent.com/22614925/230115821-3980230e-471a-4335-b983-784fe1084055.png)\r\n> \r\n> This indicates to me that the differences are more likely to be coming from the model outputs than the pre/post processing steps, TBC.\r\n> \r\n> To help narrow down the effects, could you share some more information about the environment you're using (run `transformers-cli env` to get the info) and the device the model is being run on e.g. `\"cpu\"`?\r\n\r\nhi @amyeroberts thanks for reporting ! 
I don't think it's related to the environment; I ran it on a V100 GPU.\r\n```sh\r\n$ transformers-cli env\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.28.0.dev0\r\n- Platform: Linux-5.4.0-146-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.13.3\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 1.10.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```\r\nand you said \r\n>the differences are more likely to be coming from the model outputs than the pre/post processing steps, TBC.\r\n\r\nyou may be right", "I will create a Colab to compare them if needed.", "@onefish51 it's also worth mentioning that OneFormer was added by the authors themselves. I'd recommend comparing the output logits of models, which should be the same.", "Hi, @onefish51, thanks for your interest in OneFormer and bringing up this issue. I am sorry for not replying earlier. I am currently occupied with a few other things.\r\n\r\nThis strange case could be image specific for the DiNAT-L OneFormer checkpoint. As @alaradirik suggested, comparing the HF model outputs **for this image** to the model outputs using the official GitHub repo could provide some hints. The tests suggest they should be the same, but it's good to look.\r\n\r\nI'll do this myself once I get some time. Please let us know if you compare these on your end.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Any update on this? @onefish51 \r\nThe GH and Huggingface examples both use the same checkpoint, so the output should have been the same, right? " ]
1,680
1,691
1,686
NONE
null
### System Info transformers == 4.26.0 Python == 3.8.8 ### Who can help? @praeclarumjj3 @NielsRogge ### Reproduction Thanks for your great work! I compared OneFormer in transformers with the [original GitHub code](https://github.com/SHI-Labs/OneFormer); it sometimes performs poorly. OneFormer by transformers: ``` import torch from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation from collections import defaultdict import matplotlib.pyplot as plt from matplotlib import cm import matplotlib.patches as mpatches device = "cuda" if torch.cuda.is_available() else "cpu" def draw_panoptic_segmentation(segmentation, segments_info): segmentation = segmentation.to("cpu") # get the used color map viridis = cm.get_cmap('viridis', torch.max(segmentation)) fig, ax = plt.subplots() ax.imshow(segmentation) instances_counter = defaultdict(int) handles = [] # for each segment, draw its legend for segment in segments_info: segment_id = segment['id'] segment_label_id = segment['label_id'] segment_label = model.config.id2label[segment_label_id] label = f"{segment_label}-{instances_counter[segment_label_id]}" instances_counter[segment_label_id] += 1 color = viridis(segment_id) handles.append(mpatches.Patch(color=color, label=label)) ax.legend(handles=handles) plt.savefig('./panoptic.png') processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_coco_dinat_large") model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_coco_dinat_large").to(device) image = original_image.resize((512, 512)) inputs = processor(image, ["panoptic"], return_tensors="pt").to(device) with torch.no_grad(): outputs = model(**inputs) # pass the outputs to the processor for panoptic postprocessing panoptic_segmentation = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] print(panoptic_segmentation.keys()) draw_panoptic_segmentation(**panoptic_segmentation) ``` original GitHub code: ``` import cv2 import numpy as np import torch from detectron2.config import get_cfg from detectron2.projects.deeplab import
add_deeplab_config # from detectron2.data import MetadataCatalog from oneformer import ( add_oneformer_config, add_common_config, # add_swin_config, add_dinat_config, ) from demo.defaults import DefaultPredictor def setup_cfg(): # load config from file and command-line arguments cfg = get_cfg() add_deeplab_config(cfg) add_common_config(cfg) # add_swin_config(cfg) add_oneformer_config(cfg) add_dinat_config(cfg) cfg_path = "OneFormer/configs/coco/oneformer_dinat_large_bs16_100ep.yaml" cfg.merge_from_file(cfg_path) if torch.cuda.is_available(): cfg.MODEL.DEVICE = 'cuda' else: cfg.MODEL.DEVICE = 'cpu' # cfg.MODEL.WEIGHTS = hf_hub_download(repo_id="shi-labs/oneformer_coco_dinat_large", # filename="150_16_dinat_l_oneformer_coco_100ep.pth", local_dir=local_dir) cfg.MODEL.WEIGHTS = 'OneFormer/oneformer_coco_dinat_large/150_16_dinat_l_oneformer_coco_100ep.pth' cfg.freeze() return cfg predictor = DefaultPredictor(setup_cfg()) img = cv2.resize(img.astype(np.uint8), (512, 512), interpolation=cv2.INTER_AREA) predictions = predictor(img, "panoptic") panoptic_seg, segments_info = predictions["panoptic_seg"] ``` ### Expected behavior input image: ![13201678780389_ pic](https://user-images.githubusercontent.com/21029719/229674653-c03cb9c3-240c-458c-904a-f9c3a5d6e71c.jpg) OneFormer by transformers output: ![panoptic](https://user-images.githubusercontent.com/21029719/229674824-18bf9b5b-412d-4fd6-82cb-cac8c058a178.png) OneFormer by original GitHub output: ![out](https://user-images.githubusercontent.com/21029719/229674928-f9f59080-d47d-4b4e-b97a-58d74faf1269.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22547/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22546
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22546/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22546/comments
https://api.github.com/repos/huggingface/transformers/issues/22546/events
https://github.com/huggingface/transformers/issues/22546
1,652,706,809
I_kwDOCUB6oc5igk35
22,546
RuntimeError: CUDA error: device-side assert triggered when running Llama on multiple gpus
{ "login": "TerryCM", "id": 33166112, "node_id": "MDQ6VXNlcjMzMTY2MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/33166112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TerryCM", "html_url": "https://github.com/TerryCM", "followers_url": "https://api.github.com/users/TerryCM/followers", "following_url": "https://api.github.com/users/TerryCM/following{/other_user}", "gists_url": "https://api.github.com/users/TerryCM/gists{/gist_id}", "starred_url": "https://api.github.com/users/TerryCM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TerryCM/subscriptions", "organizations_url": "https://api.github.com/users/TerryCM/orgs", "repos_url": "https://api.github.com/users/TerryCM/repos", "events_url": "https://api.github.com/users/TerryCM/events{/privacy}", "received_events_url": "https://api.github.com/users/TerryCM/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This repo is not in sync with the model and tokenizer as implemented in the Transformers library. Sadly, we do not have permission to distribute the weights, so there is no official checkpoint you can use. After you get the official weights from Meta and run the conversion command as documented, you shouldn't have any problem with the model.", "@sgugger I'm experiencing exactly the same error when using official llama weights converted using the huggingface conversion script from the master branch. It happens on the master branch when running inference with accelerate on multiple GPUs (I tried 2x4090 and 4x4090). To reproduce:\r\n\r\n```\r\np=pipeline(\"text-generation\", \"path/to/converted-llama-30b-hf\", torch_dtype=torch.float16, device_map=\"auto\")\r\np(\"hi there\")\r\n```\r\n\r\nThis used to work last week. I don't have the exact branch commit ID, but could do git bisect if it'd help.\r\n\r\nI'm using pytorch==2.0.0, cuda 11.7, and recent versions of accelearte and bitsandbytes (yes, it also shows the same error with load_in_8bits=True).\r\n", "@emvw7yf Could you print `pipeline.model.hf_device_map` and report that here? 
This would help us debug this issue.", "@sgugger I managed to use the official llama weights and still getting the same error, for llama 7B, using the code from https://github.com/huggingface/transformers/issues/22546#issuecomment-1496891148, and printing `pipeline.model.hf_device_map` I'm getting the following:\r\n\r\n{'model.embed_tokens': 0, 'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4': 0, 'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 0, 'model.layers.9': 0, 'model.layers.10': 0, 'model.layers.11': 0, 'model.layers.12': 0, 'model.layers.13': 0, 'model.layers.14': 0, 'model.layers.15': 0, 'model.layers.16': 1, 'model.layers.17': 1, 'model.layers.18': 1, 'model.layers.19': 1, 'model.layers.20': 1, 'model.layers.21': 1, 'model.layers.22': 1, 'model.layers.23': 1, 'model.layers.24': 1, 'model.layers.25': 1, 'model.layers.26': 1, 'model.layers.27': 1, 'model.layers.28': 1, 'model.layers.29': 1, 'model.layers.30': 1, 'model.layers.31': 1, 'model.norm': 1, 'lm_head': 1}", "So I skimmed through the existing repos to look for one that has the same weights/tokenizer as what I get after the conversion script is applied. Applying this code:\r\n```\r\np=pipeline(\"text-generation\", \"huggyllama/llama-7b\", torch_dtype=torch.float16, device_map=\"auto\")\r\np(\"hi there\")\r\n```\r\ngives me the exact same device map as you @TerryCM and works without any issue. I am on Transformers main and Accelerate latest version.", "@sgugger I'm also on Transformers main and accelerate version (I used pip install accelerate), could be this a drivers problem? Im using the following drivers\r\n![Screenshot 2023-04-05 at 10 43 27 AM](https://user-images.githubusercontent.com/33166112/230161057-20d0dfbd-957a-4ed1-be9c-ff4a4fb90eda.png)\r\n", "Are you using the same repository as me? I'm on CUDA 11.8 and 520 drivers.", "I can reliably reproduce it on both runpod.io and vast.ai. 
I'm using 2x4090 GPUs and the default docker image on each service (runpod/pytorch:3.10-2.0.0-117 and pytorch/pytorch:2.0.0-cuda11.7-cudnn8-devel).\r\n\r\nI'm running the following:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers.git accelerate sentencepiece\r\n\r\nimport torch\r\nfrom transformers import pipeline\r\n\r\np=pipeline(\"text-generation\", \"huggyllama/llama-7b\", torch_dtype=torch.float16, device_map=\"auto\")\r\np(\"hi there\")\r\n```\r\n\r\nThis results in the assertion error above. When I restrict it to a single GPU (using CUDA_VISIBLE_DEVICES), it works without errors.\r\n\r\nVersions (taken on vast.ai):\r\n\r\n```\r\nroot@C.6113089:/$ nvidia-smi\r\nWed Apr 5 23:57:43 2023 \r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 525.78.01 Driver Version: 525.78.01 CUDA Version: 12.0 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 NVIDIA GeForce ... On | 00000000:41:00.0 Off | Off |\r\n| 0% 21C P8 18W / 450W | 1MiB / 24564MiB | 0% Default |\r\n| | | N/A |\r\n+-------------------------------+----------------------+----------------------+\r\n| 1 NVIDIA GeForce ... 
On | 00000000:42:00.0 Off | Off |\r\n| 0% 21C P8 24W / 450W | 1MiB / 24564MiB | 0% Default |\r\n| | | N/A |\r\n+-------------------------------+----------------------+----------------------+\r\n \r\n+-----------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=============================================================================|\r\n| No running processes found |\r\n+-----------------------------------------------------------------------------+\r\nroot@C.6113089:/$ nvcc --version\r\nnvcc: NVIDIA (R) Cuda compiler driver\r\nCopyright (c) 2005-2022 NVIDIA Corporation\r\nBuilt on Wed_Jun__8_16:49:14_PDT_2022\r\nCuda compilation tools, release 11.7, V11.7.99\r\nBuild cuda_11.7.r11.7/compiler.31442593_0\r\nroot@C.6113089:/$ python -c \"import torch; print(torch.version.cuda)\"\r\n11.7\r\n```\r\n\r\nHow could I help debugging this?", "I actually realized that the error I'm getting is slightly different (even though the assertion is the same), pasting it below:\r\n\r\n```\r\n/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. 
This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\r\n warnings.warn(\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [64,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [65,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [66,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [67,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [68,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [69,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [70,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [71,0,0] Assertion 
`idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n(the same `idx_dim >= 0 && idx_dim < index_size` assertion repeats for every remaining thread)
bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [116,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [117,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [118,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [119,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [120,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [121,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [122,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [123,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` 
failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [124,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [125,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [126,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [127,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [32,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [33,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [34,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [35,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: 
operator(): block: [0,0,0], thread: [36,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [37,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [38,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [39,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [40,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [41,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [42,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [43,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [44,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` 
failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [45,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [46,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [47,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [48,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [49,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [50,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [51,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [52,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): 
block: [0,0,0], thread: [53,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [54,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [55,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [56,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [57,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [58,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [59,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [60,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [61,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` 
failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [62,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n/opt/conda/conda-bld/pytorch_1678411187366/work/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [63,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && \"index out of bounds\"` failed.\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[2], line 5\r\n 2 from transformers import pipeline\r\n 4 p=pipeline(\"text-generation\", \"huggyllama/llama-7b\", torch_dtype=torch.float16, device_map=\"auto\")\r\n----> 5 p(\"hi there\")\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/pipelines/text_generation.py:209, in TextGenerationPipeline.__call__(self, text_inputs, **kwargs)\r\n 168 def __call__(self, text_inputs, **kwargs):\r\n 169 \"\"\"\r\n 170 Complete the prompt(s) given as inputs.\r\n 171 \r\n (...)\r\n 207 ids of the generated text.\r\n 208 \"\"\"\r\n--> 209 return super().__call__(text_inputs, **kwargs)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py:1109, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)\r\n 1101 return next(\r\n 1102 iter(\r\n 1103 self.get_iterator(\r\n (...)\r\n 1106 )\r\n 1107 )\r\n 1108 else:\r\n-> 1109 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py:1116, in Pipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)\r\n 1114 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):\r\n 1115 model_inputs = self.preprocess(inputs, **preprocess_params)\r\n-> 1116 model_outputs = self.forward(model_inputs, 
**forward_params)\r\n 1117 outputs = self.postprocess(model_outputs, **postprocess_params)\r\n 1118 return outputs\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py:1015, in Pipeline.forward(self, model_inputs, **forward_params)\r\n 1013 with inference_context():\r\n 1014 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)\r\n-> 1015 model_outputs = self._forward(model_inputs, **forward_params)\r\n 1016 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device(\"cpu\"))\r\n 1017 else:\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/pipelines/text_generation.py:251, in TextGenerationPipeline._forward(self, model_inputs, **generate_kwargs)\r\n 249 prompt_text = model_inputs.pop(\"prompt_text\")\r\n 250 # BS x SL\r\n--> 251 generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)\r\n 252 out_b = generated_sequence.shape[0]\r\n 253 if self.framework == \"pt\":\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:1437, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, streamer, **kwargs)\r\n 1431 raise ValueError(\r\n 1432 f\"num_return_sequences has to be 1, but is {generation_config.num_return_sequences} when doing\"\r\n 1433 \" greedy search.\"\r\n 1434 )\r\n 1436 # 11. 
run greedy search\r\n-> 1437 return self.greedy_search(\r\n 1438 input_ids,\r\n 1439 logits_processor=logits_processor,\r\n 1440 stopping_criteria=stopping_criteria,\r\n 1441 pad_token_id=generation_config.pad_token_id,\r\n 1442 eos_token_id=generation_config.eos_token_id,\r\n 1443 output_scores=generation_config.output_scores,\r\n 1444 return_dict_in_generate=generation_config.return_dict_in_generate,\r\n 1445 synced_gpus=synced_gpus,\r\n 1446 streamer=streamer,\r\n 1447 **model_kwargs,\r\n 1448 )\r\n 1450 elif is_contrastive_search_gen_mode:\r\n 1451 if generation_config.num_return_sequences > 1:\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py:2248, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)\r\n 2245 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)\r\n 2247 # forward pass to get next token\r\n-> 2248 outputs = self(\r\n 2249 **model_inputs,\r\n 2250 return_dict=True,\r\n 2251 output_attentions=output_attentions,\r\n 2252 output_hidden_states=output_hidden_states,\r\n 2253 )\r\n 2255 if synced_gpus and this_peer_finished:\r\n 2256 continue # don't waste resources running the code we don't need\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 
full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:687, in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 684 return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n 686 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)\r\n--> 687 outputs = self.model(\r\n 688 input_ids=input_ids,\r\n 689 attention_mask=attention_mask,\r\n 690 position_ids=position_ids,\r\n 691 past_key_values=past_key_values,\r\n 692 inputs_embeds=inputs_embeds,\r\n 693 use_cache=use_cache,\r\n 694 output_attentions=output_attentions,\r\n 695 output_hidden_states=output_hidden_states,\r\n 696 return_dict=return_dict,\r\n 697 )\r\n 699 hidden_states = outputs[0]\r\n 700 logits = self.lm_head(hidden_states)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile 
/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:577, in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 569 layer_outputs = torch.utils.checkpoint.checkpoint(\r\n 570 create_custom_forward(decoder_layer),\r\n 571 hidden_states,\r\n (...)\r\n 574 None,\r\n 575 )\r\n 576 else:\r\n--> 577 layer_outputs = decoder_layer(\r\n 578 hidden_states,\r\n 579 attention_mask=attention_mask,\r\n 580 position_ids=position_ids,\r\n 581 past_key_value=past_key_value,\r\n 582 output_attentions=output_attentions,\r\n 583 use_cache=use_cache,\r\n 584 )\r\n 586 hidden_states = layer_outputs[0]\r\n 588 if use_cache:\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:292, in LlamaDecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache)\r\n 289 hidden_states = 
self.input_layernorm(hidden_states)\r\n 291 # Self Attention\r\n--> 292 hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n 293 hidden_states=hidden_states,\r\n 294 attention_mask=attention_mask,\r\n 295 position_ids=position_ids,\r\n 296 past_key_value=past_key_value,\r\n 297 output_attentions=output_attentions,\r\n 298 use_cache=use_cache,\r\n 299 )\r\n 300 hidden_states = residual + hidden_states\r\n 302 # Fully Connected\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py:243, in LlamaAttention.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache)\r\n 240 attn_output = attn_output.transpose(1, 2)\r\n 241 attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)\r\n--> 243 attn_output = self.o_proj(attn_output)\r\n 245 if not output_attentions:\r\n 246 attn_weights = None\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, 
**kwargs)\r\n 1496 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1497 # this function, and just call forward.\r\n 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1499 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1500 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1501 return forward_call(*args, **kwargs)\r\n 1502 # Do not call functions when jit is used\r\n 1503 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(*args, **kwargs)\r\n 163 output = old_forward(*args, **kwargs)\r\n 164 else:\r\n--> 165 output = old_forward(*args, **kwargs)\r\n 166 return module._hf_hook.post_forward(module, output)\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)\r\n 113 def forward(self, input: Tensor) -> Tensor:\r\n--> 114 return F.linear(input, self.weight, self.bias)\r\n\r\nRuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasGemmEx( handle, opa, opb, m, n, k, &falpha, a, CUDA_R_16F, lda, b, CUDA_R_16F, ldb, &fbeta, c, CUDA_R_16F, ldc, CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP)`\r\n\r\n```\r\n\r\n", "Interestingly, I'm not getting this error on my home machine. I'm using the same GPUs and the same docker image, so the versions are exactly the same - except the nvidia driver is 525.89.02 instead of 525.78.01.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I encountered the same error, with CUDA Version: 11.7 and Driver Version: 515.86.01 ", "> \r\n\r\nIn \"config.json\" change \"pad_token_id=-1\" to \"pad_token_id=2\". This happens because during batch generation, the model sometimes generates pad_token_id=-1", "> I encountered the same error, with CUDA Version: 11.7 and Driver Version: 515.86.01\r\n\r\nhow to solve?\r\n", "> pad_token_id\r\n\r\nThanks! This solves my problem.", "same problem", "same error when I load the model on multiple GPUs, e.g. 4, set by CUDA_VISIBLE_DEVICES=0,1,2,3, but when I load the model on only 1 GPU it can generate results successfully. my code:\r\n```python\r\ntokenizer = LlamaTokenizer.from_pretrained(hf_model_path)\r\nmodel = LlamaForCausalLM.from_pretrained(\r\nhf_model_path,\r\ntorch_dtype=torch.float16,\r\nlow_cpu_mem_usage=True,\r\ndevice_map=\"auto\",\r\nload_in_8bit=True\r\n)\r\ngeneration_output = model.generate(**inputs,\r\n return_dict_in_generate=True,\r\n output_scores=True,\r\n #max_length=512,\r\n max_new_tokens=512,\r\n do_sample=False,\r\n early_stopping=True,\r\n #top_p = 0.6,\r\n num_beams=3,\r\n #eos_token_id=tokenizer.eos_token_id,\r\n num_return_sequences = 1)\r\n\r\n sentence = tokenizer.decode(generation_output.sequences[0])\r\n```\r\nHow can this be explained? \r\ntransformers version: 4.30.2\r\naccelerate version: 0.20.3", "> same error when I load the model on multiple GPUs \r\n\r\nI'm experiencing the same issue with two GPUs. 
When I replace `device_map=\"auto\"` with `device_map={\"\":\"cuda:0\"}` the model generates as expected.\r\nI'm using two A6000s.\r\nCUDA Version: 12.2 \r\nCUDA Driver: 535.54.03\r\ntransformers version: 4.28.1\r\naccelerate version: 0.20.3\r\nPython: 3.8.10\r\ntorch: 2.0.1\r\n", "same problem when running with multiple GPUs", "same problem here", "Please stop commenting with \"same problem\" without providing a reproducer. We can't do anything about a bug we can't reproduce.", "@sgugger sorry, here's my environment:\r\nTwo A6000s.\r\nCUDA Version: 11.7\r\ntransformers version: 4.32.0.dev\r\naccelerate version: 0.21.0\r\nPython: 3.9.16\r\ntorch: 2.0.1", "> > \r\n> \r\n> In \"config.json\" change \"pad_token_id=-1\" to \"pad_token_id=2\". This happens because during batch generation, the model sometimes generates pad_token_id=-1\r\n\r\nWhy set pad_token_id to 2 instead of 0? Does this (setting pad_token_id to 2) have any impact on the model performance?", "I could reproduce the error with the following env.\r\n\r\n2x A100 (80 GB each)\r\n\r\npython 3.10.6\r\ntorch==2.0.0\r\ntransformers[torch]==4.31.0\r\naccelerate==0.21.0\r\nDriver Version: 535.54.03\r\n\r\ntorch.version.cuda -> 11.8\r\n\r\nModel \"meta-llama/Llama-2-70b-chat-hf\"\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n model = AutoModelForCausalLM.from_pretrained(\r\n model_name,\r\n torch_dtype=torch.bfloat16,\r\n trust_remote_code=True,\r\n load_in_8bit=False,\r\n device_map=\"auto\",\r\n )\r\n pipeline_ = pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n )\r\n```\r\n\r\nWorked fine on 1x A100 with the 8-bit option set to true. 
I just wanted to run some tests in 16-bit mode.", "I could reproduce the error with the following env.\r\n\r\n\r\n2x A6000 (48 GB each)\r\n\r\npython 3.8\r\ntorch==1.13.1\r\ntransformers[torch]==4.31.0\r\naccelerate==0.21.0\r\nDriver Version: 535.54.03\r\n\r\ntorch.version.cuda -> 11.7\r\n\r\nModel \"meta-llama/Llama-2-7b-chat-hf\"\r\n\r\nFollowing is part of my code for loading the model:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nkwargs = {\"device_map\":\"balanced_low_0\",\"torch_dtype\":torch.float16}\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, **kwargs)\r\n```\r\nFollowing is part of my code for processing the data:\r\n```python\r\n# make llama 2 chat prompt\r\ninput_sentences = [\r\n \"DeepSpeed is a machine learning framework\",\r\n \"He is working on\",\r\n \"He has a\",\r\n \"He got all\",\r\n \"Everyone is happy and I can\",\r\n \"The new movie that got Oscar this year\",\r\n \"In the far far distance from our galaxy,\",\r\n \"Peace is the only way\",\r\n]\r\ninputs = input_sentences[: args.batch_size]\r\nsystem_message = \"You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. 
If you don't know the answer to a question, please don't share false information.\"\r\n\r\nprompt_template = f'''[INST] <<SYS>>\r\n{system_message}\r\n<</SYS>>\r\n\r\n{{prompt}} [/INST]\r\n'''\r\nprocessed_prompts = [prompt_template.format(prompt=p) for p in inputs]\r\n```\r\nFollowing is part of my code for generation:\r\n```python\r\ndef generate():\r\n    \"\"\"returns a list of zipped inputs, outputs and number of new tokens\"\"\"\r\n    input_tokens = tokenizer.batch_encode_plus(\r\n        processed_prompts, return_tensors=\"pt\", padding=False)\r\n    for t in input_tokens:\r\n        if torch.is_tensor(input_tokens[t]):\r\n            input_tokens[t] = input_tokens[t].to(\"cuda:0\")\r\n\r\n    outputs = model.generate(**input_tokens, **generate_kwargs)\r\n    input_tokens_lengths = [x.shape[0] for x in input_tokens.input_ids]\r\n    output_tokens_lengths = [x.shape[0] for x in outputs]\r\n\r\n    total_new_tokens = [o - i for i, o in zip(input_tokens_lengths, output_tokens_lengths)]\r\n    outputs = tokenizer.batch_decode(\r\n        outputs, skip_special_tokens=True, spaces_between_special_tokens=False)\r\n\r\n    return zip(inputs, outputs, total_new_tokens)\r\n```\r\n\r\nThis error occurs only when I am running on multiple GPUs, and when I debug it turns out that the model inference results are obviously wrong, like this:\r\n<img width=\"1096\" alt=\"Screenshot 2023-08-10 11 21 32\" src=\"https://github.com/huggingface/transformers/assets/71417331/50d60be4-c34b-4d2a-a939-3dd542d6f067\">\r\nAfter checking my model weights, I'm sure they have no problem. I also tried to convert Llama-2-7b-chat to Llama-2-7b-chat-hf myself, but the error still occurs. Hope someone can help me solve this problem.\r\n", "@thusithaC Does it only happen for the 70b model or does it also happen with the 7b model? cc @SunMarc \r\n\r\n@shl518 This is not a reproducer, it lacks how the model is created or what prompts you pass it.", "> @thusithaC Does it only happen for the 70b model or does it also happen with the 7b model? 
cc @SunMarc\r\n> \r\n> @shl518 This is not a reproducer, it lacks how the model is created or what prompts you pass it.\r\n\r\nSorry, I have updated my issue.\r\n", "hi @sgugger I could only reproduce it for the 70B model. The trigger condition seemed to be the model being split across multiple GPUs, which was difficult to trigger with the 13B/7B models. ", "I had a similar error.\r\nIn my case, the cause was slow communication speed between GPUs (I checked p2pBandwidthLatencyTest).\r\nI solved it by changing the iommu setting as in [this](https://github.com/pytorch/pytorch/issues/1637#issuecomment-338268158).\r\n\r\n**reference**\r\n- https://stackoverflow.com/questions/59690008/multi-gpu-peer-to-peer-slow-between-particular-pairs", "This is happening for me too:\r\n\r\nI'm running on 2x 24G A5000 GPUs:\r\n<img width=\"484\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/131666247/58afac45-15f7-4bc1-980f-94804c3cbeb4\">\r\n\r\n\r\nThis is my code for loading the model:\r\n```python\r\n    def __init__(self):\r\n        model_name = \"meta-llama/Llama-2-70b-chat-hf\"\r\n        tokenizer = AutoTokenizer.from_pretrained(model_name)\r\n        tokenizer.pad_token_id = tokenizer.eos_token_id\r\n        tokenizer.pad_token = \"[PAD]\"\r\n        tokenizer.padding_side = \"left\"\r\n\r\n        bnb_config = BitsAndBytesConfig(\r\n            load_in_4bit=True,\r\n            bnb_4bit_quant_type=\"nf4\",\r\n            bnb_4bit_compute_dtype=torch.float16,\r\n            bnb_4bit_use_double_quant=True\r\n        )\r\n        model = AutoModelForCausalLM.from_pretrained(\r\n            model_name,\r\n            quantization_config=bnb_config,\r\n            device_map=\"auto\",\r\n            trust_remote_code=True,\r\n        )\r\n\r\n        pipe = pipeline(\r\n            \"text-generation\",\r\n            model=model,\r\n            tokenizer=tokenizer,\r\n            trust_remote_code=True,\r\n            device_map=\"auto\"\r\n        )\r\n\r\n        self.tokenizer = tokenizer\r\n        self.model = model\r\n        self.pipe = pipe\r\n```\r\n\r\nWhen I try prompting it, it crashes with this error:\r\n\r\n```txt\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCUDA kernel errors might 
be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\n<details> \r\n <summary> Full trace: </summary>\r\n\r\n```txt\r\nTraceback (most recent call last):\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/runpy.py\", line 88, in _run_code\r\n exec(code, run_globals)\r\n File \"/root/.vscode-server/extensions/ms-python.python-2023.16.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/root/.vscode-server/extensions/ms-python.python-2023.16.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/root/.vscode-server/extensions/ms-python.python-2023.16.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/root/.vscode-server/extensions/ms-python.python-2023.16.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.vscode-server/extensions/ms-python.python-2023.16.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/root/.vscode-server/extensions/ms-python.python-2023.16.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, 
run_globals)\r\n File \"/storage/kerenganon/floor_plans/dataset_creation/llms/extract_metadata_info.py\", line 232, in <module>\r\n main()\r\n File \"/storage/kerenganon/floor_plans/dataset_creation/llms/extract_metadata_info.py\", line 202, in main\r\n add_column_to_df(\r\n File \"/storage/kerenganon/floor_plans/dataset_creation/llms/extract_metadata_info.py\", line 164, in add_column_to_df\r\n df[col_name] = iterable_df.progress_apply(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/tqdm/std.py\", line 920, in inner\r\n return getattr(df, df_function)(wrapper, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/pandas/core/frame.py\", line 9568, in apply\r\n return op.apply().__finalize__(self, method=\"apply\")\r\n ^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/pandas/core/apply.py\", line 764, in apply\r\n return self.apply_standard()\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/pandas/core/apply.py\", line 891, in apply_standard\r\n results, res_index = self.apply_series_generator()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/pandas/core/apply.py\", line 907, in apply_series_generator\r\n results[i] = self.f(v)\r\n ^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/tqdm/std.py\", line 915, in wrapper\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/storage/kerenganon/floor_plans/dataset_creation/llms/extract_metadata_info.py\", line 165, in <lambda>\r\n lambda row: col_func(llm, prompt, row),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/storage/kerenganon/floor_plans/dataset_creation/llms/extract_metadata_info.py\", line 139, in get_legend_existence_answer\r\n ans = get_single_token_answer(llm, prompt)\r\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/storage/kerenganon/floor_plans/dataset_creation/llms/extract_metadata_info.py\", line 93, in get_single_token_answer\r\n ans_raw = llm.run_prompt(prompt, max_new_tokens=1)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/storage/kerenganon/floor_plans/dataset_creation/llms/llm.py\", line 46, in run_prompt\r\n out = self.pipe(p, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/text_generation.py\", line 205, in __call__\r\n return super().__call__(text_inputs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/base.py\", line 1140, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/base.py\", line 1147, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/base.py\", line 1046, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/text_generation.py\", line 268, in _forward\r\n generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n 
File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/generation/utils.py\", line 1648, in generate\r\n return self.sample(\r\n ^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/generation/utils.py\", line 2730, in sample\r\n outputs = self(\r\n ^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 820, in forward\r\n outputs = self.model(\r\n ^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 708, in forward\r\n layer_outputs = decoder_layer(\r\n ^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 424, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n ^^^^^^^^^^^^^^^\r\n File 
\"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 333, in forward\r\n query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 187, in apply_rotary_pos_emb\r\n k_embed = (k * cos) + (rotate_half(k) * sin)\r\n ~~~~~~~~~~~~~~~^~~~~\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n</details>\r\n\r\npipe.model.hf_device_map returns:\r\n```txt\r\n{'model.embed_tokens': 0, 'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4': 0, \r\n'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 0, 'model.layers.9': 0, 'model.layers.10': 0, \r\n'model.layers.11': 0, 'model.layers.12': 0, 'model.layers.13': 0, 'model.layers.14': 0, 'model.layers.15': 0, 'model.layers.16': 0, \r\n'model.layers.17': 0, 'model.layers.18': 0, 'model.layers.19': 0, 'model.layers.20': 0, 'model.layers.21': 0, 'model.layers.22': 0, \r\n'model.layers.23': 0, 'model.layers.24': 0, 'model.layers.25': 0, 'model.layers.26': 0, 'model.layers.27': 0, 'model.layers.28': 0, 
\r\n'model.layers.29': 0, 'model.layers.30': 0, 'model.layers.31': 0, 'model.layers.32': 0, 'model.layers.33': 0, 'model.layers.34': 0, \r\n'model.layers.35': 0, 'model.layers.36': 1, 'model.layers.37': 1, 'model.layers.38': 1, 'model.layers.39': 1, 'model.layers.40': 1, \r\n'model.layers.41': 1, 'model.layers.42': 1, 'model.layers.43': 1, 'model.layers.44': 1, 'model.layers.45': 1, 'model.layers.46': 1, \r\n'model.layers.47': 1, 'model.layers.48': 1, 'model.layers.49': 1, 'model.layers.50': 1, 'model.layers.51': 1, 'model.layers.52': 1, \r\n'model.layers.53': 1, 'model.layers.54': 1, 'model.layers.55': 1, 'model.layers.56': 1, 'model.layers.57': 1, 'model.layers.58': 1, \r\n'model.layers.59': 1, 'model.layers.60': 1, 'model.layers.61': 1, 'model.layers.62': 1, 'model.layers.63': 1, 'model.layers.64': 1, \r\n'model.layers.65': 1, 'model.layers.66': 1, 'model.layers.67': 1, 'model.layers.68': 1, 'model.layers.69': 1, 'model.layers.70': 1, \r\n'model.layers.71': 1, 'model.layers.72': 1, 'model.layers.73': 1, 'model.layers.74': 1, 'model.layers.75': 1, 'model.layers.76': 1, \r\n'model.layers.77': 1, 'model.layers.78': 1, 'model.layers.79': 1, 'model.norm': 1, 'lm_head': 1}\r\n```\r\n\r\n\r\nI've tried running the 13B model on 2 GPUs and got a slightly different behaviour.\r\n\r\nI was able to prompt it for a few times and it returned random words, and then crashed with the same error, but from a different line:\r\n```\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 346, in forward\r\n attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)\r\n```\r\n<details> \r\n <summary> Full trace: </summary>\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/storage/kerenganon/floor_plans/dataset_creation/llms/llm.py\", line 46, in run_prompt\r\n out = self.pipe(p, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/text_generation.py\", line 205, in __call__\r\n return super().__call__(text_inputs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/base.py\", line 1140, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/base.py\", line 1147, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/base.py\", line 1046, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/pipelines/text_generation.py\", line 268, in _forward\r\n generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/generation/utils.py\", line 1648, in generate\r\n return self.sample(\r\n ^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/generation/utils.py\", line 2730, in sample\r\n outputs = self(\r\n ^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/nn/modules/module.py\", 
line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 820, in forward\r\n outputs = self.model(\r\n ^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 708, in forward\r\n layer_outputs = decoder_layer(\r\n ^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 424, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n ^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = old_forward(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/root/miniconda3/envs/floorplans/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 346, in forward\r\n attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\n</details>\r\n\r\nI've seen this error too:\r\n```\r\nError device-side assert triggered at line 88 in file /mmfs1/gscratch/zlab/timdettmers/git/bitsandbytes/csrc/ops.cu\r\n```\r\n\r\nI've managed to run the 13B model on 1 GPU with this code successfully, so I assume there is an issue with the GPU communication?\r\nAny help would be appreciated🙏\r\n\r\n\r\n" ]
1,680
1,706
1,683
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-3.10.0-1160.81.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.2 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger @MKhalusova @ArthurZucker @younesbelkada I am experiencing an assertion error in ScatterGatherKernel.cu when using LlamaTokenizer and multi-GPU inference with any variant of Llama model. The error occurs during the model.generate() call. ``` import os # os.environ['TRANSFORMERS_CACHE'] = '/tmp/cache/' # os.environ['NCCL_P2P_DISABLE'] = '1' from transformers import AutoModelForCausalLM,AutoConfig,LlamaTokenizer from accelerate import init_empty_weights, infer_auto_device_map import torch def get_device_map(model_path): with init_empty_weights(): config = AutoConfig.from_pretrained(model_path) model = AutoModelForCausalLM.from_config(config) d = {0: "18GiB"} for i in range(1, 5): d[i] = "26GiB" device_map = infer_auto_device_map( model, max_memory=d,dtype=torch.float16, no_split_module_classes=["BloomBlock", "OPTDecoderLayer", "LLaMADecoderLayer", "LlamaDecoderLayer"] ) print(device_map) del model return device_map tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf") model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-13b-hf", torch_dtype=torch.float16, device_map=get_device_map("decapoda-research/llama-13b-hf")) generate_kwargs = { "max_new_tokens": 200, "min_new_tokens": 100, "temperature": 0.1, "do_sample": False, # The three options below used together leads to contrastive search "top_k": 4, "penalty_alpha": 0.6, } prompt = "Puma is a " with torch.no_grad(): input_ids = tokenizer(prompt, 
return_tensors="pt").input_ids assert len(input_ids) == 1, len(input_ids) if input_ids[0][-1] == 2: # 2 is EOS, hack to remove. If the prompt is ending with EOS, often the generation will stop abruptly. input_ids = input_ids[:, :-1] input_ids = input_ids.to(0) #input_ids = tokenizer(prompt, padding=True, truncation=True, return_tensors="pt").input_ids.to(0) generated_ids = model.generate( input_ids, #stopping_criteria=stopping_criteria, **generate_kwargs ) result = tokenizer.batch_decode(generated_ids.cpu(), skip_special_tokens=True) print(result) ``` The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'LLaMATokenizer'. The class this function is called from is 'LlamaTokenizer'. normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization. {'model.embed_tokens': 0, 'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4': 0, 'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 0, 'model.layers.9': 0, 'model.layers.10': 0, 'model.layers.11': 0, 'model.layers.12': 0, 'model.layers.13': 0, 'model.layers.14': 0, 'model.layers.15': 0, 'model.layers.16': 0, 'model.layers.17': 0, 'model.layers.18': 0, 'model.layers.19': 0, 'model.layers.20': 0, 'model.layers.21': 0, 'model.layers.22': 0, 'model.layers.23': 0, 'model.layers.24': 0, 'model.layers.25': 0, 'model.layers.26': 0, 'model.layers.27': 0, 'model.layers.28': 0, 'model.layers.29': 1, 'model.layers.30': 1, 'model.layers.31': 1, 'model.layers.32': 1, 'model.layers.33': 1, 'model.layers.34': 1, 'model.layers.35': 1, 'model.layers.36': 1, 'model.layers.37': 1, 'model.layers.38': 1, 'model.layers.39': 1, 'model.layers.40': 1, 'model.layers.41': 1, 'model.layers.42': 1, 'model.layers.43': 1, 'model.layers.44': 1, 'model.layers.45': 1, 'model.layers.46': 1, 
'model.layers.47': 1, 'model.layers.48': 1, 'model.layers.49': 1, 'model.layers.50': 1, 'model.layers.51': 1, 'model.layers.52': 1, 'model.layers.53': 1, 'model.layers.54': 1, 'model.layers.55': 1, 'model.layers.56': 1, 'model.layers.57': 1, 'model.layers.58': 1, 'model.layers.59': 2, 'model.norm': 2, 'lm_head': 2} Loading checkpoint shards: 100%|██████████████████████████| 61/61 [00:25<00:00, 2.43it/s] /home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/transformers/generation/utils.py:1219: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation) warnings.warn( ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [64,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [65,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [66,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [67,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [68,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [69,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. 
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [70,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [71,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [72,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [73,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [74,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [75,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [76,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [77,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [78,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [79,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. 
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [80,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
[... the same assertion repeated for every thread in blocks [0,0,0] and [1,0,0] ...]
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [44,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [45,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [46,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [47,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [48,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [49,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [50,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [51,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [52,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [53,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. 
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [54,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [55,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [56,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [57,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [58,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [59,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [60,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [61,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [62,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. ../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [1,0,0], thread: [63,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed. 
Traceback (most recent call last): File "/home/u30/terrycruz/chatPaper.py", line 48, in <module> generated_ids = model.generate( ^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/transformers/generation/utils.py", line 1457, in generate return self.contrastive_search( ^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/transformers/generation/utils.py", line 1871, in contrastive_search outputs = self( ^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 687, in forward outputs = self.model( ^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 577, in forward layer_outputs = decoder_layer( ^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/torch/nn/modules/module.py", 
line 1501, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 292, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( ^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/u30/terrycruz/anaconda3/envs/multiple_gpu/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 241, in forward attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. run my script: `CUDA_LAUNCH_BLOCKING=1 python script.py` ### Expected behavior The puma bla bla bla.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22546/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/22546/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22545
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22545/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22545/comments
https://api.github.com/repos/huggingface/transformers/issues/22545/events
https://github.com/huggingface/transformers/issues/22545
1,652,655,004
I_kwDOCUB6oc5igYOc
22,545
Model.eval() always returns the same logits for SequenceClassification models with binary labels
{ "login": "Mogady", "id": 20929301, "node_id": "MDQ6VXNlcjIwOTI5MzAx", "avatar_url": "https://avatars.githubusercontent.com/u/20929301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mogady", "html_url": "https://github.com/Mogady", "followers_url": "https://api.github.com/users/Mogady/followers", "following_url": "https://api.github.com/users/Mogady/following{/other_user}", "gists_url": "https://api.github.com/users/Mogady/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mogady/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mogady/subscriptions", "organizations_url": "https://api.github.com/users/Mogady/orgs", "repos_url": "https://api.github.com/users/Mogady/repos", "events_url": "https://api.github.com/users/Mogady/events{/privacy}", "received_events_url": "https://api.github.com/users/Mogady/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That's because your model training diverged. It has nothing to do with the Transformers library and is probably due to your very high learning rate. You should go on the [forums](https://discuss.huggingface.co/) if you need help debugging your trainings.", "Yes thanks this was the reason" ]
1,680
1,680
1,680
NONE
null
### System Info On GoogleColab - `transformers` version: 4.27.4 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (gpu) - Jax version: 0.4.7 - JaxLib version: 0.4.7 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: not ### Who can help? @younesbelkada @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from sklearn.model_selection import train_test_split import numpy as np import torch from torch.utils.data import Dataset, DataLoader from evaluate import load from tqdm import tqdm from transformers import AutoTokenizer, AutoModelForSequenceClassification from torch import cuda from datasets import load_dataset from transformers import TrainingArguments, Trainer device = 'cuda' if cuda.is_available() else 'cpu' tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2) dataset = load_dataset("ethos", "binary") def tokenize_function(examples): return tokenizer(examples["text"], max_length = 512, padding="max_length", truncation=True) tokenized_datasets = dataset.map(tokenize_function, batched=True) small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(0, 898)) small_eval_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(898, 998)) load_accuracy = load("accuracy") load_f1 = load("f1") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) accuracy = load_accuracy.compute(predictions=predictions, references=labels)["accuracy"] f1 = 
load_f1.compute(predictions=predictions, references=labels)["f1"] return {"accuracy": accuracy, "f1": f1} training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch", learning_rate = 1e-03, per_device_train_batch_size =16, per_device_eval_batch_size=4, num_train_epochs=1) trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, ) trainer.train() ### Expected behavior ```trainer.predict(small_eval_dataset)``` should return different logits but it returns the same for all the test examples ``` PredictionOutput(predictions=array([[ 0.30563816, -0.10853065], [ 0.30566448, -0.10852079], [ 0.3056038 , -0.10854296], [ 0.30563852, -0.10852969], [ 0.30562696, -0.10853519], [ 0.3057046 , -0.10850368], [ 0.3056232 , -0.10853792], [ 0.3056584 , -0.10852299], [ 0.30566052, -0.10852136], [ 0.30566704, -0.10851857], [ 0.30566064, -0.10852212], [ 0.30565894, -0.10852377], [ 0.30565098, -0.10852514], [ 0.30566713, -0.10852013], .... ```` ```python inputs = torch.tensor(small_eval_dataset['input_ids']).to(device) mask = torch.tensor(small_eval_dataset['attention_mask']).to(device) model.train() model(inputs[0:10], mask[0:10]) ``` ``` SequenceClassifierOutput(loss=None, logits=tensor([[ 0.2427, -0.2602], [ 0.2804, -0.2819], [ 0.0620, -0.1497], [ 0.6520, -0.3421], [ 0.5095, -0.1113], [ 0.3538, 0.0181], [ 0.2826, 0.1292], [ 0.4033, 0.0041], [ 0.4308, -0.1813], [ 0.3979, -0.2117]], device='cuda:0', grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None) ``` ```python model.eval() model(inputs[0:10], mask[0:10]) ``` ``` SequenceClassifierOutput(loss=None, logits=tensor([[ 0.3056, -0.1085], [ 0.3057, -0.1085], [ 0.3056, -0.1085], [ 0.3056, -0.1085], [ 0.3056, -0.1085], [ 0.3057, -0.1085], [ 0.3056, -0.1085], [ 0.3057, -0.1085], [ 0.3057, -0.1085], [ 0.3057, -0.1085]], device='cuda:0', grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22545/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22544
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22544/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22544/comments
https://api.github.com/repos/huggingface/transformers/issues/22544/events
https://github.com/huggingface/transformers/pull/22544
1,652,594,033
PR_kwDOCUB6oc5NhI65
22,544
Generate: Add text streamer decoding options
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
MEMBER
null
# What does this PR do? In advance of communicating the iterator streamer with Gradio demos, adds two important options: 1. option to skip the prompt in the streamer (e.g. for chatbots) 2. option to receive `decode()` kwargs (e.g. to skip special tokens) It also makes use of the changes in #22516 to make the iterator streamer much more compact -- it is now a child class of the stdout streamer, with a few modifications.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22544/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22544", "html_url": "https://github.com/huggingface/transformers/pull/22544", "diff_url": "https://github.com/huggingface/transformers/pull/22544.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22544.patch", "merged_at": 1680595394000 }
https://api.github.com/repos/huggingface/transformers/issues/22543
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22543/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22543/comments
https://api.github.com/repos/huggingface/transformers/issues/22543/events
https://github.com/huggingface/transformers/pull/22543
1,652,556,606
PR_kwDOCUB6oc5NhAyQ
22,543
Update test_image_processing_pix2struct.py
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? This PR should fix the failing test on `main`. The fix is to replace the previous image with the one I have uploaded on the Hub: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/australia.jpg cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22543/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22543", "html_url": "https://github.com/huggingface/transformers/pull/22543", "diff_url": "https://github.com/huggingface/transformers/pull/22543.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22543.patch", "merged_at": 1680549996000 }
https://api.github.com/repos/huggingface/transformers/issues/22542
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22542/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22542/comments
https://api.github.com/repos/huggingface/transformers/issues/22542/events
https://github.com/huggingface/transformers/pull/22542
1,652,490,863
PR_kwDOCUB6oc5Ngyn8
22,542
Backbone add mixin tests
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? Adds a set of tests specifically for the Backbone class ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22542/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22542", "html_url": "https://github.com/huggingface/transformers/pull/22542", "diff_url": "https://github.com/huggingface/transformers/pull/22542.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22542.patch", "merged_at": 1680785416000 }
https://api.github.com/repos/huggingface/transformers/issues/22541
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22541/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22541/comments
https://api.github.com/repos/huggingface/transformers/issues/22541/events
https://github.com/huggingface/transformers/issues/22541
1,652,453,151
I_kwDOCUB6oc5ifm8f
22,541
Issue with gradient accumulation in CodeParrot example
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
MEMBER
null
There is a bug in the gradient accumulation that causes the [training script](https://github.com/huggingface/transformers/blob/main/examples/research_projects/codeparrot/scripts/codeparrot_training.py) to run slower than necessary. Currently we have the following: ```python for step, batch in enumerate(train_dataloader, start=1): if args.resume_from_checkpoint and step < resume_step: continue # we need to skip steps until we reach the resumed step loss = model(batch, labels=batch, use_cache=False).loss avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() loss_tracking += avg_loss.item() / args.gradient_accumulation_steps log_metrics(step, {"samples": step * samples_per_step, "loss_per_step/train": loss.item()}) loss = loss / args.gradient_accumulation_steps if step % args.gradient_accumulation_steps != 0: # Prevent backward from doing gradient all_reduce in every step if accelerator.distributed_type == DistributedType.MULTI_GPU: with model.no_sync(): accelerator.backward(loss) else: accelerator.backward(loss) else: lr = get_lr() accelerator.backward(loss) accelerator.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() lr_scheduler.step() optimizer.zero_grad() elapsed_time = time.time() - t_start tflops = compute_tflops(elapsed_time, accelerator, args) log_metrics( step, { "steps": completed_steps, "loss/train": loss_tracking, "lr": lr, "tflops": tflops, "time_per_iteration": elapsed_time, }, ) t_start = time.time() loss_tracking = 0 completed_steps += 1 ``` When it should be something along the lines of this: ```python for step, batch in enumerate(train_dataloader, start=1): with accelerator.accumulate(model): if args.resume_from_checkpoint and step < resume_step: continue # we need to skip steps until we reach the resumed step lr = get_lr() loss = model(batch, labels=batch, use_cache=False).loss avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() loss_tracking += avg_loss.item() / args.gradient_accumulation_steps 
log_metrics(step, {"samples": step * samples_per_step, "loss_per_step/train": loss.item()}) accelerator.clip_grad_norm_(model.parameters(), 1.0) accelerator.backward(loss) optimizer.step() lr_scheduler.step() optimizer.zero_grad() if accelerator.gradient_state.sync_gradients: elapsed_time = time.time() - t_start tflops = compute_tflops(elapsed_time, accelerator, args) log_metrics( step, { "steps": completed_steps, "loss/train": loss_tracking, "lr": lr, "tflops": tflops, "time_per_iteration": elapsed_time, }, ) t_start = time.time() loss_tracking = 0 completed_steps += 1 ``` We're not actually pausing the gradient accumulation. Here's an example: https://github.com/huggingface/accelerate/blob/92d072043eb24eddf714edd578bceff07a2d9470/examples/by_feature/gradient_accumulation.py#L171-L183 And here some explanation: https://huggingface.co/docs/accelerate/concept_guides/gradient_synchronization. This could speed-up training up to 2x as [reported](https://github.com/muellerzr/timing_experiments) by @muellerzr! Thanks for reporting! ![Screenshot 2023-04-03 at 19 08 31](https://user-images.githubusercontent.com/8264887/229579509-5780b821-3492-4380-98d5-1b0520ea3db0.png) cc @ArmelRandy @loubnabnl
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22541/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22541/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22540
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22540/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22540/comments
https://api.github.com/repos/huggingface/transformers/issues/22540/events
https://github.com/huggingface/transformers/pull/22540
1,652,353,984
PR_kwDOCUB6oc5NgVHZ
22,540
Fix inverted conditional in TF common test!
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "As expected this has raised a few bugs in the cross-test that were silent before - I'll see what I can do in this PR", "Most likely - I'll investigate them all soon!", "Quick summary of the fixes needed:\r\n\r\nESM: `TFEsmForTokenClassification` copied the computation from `TFBertForTokenClassification`, but this has some slightly odd BERT-specific behaviour and doesn't mask -100 in the same way as other models. Replaced it with the loss block from `TFRobertaForTokenClassification` and all tests pass.\r\n\r\nGPT2: For model classes that take rank-3 inputs (e.g. `MultipleChoice` or `DoubleHeads`), when `output_hidden_states=True` , inputs have their second two dims flattened internally in the main model stem. This means that the output `hidden_states` are rank 3 `(bsz, seq_len * num_choices, hidden_dim)` and not rank 4 `(bsz, num_choices, seq_len, hidden_dim)`. However, the PT model un-flattens the output for the final `hidden_states`, which means the last hidden state is rank-4, unlike the others which remain rank-3. In the old TF model, all hidden states are rank-3. I modified the TF code to un-flatten the last hidden state in the same way.\r\n\r\nHUBERT: Loss computation especially for CTC overflows a lot with the default labels, which creates lots of `inf` values and makes it very hard to compare TF and PT losses. I skipped PT-TF equivalence testing for the losses, but keep it for all non-loss outputs.\r\n\r\nWav2Vec2: Same as HUBERT\r\n\r\nXGLM: The PT XGLM model does a weird thing where it shifts labels by 1 and then adds `pad_token_id` as the final label to all samples. I'm not sure this is correct, but I modified the TF code to do the same. 
It's possible the TF code is the right one here though, in which case we should revert it and change the PT code instead.", "@gante I fixed all the bugs that this surfaced, explained above ^\r\n\r\ncc @sgugger for final review too", "Thank you for the fix @Rocketknight1 ❤️ . And I apologize for the mistake I introduced ..." ]
1,680
1,680
1,680
MEMBER
null
Noticed a rather alarming conditional being backwards in the `test_pt_tf_model_equivalence` common test. This probably resulted in a lot of tests being skipped!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22540/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22540", "html_url": "https://github.com/huggingface/transformers/pull/22540", "diff_url": "https://github.com/huggingface/transformers/pull/22540.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22540.patch", "merged_at": 1680641994000 }
https://api.github.com/repos/huggingface/transformers/issues/22539
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22539/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22539/comments
https://api.github.com/repos/huggingface/transformers/issues/22539/events
https://github.com/huggingface/transformers/pull/22539
1,652,336,185
PR_kwDOCUB6oc5NgRTN
22,539
[setup] migrate setup script to `pyproject.toml`
{ "login": "XuehaiPan", "id": 16078332, "node_id": "MDQ6VXNlcjE2MDc4MzMy", "avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuehaiPan", "html_url": "https://github.com/XuehaiPan", "followers_url": "https://api.github.com/users/XuehaiPan/followers", "following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}", "gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions", "organizations_url": "https://api.github.com/users/XuehaiPan/orgs", "repos_url": "https://api.github.com/users/XuehaiPan/repos", "events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}", "received_events_url": "https://api.github.com/users/XuehaiPan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Follows discussion in https://github.com/huggingface/transformers/pull/22531#issuecomment-1494493545 Changes: - migrate setup script to `pyproject.toml` - migrate `pytest` configs to `pyproject.toml` - cleanup `isort` and `flake8` configs ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22539/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22539", "html_url": "https://github.com/huggingface/transformers/pull/22539", "diff_url": "https://github.com/huggingface/transformers/pull/22539.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22539.patch", "merged_at": 1680545021000 }
https://api.github.com/repos/huggingface/transformers/issues/22538
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22538/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22538/comments
https://api.github.com/repos/huggingface/transformers/issues/22538/events
https://github.com/huggingface/transformers/issues/22538
1,652,335,807
I_kwDOCUB6oc5ifKS_
22,538
Cross Attention of MarianMT translation model, inconsistent with paper!
{ "login": "42694647426", "id": 44487593, "node_id": "MDQ6VXNlcjQ0NDg3NTkz", "avatar_url": "https://avatars.githubusercontent.com/u/44487593?v=4", "gravatar_id": "", "url": "https://api.github.com/users/42694647426", "html_url": "https://github.com/42694647426", "followers_url": "https://api.github.com/users/42694647426/followers", "following_url": "https://api.github.com/users/42694647426/following{/other_user}", "gists_url": "https://api.github.com/users/42694647426/gists{/gist_id}", "starred_url": "https://api.github.com/users/42694647426/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/42694647426/subscriptions", "organizations_url": "https://api.github.com/users/42694647426/orgs", "repos_url": "https://api.github.com/users/42694647426/repos", "events_url": "https://api.github.com/users/42694647426/events{/privacy}", "received_events_url": "https://api.github.com/users/42694647426/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @42694647426 👋 \r\n\r\nAs per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗 \r\n\r\nIn this case, I'd also advise attempting to reach out to the original authors of the paper you linked, as well as the creators of the Marian models in question (Helsinki-NLP)!", "> Hey @42694647426 👋\r\n> \r\n> As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗\r\n> \r\n> In this case, I'd also advise attempting to reach out to the original authors of the paper you linked, as well as the creators of the Marian models in question (Helsinki-NLP)!\r\n\r\nHi @gante, thank you for the quick reply! Since a few papers have proven that the last two encoder-decoder layers should give the best alignment (the second last layer is actually the best from the paper mentioned above) and it also makes sense that the last layer should have gained the most information to generate output. \r\n\r\nIs there any possibility that the cross_attention in the output sequence is ordered reversely(layer 0 is actually the last layer i.e. the layer closest to the output)? \r\n\r\nThank you for your help.", "\r\nI don't think the cross_attentions is outputted in reverse. 
Looking at line 1075 for [marian](https://github.com/huggingface/transformers/blob/main/src/transformers/models/marian/modeling_marian.py) we see that the cross attentions is added one layer at a time with the correct order:\r\n```\r\nfor idx, decoder_layer in enumerate(self.layers):\r\n ...\r\n if encoder_hidden_states is not None:\r\n all_cross_attentions += (layer_outputs[2],)\r\n```\r\n\r\nLooking at the generation utils with beam search, I don't see any rearranging of the cross attentions happening either.\r\n\r\nBut now that you mention it, it is kind of weird how the alignment in the early layers give the best results - but it's not a bug from the looks of it.\r\n\r\nIs it possible the model is still providing correct outputs because the positional info is being propagated to the successive layers? [P-Transformer](https://arxiv.org/pdf/2212.05830.pdf) claims this isn't generally the case, but that's document translation.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### System Info When I'm inspecting the cross-attention layers from the pretrained transformer translation model (MarianMT model), It is very strange that the cross attention from layer 0 and 1 provide best alignment between input and output. I used bertviz to visualize all heads from all 6 layers, and tried different language, english to german and english to chinese, it all gives the same results, which does not make sense because the last layers should be more accurate according to the paper _Jointly Learning to Align and Translate with Transformer Models_ [https://arxiv.org/pdf/1909.02074.pdf](url) ![image](https://user-images.githubusercontent.com/44487593/229558767-deeb4fe1-8e62-41aa-9116-cf4e55ccfac6.png) But when I'm looking at the cross attention of model _Helsinki-NLP/opus-mt-en-de_ and _Helsinki-NLP/opus-mt-en-zh_ , the layer 1 gives the best alignment. the code is below: ```python from transformers import AutoTokenizer, AutoModel import os os.environ['TRANSFORMERS_CACHE'] = '/data2/hanyings/.cache' tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") model = AutoModel.from_pretrained("Helsinki-NLP/opus-mt-en-de", output_attentions=True) encoder_input_ids = tokenizer("She sees the small elephant.", return_tensors="pt", add_special_tokens=True).input_ids with tokenizer.as_target_tokenizer(): decoder_input_ids = tokenizer("Sie sieht den kleinen Elefanten.", return_tensors="pt", add_special_tokens=True).input_ids outputs = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids) encoder_text = tokenizer.convert_ids_to_tokens(encoder_input_ids[0]) decoder_text = tokenizer.convert_ids_to_tokens(decoder_input_ids[0]) from bertviz import model_view model_view( encoder_attention=outputs.encoder_attentions, decoder_attention=outputs.decoder_attentions, cross_attention=outputs.cross_attentions, encoder_tokens= encoder_text, decoder_tokens = decoder_text ) ``` And the results are: 
![image](https://user-images.githubusercontent.com/44487593/229560299-f6792ad1-5984-4a29-80fb-79403855b43a.png) ![image](https://user-images.githubusercontent.com/44487593/229561124-f84d41d0-ceed-49ac-98b6-91ce47f14424.png) From the above pictures, I observed that the first 2 layers give the best alignment whereas the last layers do not align the input and output tokens properly. Can you please help me to explain why this happens? and If the alignment of the last layer is not accurate, how does the model provide correct predictions? @ArthurZucker @younesbelkada @gante Please! It is very important for my research project! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer, AutoModel import os os.environ['TRANSFORMERS_CACHE'] = '/data2/hanyings/.cache' tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") model = AutoModel.from_pretrained("Helsinki-NLP/opus-mt-en-de", output_attentions=True) encoder_input_ids = tokenizer("She sees the small elephant.", return_tensors="pt", add_special_tokens=True).input_ids with tokenizer.as_target_tokenizer(): decoder_input_ids = tokenizer("Sie sieht den kleinen Elefanten.", return_tensors="pt", add_special_tokens=True).input_ids outputs = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids) encoder_text = tokenizer.convert_ids_to_tokens(encoder_input_ids[0]) decoder_text = tokenizer.convert_ids_to_tokens(decoder_input_ids[0]) from bertviz import model_view model_view( encoder_attention=outputs.encoder_attentions, decoder_attention=outputs.decoder_attentions, cross_attention=outputs.cross_attentions, encoder_tokens= encoder_text, decoder_tokens = decoder_text ) ``` ### Expected behavior The bottom layers give better alignment 
(layer0 and 1)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22538/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22537
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22537/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22537/comments
https://api.github.com/repos/huggingface/transformers/issues/22537/events
https://github.com/huggingface/transformers/pull/22537
1,652,330,077
PR_kwDOCUB6oc5NgP_c
22,537
Remove hack for dynamic modules and use Python functions instead
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I confirm that this fixes a race condition discovered today with:\r\n\r\n```\r\nfrom transformers import GPT2Config, GPT2LMHeadModel, AutoModelForCausalLM, AutoConfig\r\nmodel_name = \"bigcode/santacoder\"\r\nconfig = AutoConfig.from_pretrained(model_name, trust_remote_code=True)\r\n```\r\n\r\nwhich fails randomly when multiple processes run it in parallel.\r\n\r\n```\r\n[default0]:Traceback (most recent call last):\r\n[default0]: File \"<string>\", line 1, in <module>\r\n[default0]:FileNotFoundError: [Errno 2] No such file or directory: '/fsx/m4/modules/transformers_modules/bigcode/santacoder/bb63c0e145ad465df0a97dec285a949c9042523c/configuration_gpt2_mq.py'\r\n```\r\n\r\nthe problem goes away with this PR.\r\n\r\nI suspect a race condition is:\r\n\r\nhttps://github.com/huggingface/transformers/blob/159ff3342c576ccf26cb00fb9510666ed626f42d/src/transformers/dynamic_module_utils.py#L173\r\n\r\nwhich copies from multiple files to the same single target destination. This fails on FSX distributed filesystem with more than 2 dist processes.\r\n\r\nPlease note that the exception is not trapped and comes from the sub-process.", "> It turns out there is a simple way to reset the cache which should be used when copying new modules, this PR uses that solution. I have run the scripts to reproduce the flakiness given in https://github.com/huggingface/transformers/pull/21646 and didn't get any issue with the changes in this PR.\r\n\r\nI did notice that the SHA of the cached entry kept on changing. Why would one need to reset the cache, other than in special cases? Isn't the whole point of a cache is not to do anything but to load the file immediately? \r\n\r\nThe problem we had was not on CI, so I'm not sure why the reset code was even run. Perhaps there is a need to check the reset isn't performed unless asked explicitly? 
Since it appears to be happening since we shouldn't have run into this issue in the first place if this code was meant to be run on reset only. Please correct me if I'm missing something.", "> Why would one need to reset the cache, other than in special cases?\r\n\r\nIf you do not reset the cache after adding a new module, the import system of Python will not find it (see the doc of [`importlib.invalidate_caches()`](https://docs.python.org/3/library/importlib.html#importlib.invalidate_caches).\r\n\r\nThe function is only called when such a new module is added (new init, or newly copied dynamic code file) as you can see in the PR. It will only happen repeatedly if you keep downloading new models with dynamic code (or make it appear as such by doing save_pretrained then from_pretrained from different temp folders).\r\n\r\nIn any case, this situation will become even more rare in the near future when we will stop moving around those files with the code in each repo but trust only one source of truth.", "But I'm not adding a new module, I'm rerunning the [same 1 line of code](https://github.com/huggingface/transformers/pull/22537#issuecomment-1495215758).\r\n\r\nIs there something special about `\"bigcode/santacoder\"` that it never gets cached?\r\n\r\nThere should be 2 different behaviors:\r\n1. first time - when it's downloaded \r\n2. 2nd and onward time when it's already cached.\r\n\r\nno?", "I am confused about what sha of the cached entry keeps changing. Could you elaborate? 
I added a print statement the five times `importlib.invalidate_caches()` is called after this PR and I can confirm it is never called once the model is cached when I run your sample above:\r\n```py\r\nfrom transformers import GPT2Config, GPT2LMHeadModel, AutoModelForCausalLM, AutoConfig\r\nmodel_name = \"bigcode/santacoder\"\r\nconfig = AutoConfig.from_pretrained(model_name, trust_remote_code=True)\r\n```", "I'm not exactly sure of the exact behavior, I was using several debug scripts and it seemed to be cycling between `bb63c0e145ad465df0a97dec285a949c9042523c` and `6a4fb77ff71c32c34dc8c61af500c7a7ca17c1a6`\r\n\r\nBut I wasn't talking about this, I was asking why was re-running this script with 4 processes:\r\n\r\n```\r\nfrom transformers import GPT2Config, GPT2LMHeadModel, AutoModelForCausalLM, AutoConfig\r\nmodel_name = \"bigcode/santacoder\"\r\nconfig = AutoConfig.from_pretrained(model_name, trust_remote_code=True)\r\n```\r\n\r\nkept re-rerunning the reset code - as it was failing most of the time in 1 or 2 out of 4 processes.\r\n\r\nMy thinking is that with the caching happened, even with the bug in resetting code, that resetting code shouldn't have been run.\r\n\r\ne.g. here I reverted to transformers before this PR's fix:\r\n\r\n1. run and make sure it's cached:\r\n\r\n```\r\n$ python test.py\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\n```\r\nit should be cached now for sure, right?\r\n\r\n2. 
now it should just read the cached module\r\n\r\n```\r\n$ python -m torch.distributed.run --nproc_per_node=4 --nnodes=1 --tee 3 test.py \r\n[...]\r\n[default0]:Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\n[default1]:Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\n[default3]:Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\n[default2]:Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\n[default1]:Traceback (most recent call last):\r\n[default1]: File \"<string>\", line 1, in <module>\r\n[default1]:FileNotFoundError: [Errno 2] No such file or directory: '/fsx/m4/modules/transformers_modules/bigcode/santacoder/6a4fb77ff71c32c34dc8c61af500c7a7ca17c1a6/configuration_gpt2_mq.py'\r\n[default0]:ModuleNotFoundError: No module named\r\n[default0]:'transformers_modules.bigcode.santacoder.6a4fb77ff71c32c34dc8c61af500c7a7ca17c1a6.configuration_gpt2_mq'\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2722338) of binary: /fsx/m4/conda/stas-m4/bin/python\r\n```\r\nso that was my concern.", "I am very confused, if you revert before this PR's fix, you will have the issue with the race condition. I don't get what the problem is after this PR?", "there is no problem after this PR. 
It's OK, Sylvain.", "If your question is why there was a change even with the file cached before this PR, it was because of a hack we implemented instead of using the proper way provided with `importlib.invalidate_cache()` (which we didn't know about), where the file with the code was deleted and recreated before each use. This is obviously bad for race conditions, hence the proper fix in this PR :-)", "Aha! Thank you for clarifying the cause, Sylvain.", "Thank you for removing the ugly hack I added 💯 " ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? #21646 introduced a new way of dealing with dynamic modules locally to avoid a flaky failure in the CI, using a bit of hack with subprocess python commands. This seems to cause problem with distributed runs using this code (#22506 and was also reported internally as failing with big code experiment in the cluster). It turns out there is a simple way to reset the cache which should be used when copying new modules, this PR uses that solution. I have run the scripts to reproduce the flakiness given in #21646 and didn't get any issue with the changes in this PR. Fixes #22506
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22537/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22537/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22537", "html_url": "https://github.com/huggingface/transformers/pull/22537", "diff_url": "https://github.com/huggingface/transformers/pull/22537.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22537.patch", "merged_at": 1680614413000 }
https://api.github.com/repos/huggingface/transformers/issues/22536
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22536/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22536/comments
https://api.github.com/repos/huggingface/transformers/issues/22536/events
https://github.com/huggingface/transformers/pull/22536
1,652,290,927
PR_kwDOCUB6oc5NgHiZ
22,536
Fix missing metrics with multiple eval datasets
{ "login": "hawkeoni", "id": 27156990, "node_id": "MDQ6VXNlcjI3MTU2OTkw", "avatar_url": "https://avatars.githubusercontent.com/u/27156990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hawkeoni", "html_url": "https://github.com/hawkeoni", "followers_url": "https://api.github.com/users/hawkeoni/followers", "following_url": "https://api.github.com/users/hawkeoni/following{/other_user}", "gists_url": "https://api.github.com/users/hawkeoni/gists{/gist_id}", "starred_url": "https://api.github.com/users/hawkeoni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hawkeoni/subscriptions", "organizations_url": "https://api.github.com/users/hawkeoni/orgs", "repos_url": "https://api.github.com/users/hawkeoni/repos", "events_url": "https://api.github.com/users/hawkeoni/events{/privacy}", "received_events_url": "https://api.github.com/users/hawkeoni/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "It seems like the tests failed on an irrelevant \"File not found error\" in tokenizers, but I can not rerun the tests. \r\n@sgugger would you kindly trigger them again?", ":call_me_hand: " ]
1,680
1,680
1,680
CONTRIBUTOR
null
Fixes #22530 [Issue](https://github.com/huggingface/transformers/issues/22530) **tl;dr** - `Trainer` only keeps the last metric when using multiple eval datasets. This PR fixes that by merging metrics from all eval datasets into one dict. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22536/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22536/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22536", "html_url": "https://github.com/huggingface/transformers/pull/22536", "diff_url": "https://github.com/huggingface/transformers/pull/22536.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22536.patch", "merged_at": 1680537837000 }
https://api.github.com/repos/huggingface/transformers/issues/22535
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22535/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22535/comments
https://api.github.com/repos/huggingface/transformers/issues/22535/events
https://github.com/huggingface/transformers/pull/22535
1,652,277,930
PR_kwDOCUB6oc5NgEva
22,535
[`T5`] Enable naive Pipeline Parallelism training for T5
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Similarly as https://github.com/huggingface/transformers/pull/22329 this PR enables training `T5` models in a "Naive Pipeline Parallelism" setup. What is termed as "Naive Pipeline Parallelism" is simply to spread the model across multiple GPUs and run naively the forward/backward pass by communicating the activations and gradients between each GPU. Without this fix, users will encounter device mismatch issues when training this model that has been loaded across multiple GPUs. Hence, the fix is to manually set the device of the `labels` to the same device as `lm_logits`. A simple snippet to reproduce the behaviour below (this needs to be run on a multi-gpu env): ```python import torch from transformers import AutoModelForSeq2SeqLM model_id = "google/flan-t5-base" model = AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map="balanced") print(set(model.hf_device_map.values())) # >>> {0, 1} dummy_input = torch.LongTensor([[1, 2, 3, 4, 5]]) loss = model(input_ids=dummy_input, labels=dummy_input).loss ``` Error trace: ```bash │ 1746 │ │ loss = None │ │ 1747 │ │ if labels is not None: │ │ 1748 │ │ │ loss_fct = CrossEntropyLoss(ignore_index=-100) │ │ ❱ 1749 │ │ │ loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1)) │ │ 1750 │ │ │ # TODO(thom): Add z_loss https://github.com/tensorflow/mesh/blob/fa19d69eafc │ │ 1751 │ │ │ │ 1752 │ │ if not return_dict: │ │ │ │ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/torch/nn/module │ │ s/module.py:1501 in _call_impl │ │ │ │ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │ │ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │ │ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │ │ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │ │ 1502 │ │ # Do not call functions when jit is used │ │ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │ │ 1504 │ │ backward_pre_hooks = [] │ │ │ │ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/torch/nn/module │ │ s/loss.py:1174 in forward │ │ │ │ 1171 │ │ self.label_smoothing = label_smoothing │ │ 1172 │ │ │ 1173 │ def forward(self, input: Tensor, target: Tensor) -> Tensor: │ │ ❱ 1174 │ │ return F.cross_entropy(input, target, weight=self.weight, │ │ 1175 │ │ │ │ │ │ │ ignore_index=self.ignore_index, reduction=self.reduction, │ │ 1176 │ │ │ │ │ │ │ label_smoothing=self.label_smoothing) │ │ 1177 │ │ │ │ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/torch/nn/functi │ │ onal.py:3029 in cross_entropy │ │ │ │ 3026 │ │ ) │ │ 3027 │ if size_average is not None or reduce is not None: │ │ 3028 │ │ reduction = _Reduction.legacy_get_string(size_average, reduce) │ │ ❱ 3029 │ return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(re │ │ 3030 │ │ 3031 │ │ 3032 def binary_cross_entropy( │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward) ``` cc @sgugger ## Related issues: https://github.com/huggingface/peft/issues/242 https://github.com/huggingface/peft/issues/205
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22535/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22535", "html_url": "https://github.com/huggingface/transformers/pull/22535", "diff_url": "https://github.com/huggingface/transformers/pull/22535.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22535.patch", "merged_at": 1680537338000 }
https://api.github.com/repos/huggingface/transformers/issues/22534
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22534/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22534/comments
https://api.github.com/repos/huggingface/transformers/issues/22534/events
https://github.com/huggingface/transformers/pull/22534
1,652,274,506
PR_kwDOCUB6oc5NgD_2
22,534
🌐 [i18n-KO] Translated `custom_models.mdx` to Korean
{ "login": "HanNayeoniee", "id": 33839093, "node_id": "MDQ6VXNlcjMzODM5MDkz", "avatar_url": "https://avatars.githubusercontent.com/u/33839093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HanNayeoniee", "html_url": "https://github.com/HanNayeoniee", "followers_url": "https://api.github.com/users/HanNayeoniee/followers", "following_url": "https://api.github.com/users/HanNayeoniee/following{/other_user}", "gists_url": "https://api.github.com/users/HanNayeoniee/gists{/gist_id}", "starred_url": "https://api.github.com/users/HanNayeoniee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanNayeoniee/subscriptions", "organizations_url": "https://api.github.com/users/HanNayeoniee/orgs", "repos_url": "https://api.github.com/users/HanNayeoniee/repos", "events_url": "https://api.github.com/users/HanNayeoniee/events{/privacy}", "received_events_url": "https://api.github.com/users/HanNayeoniee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think we should discuss whether to translate `configuration` to 구성.\r\nSince the configuration is the name of the class, I think it will be misunderstood when translated into Korean to understand that it is a class.\r\n\r\n`configuration`이 class의 이름이기 때문에 한국어로 번역 시 class임을 이해하지 못하도록 오해할 것 같습니다.", "> I think we should discuss whether to translate `configuration` to 구성. Since the configuration is the name of the class, I think it will be misunderstood when translated into Korean to understand that it is a class.\r\n> \r\n> `configuration`이 class의 이름이기 때문에 한국어로 번역 시 class임을 이해하지 못하도록 오해할 것 같습니다.\r\n\r\n저도 놓친 부분이네요. 클래스를 의미하는 부분은 `configuration`로, 클래스를 의미하지 않는 부분(본문에서 config라고 적힌 경우)은 `구성`으로 번역하는걸 검토해보겠습니다.", "May you please review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,680
1,682
1,681
CONTRIBUTOR
null
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니당 --> # What does this PR do? Translated the `custom_models.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- 제출 전 체크리스트로, 가짜연구소만의 체크리스트도 <details>로 감싸서 만들어두면 더 좋을 것 같아요. --> ## Who can review? <!-- 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22534/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22534/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22534", "html_url": "https://github.com/huggingface/transformers/pull/22534", "diff_url": "https://github.com/huggingface/transformers/pull/22534.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22534.patch", "merged_at": 1681731594000 }
https://api.github.com/repos/huggingface/transformers/issues/22533
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22533/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22533/comments
https://api.github.com/repos/huggingface/transformers/issues/22533/events
https://github.com/huggingface/transformers/pull/22533
1,652,198,463
PR_kwDOCUB6oc5NfzgG
22,533
🌐[i18n-KO] Translate `autoclass_tutorial` to Korean and Fix the typo of `quicktour`
{ "login": "gabrielwithappy", "id": 102908949, "node_id": "U_kgDOBiJEFQ", "avatar_url": "https://avatars.githubusercontent.com/u/102908949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabrielwithappy", "html_url": "https://github.com/gabrielwithappy", "followers_url": "https://api.github.com/users/gabrielwithappy/followers", "following_url": "https://api.github.com/users/gabrielwithappy/following{/other_user}", "gists_url": "https://api.github.com/users/gabrielwithappy/gists{/gist_id}", "starred_url": "https://api.github.com/users/gabrielwithappy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gabrielwithappy/subscriptions", "organizations_url": "https://api.github.com/users/gabrielwithappy/orgs", "repos_url": "https://api.github.com/users/gabrielwithappy/repos", "events_url": "https://api.github.com/users/gabrielwithappy/events{/privacy}", "received_events_url": "https://api.github.com/users/gabrielwithappy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "- need to add the english link of `Load pretrained instances with an AutoClass`\r\n- keep the `AutoClass` as a english", "squashed commit messages and check a final document result.\r\n", "@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd\r\nPlease review thie PR. \r\nThank you in advance.", "[Korean]\r\n음.. 좀 헷갈리네요. 링크가 안되는 이유가 뭔지 좀 알아봐야 될 것 같습니다.\r\n소스코드에 링크가 안걸리는게 문제 같습니다. 좀 더 찾아볼께요\r\n[English]\r\nI will check why the hyperlink does not work.\r\nI think I missed somthing on link code of source codes in the document", "Thank you.\r\nI found my source code links are wrong.\r\nI updated review action items and fix it @HanNayeoniee ", "May you please review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo", "Thanks for your contribution!" ]
1,680
1,681
1,680
CONTRIBUTOR
null
# What does this PR do? Translated the `autoclass_tutorial.mdx` file of the documentation to Korean and fix the typo of `quicktour` Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? <!-- 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo ## Review result - [x] fix a wrong source code link of functions in the document > Links for API documents are not activated. I checked other language documents have same problem. I think it will be fixed when API documents are translated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22533/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22533/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22533", "html_url": "https://github.com/huggingface/transformers/pull/22533", "diff_url": "https://github.com/huggingface/transformers/pull/22533.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22533.patch", "merged_at": 1680869555000 }
https://api.github.com/repos/huggingface/transformers/issues/22532
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22532/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22532/comments
https://api.github.com/repos/huggingface/transformers/issues/22532/events
https://github.com/huggingface/transformers/pull/22532
1,652,149,748
PR_kwDOCUB6oc5Nfo_Q
22,532
[`Trainer`] Force `is_model_parallel` when model is loaded in multiple GPUs using `accelerate`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you elaborate why is such a patch needed and what is the goal of your PR? Cause all of this seems very hacky.", "These hacks were needed because `self.place_model_on_device` [needs to be set to `True`](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L461-L469) in order for the `Trainer` to work correctly on a multi-GPU environment, i.e. with a model that has been loaded across multiple GPUs (so we're talking about Naive PP here). Otherwise users will encounter device mismatch between model's input/output.\r\n\r\nMoreover, modifying `place_model_on_device` directly on `TrainingArguments` seems to not work, as this argument seems to not be on the `__init__` of that class, and also it seems to me that it is better to not touch this attribute as it is a property method: https://github.com/huggingface/transformers/blob/9419f144ad6d5464afc3c9c65a23c6940f8dd9c2/src/transformers/training_args.py#L1801\r\n\r\nThat is why I preferred to introduce a new argument to avoid modifying what is already in place and modify directly what is needed to be edited, without having to modify the model's internals (forcing `model_parallel` to `True` on T5 models will call the deprecated `parallelize` API that leads to some bugs)", "_The documentation is not available anymore as the PR was closed or merged._", "Or you could just analyze the device map of the model and determine if there are several GPUs used. It would be cleaner and not require the user to learn the 97th training argument.", "Ahh yes good point!" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? When using the Trainer on a multi-GPU environment, users currently apply a patch that leads to some bugs. Before running a training they [need to call](https://github.com/huggingface/peft/issues/205#issuecomment-1491455711): ```python setattr(model, 'model_parallel', True) setattr(model, 'is_parallelizable', True) ``` Which can lead to unexpected bugs on some models, such as T5, that has the `parallelize` API that is still in place, thus when forcing `model_parallel` to be `True`, calls that API, which is deprecated and should not be maintained. Script to reproduce: ```python from datasets import load_dataset from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer, Trainer, TrainingArguments, DataCollatorForLanguageModeling from peft import prepare_model_for_int8_training,LoraConfig, get_peft_model causal_lm_model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained( causal_lm_model_id, load_in_8bit=True, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(causal_lm_model_id) model = prepare_model_for_int8_training(model) # setattr(model, 'model_parallel', True) # setattr(model, 'is_parallelizable', True) config = LoraConfig( r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) data = load_dataset("Abirate/english_quotes") data = data.map(lambda samples: tokenizer(samples["quote"]), batched=True) trainer = Trainer( model=model, train_dataset=data["train"], args=TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, warmup_steps=2, max_steps=3, learning_rate=2e-4, fp16=True, logging_steps=1, output_dir="outputs", ), data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False), ) trainer.is_model_parallel = True model.config.use_cache = False trainer.train() ``` cc @sgugger Related: https://github.com/huggingface/peft/issues/205
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22532/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22532/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22532", "html_url": "https://github.com/huggingface/transformers/pull/22532", "diff_url": "https://github.com/huggingface/transformers/pull/22532.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22532.patch", "merged_at": 1680534650000 }
https://api.github.com/repos/huggingface/transformers/issues/22531
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22531/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22531/comments
https://api.github.com/repos/huggingface/transformers/issues/22531/events
https://github.com/huggingface/transformers/pull/22531
1,652,055,124
PR_kwDOCUB6oc5NfUh5
22,531
[setup] drop deprecated `distutils` usage
{ "login": "XuehaiPan", "id": 16078332, "node_id": "MDQ6VXNlcjE2MDc4MzMy", "avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuehaiPan", "html_url": "https://github.com/XuehaiPan", "followers_url": "https://api.github.com/users/XuehaiPan/followers", "following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}", "gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions", "organizations_url": "https://api.github.com/users/XuehaiPan/orgs", "repos_url": "https://api.github.com/users/XuehaiPan/repos", "events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}", "received_events_url": "https://api.github.com/users/XuehaiPan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> but we are very happy with the setup as it is.\r\n\r\nThanks for the clarification. The `pyprojct.toml` format is the recommended packaging method in [PEP 517 – A build-system independent format for source trees](https://peps.python.org/pep-0517). I have reverted some of the commits but kept the first one. The `distutils` module is deprecated and will be removed in Python 3.12 (See also [PEP 632 – Deprecate distutils module](https://peps.python.org/pep-0632)). In this PR, I changed `distutils.core.Command` to `setuptools.Command`.", "I think you can suggest the changes thate removed the setup.cfg in a separate PR, it's a good cleanup (but not relevant to this PR anymore)\r\n\r\nFor migrating the setup.py to the pyproject, let's see with @LysandreJik what he thinks. My first reaction is to keep what's been working for us all these years ;-)", "Thanks for the clarification. Since we have already dropped Python 3.6 support, `setuptools` works very well with `pyproject.toml` based project. We can move the static parts in `setup.py` to `pyproject.toml`. Note that the optional dependencies are too dynamic, so we still need a `setup.py` file.\r\n\r\nMost Python utilities support `pyproject.toml` configuration (`black`, `isort`, `ruff`, `pytest`, ...). And some do not even support other config files like `setup.cfg` (e.g., `black`). I think maintaining configurations in a single file is a good practice. If you decide to migrate to `pyproject.toml`, pin me if I can help.", "Yes, moving all configurations to the pyproject.toml is something we would like to clean up. If you want to contribute it, please open a PR :-) Note that we kept the isort and flake8 configurations for a bit after our migration to ruff, but they can now be completely removed, so it would just be pytest if I'm not mistaken." ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Migrate setup script to `pyproject.toml` ([PEP 517 – A build-system independent format for source trees](https://peps.python.org/pep-0517)). Changes: - drop deprecated `distutils` usage - ~~migrate setup script to `pyproject.toml`~~ - ~~migrate `isort` and `pytest` configs to `pyproject.toml`~~ - ~~migrate `flake8` configs to `.flake8` and remove `setup.cfg` file~~ ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22531/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22531/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22531", "html_url": "https://github.com/huggingface/transformers/pull/22531", "diff_url": "https://github.com/huggingface/transformers/pull/22531.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22531.patch", "merged_at": 1680537865000 }
https://api.github.com/repos/huggingface/transformers/issues/22530
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22530/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22530/comments
https://api.github.com/repos/huggingface/transformers/issues/22530/events
https://github.com/huggingface/transformers/issues/22530
1,651,973,167
I_kwDOCUB6oc5idxwv
22,530
Multiple eval datasets can only use last dataset for best checkpoint
{ "login": "hawkeoni", "id": 27156990, "node_id": "MDQ6VXNlcjI3MTU2OTkw", "avatar_url": "https://avatars.githubusercontent.com/u/27156990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hawkeoni", "html_url": "https://github.com/hawkeoni", "followers_url": "https://api.github.com/users/hawkeoni/followers", "following_url": "https://api.github.com/users/hawkeoni/following{/other_user}", "gists_url": "https://api.github.com/users/hawkeoni/gists{/gist_id}", "starred_url": "https://api.github.com/users/hawkeoni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hawkeoni/subscriptions", "organizations_url": "https://api.github.com/users/hawkeoni/orgs", "repos_url": "https://api.github.com/users/hawkeoni/repos", "events_url": "https://api.github.com/users/hawkeoni/events{/privacy}", "received_events_url": "https://api.github.com/users/hawkeoni/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's possible there is a bug, so please open a PR if you think you have the right fix!" ]
1,680
1,680
1,680
CONTRIBUTOR
null
I have a setup where I evaluate the model on several datasets and only the metrics from the last dataset can be used. The [code from Trainer](https://github.com/huggingface/transformers/blob/559a45d1dc1f46d6e9942cdc9ff5eef5a811a59d/src/transformers/trainer.py#L2234) looks like: ```python if self.control.should_evaluate: if isinstance(self.eval_dataset, dict): for eval_dataset_name, eval_dataset in self.eval_dataset.items(): metrics = self.evaluate( eval_dataset=eval_dataset, ignore_keys=ignore_keys_for_eval, metric_key_prefix=f"eval_{eval_dataset_name}", ) else: metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) self._report_to_hp_search(trial, self.state.global_step, metrics) if self.control.should_save: self._save_checkpoint(model, trial, metrics=metrics) ```` Only the last metric is used, when datasets are passed as a `Dict[str, Dataset]` Is this a bug? A possible fix: ````python if self.control.should_evaluate: if isinstance(self.eval_dataset, dict): metrics = {} for eval_dataset_name, eval_dataset in self.eval_dataset.items(): dataset_metrics = self.evaluate( eval_dataset=eval_dataset, ignore_keys=ignore_keys_for_eval, metric_key_prefix=f"eval_{eval_dataset_name}", ) metrics.update(dataset_metrics) else: metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) self._report_to_hp_search(trial, self.state.global_step, metrics) if self.control.should_save: self._save_checkpoint(model, trial, metrics=metrics) ``` ```` Please, close this if this is the intended behavior. If it's not I can submit a pr with fixes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22530/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22529
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22529/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22529/comments
https://api.github.com/repos/huggingface/transformers/issues/22529/events
https://github.com/huggingface/transformers/issues/22529
1,651,960,372
I_kwDOCUB6oc5iduo0
22,529
Intel macOS system with AMD 6900XT GPU, using MPS: cannot get any usable result back from any model
{ "login": "TheBloke", "id": 784313, "node_id": "MDQ6VXNlcjc4NDMxMw==", "avatar_url": "https://avatars.githubusercontent.com/u/784313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheBloke", "html_url": "https://github.com/TheBloke", "followers_url": "https://api.github.com/users/TheBloke/followers", "following_url": "https://api.github.com/users/TheBloke/following{/other_user}", "gists_url": "https://api.github.com/users/TheBloke/gists{/gist_id}", "starred_url": "https://api.github.com/users/TheBloke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheBloke/subscriptions", "organizations_url": "https://api.github.com/users/TheBloke/orgs", "repos_url": "https://api.github.com/users/TheBloke/repos", "events_url": "https://api.github.com/users/TheBloke/events{/privacy}", "received_events_url": "https://api.github.com/users/TheBloke/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just did some mores searching and realised that -9223372036854775808 is \"the smallest value that can be stored in a 64-bit signed integer\" and that this issue looks the same as the Pytorch MPS issue reported here: https://github.com/pytorch/pytorch/issues/92311\r\n\r\nIn that thread, someone reported this workaround:\r\n> Just replace argmax(...) with max(...).indices for instance replace output.argmax(dim=1) with output.max(dim=1).indices\r\n\r\nI don't know if this helps me here as I'm not running any PyTorch code directly, but rather calling it through Transformers.\r\n\r\nAnyway I guess this likely shows this isn't a transformers issue but is in PyTorch, and has already been reported. In which case, apologies for not noticing this before reporting this. ", "yea it looks like this is something that needs to be fixed on the PyTorch side. We can't just change the code of `generate` on our side to accommodate MPS devices and this is clearly a bug in PyTorch. So we just have to wait a bit for them to fix it.", "Understood, thanks for the quick reply." ]
1,680
1,680
1,680
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.1.0.dev20230402 (False) says false but I am using PyTorch on GPU via `mps` - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No OS: macOS x64 Ventura 13.3 Hardware: Intel system with AMD 6900XT GPU ### Who can help? @sgugger (possibly PyTorch related?) ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction My goal is to try models like Alpaca and GPT4ALL on my home Intel macOS system using my AMD 6900XT GPU. I tried and failed to get tloen's Alpaca Lora UI and the LLaMa_MPS project running (details below). In investigating this, I have run this simple test script I wrote to test using MPS to get a result from `t5-small`. 
As a base for the code I used https://github.com/huggingface/transformers/issues/22122#issuecomment-1475302212 as the discussions in that thread indicated it should work fine: ``` import torch import transformers from transformers import T5ForConditionalGeneration, AutoTokenizer print("PyTorch version: ", torch.__version__) print("transformers version: ", transformers.__version__) print() tokenizer = AutoTokenizer.from_pretrained('t5-small', model_max_length=512) input_string = 'translate English to German: "The house is wonderful."' print("Input string:", input_string) ## On CPU print("Trying CPU") model_cpu = T5ForConditionalGeneration.from_pretrained('t5-small', device_map='auto') print("Running on: ", model_cpu.device) inputs = tokenizer(input_string, return_tensors='pt').input_ids outputs = model_cpu.generate(inputs, max_length=200) print("Decoded Output: ", tokenizer.decode(outputs[0])) print("raw output: ", outputs) ## On MPS print() print("Trying mps") model_mps = T5ForConditionalGeneration.from_pretrained('t5-small') model_mps = model_mps.to('mps') print("Running on: ", model_mps.device) inputs_mps = tokenizer(input_string, return_tensors='pt').input_ids inputs_mps = inputs_mps.to('mps') outputs = model_mps.generate(inputs_mps, max_length=200) try: print("Decoded Output: ", tokenizer.decode(outputs[0])) except Exception as e: print(e) print("raw output: ", outputs) ``` This produces the following result; CPU works fine, MPS produces a strange, repeating and very long result which throws an exception when being decoded: ``` PyTorch version: 2.1.0.dev20230402 transformers version: 4.28.0.dev0 Input string: translate English to German: "The house is wonderful." 
Trying CPU Running on: cpu Decoded Output: <pad> "Das Haus ist wunderbar."</s> raw output: tensor([[ 0, 96, 17266, 4598, 229, 19250, 535, 1]]) Trying mps Running on: mps:0 out of range integral type conversion attempted raw output: tensor([[ 0, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, 
-9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, 
-9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808, -9223372036854775808]], device='mps:0') ``` Before trying this script, I first tried tloen's alpaca-lora GUI which has MPS support (https://huggingface.co/spaces/tloen/alpaca-lora/blob/main/app.py), and LLaMA_MPS (https://github.com/jankais3r/LLaMA_MPS) Both of these exhibit the same or a very similar problem: the code appears to run fine, it uses my AMD 6900XT GPU (as detected via macOS Activity Manager), but I either get no output at all (alpaca-lora), or the output is corrupted, showing each token as a `??` symbol (LLaMa_MPS). I am an AI newbie so I'm unsure how to try and debug this but I am pretty sure that all three of these examples are exhibiting the same problem. I don't know if it's an issue in transformers or in PyTorch which provides the MPS backend, so I thought I'd start here. Note: I'm running PyTorch 2.1 dev version because trying 2.0.0 with alpaca-lora gave me the error `RuntimeError: MPS does not support cumsum op with int64 input` - this was fixed by updating to 2.1-dev and Ventura 13.3. I have tested LLaMa_MPS with PyTorch 2.0.0 with the same result, so I don't believe it's specific to 2.1-dev. Thanks in advance for any help. ### Expected behavior The mps code shown above should output the same result as the CPU code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22529/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22528
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22528/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22528/comments
https://api.github.com/repos/huggingface/transformers/issues/22528/events
https://github.com/huggingface/transformers/pull/22528
1,651,867,215
PR_kwDOCUB6oc5Nerz9
22,528
Add DePlot + MatCha on `transformers`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "All models have been moved to Google org and model cards updated correctly! This PR is ready for review cc @sgugger " ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Adds [MatCha](https://arxiv.org/pdf/2303.18223.pdf) and [DePlot](https://arxiv.org/pdf/2212.10505.pdf) to `transformers`. These are two different papers from Google AI, both fully based on `Pix2Struct`. Model weights: - https://huggingface.co/ybelkada/deplot - https://huggingface.co/ybelkada/matcha-base - https://huggingface.co/ybelkada/matcha-chart2text-pew - https://huggingface.co/ybelkada/matcha-chart2text-statista - https://huggingface.co/ybelkada/matcha-plotqa-v1 - https://huggingface.co/ybelkada/matcha-plotqa-v2 I will move them to Google org once I have double-checked the model card contents with the authors. EDIT: all the weights have been moved
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22528/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22528", "html_url": "https://github.com/huggingface/transformers/pull/22528", "diff_url": "https://github.com/huggingface/transformers/pull/22528.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22528.patch", "merged_at": 1680709428000 }
https://api.github.com/repos/huggingface/transformers/issues/22527
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22527/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22527/comments
https://api.github.com/repos/huggingface/transformers/issues/22527/events
https://github.com/huggingface/transformers/pull/22527
1,651,707,843
PR_kwDOCUB6oc5NeJFy
22,527
[Pix2struct] Simplify generation
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "PR is ready for review, however checkpoints on the hub will need to be updated (`is_encoder_decoder` = True) for this PR to be merged", "PR is ready, models on the hub don't need to be updated since they don't have `is_encoder_decoder` set on the model config level (i.e. `Pix2StructConfig`. They have set it only in `Pix2StructTextConfig`). cc @younesbelkada " ]
1,680
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? This PR aims to fix the warning that is currently printed out when generating text with Pix2Struct: ``` A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer. ``` I see that all Pix2Struct models have `config.is_encoder_decoder=False`, but as Pix2Struct is an encoder-decoder model it'd be great/more logical to have this argument set to `True` and instead overwrite `prepare_inputs_for_generation` to have a cleaner way of generating text. This also lets us get rid of the warning. To do: - [ ] for the moment there is still one integration test failing (`test_batched_inference_image_captioning_conditioned`): ``` AssertionError: 'An photography of the Temple Bar and a collection of other items.' != 'An photography of the Temple Bar and a few other places.' E - An photography of the Temple Bar and a collection of other items. E ? ^^^^ ^^^^^^^^ ^^ - E + An photography of the Temple Bar and a few other places. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22527/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22527", "html_url": "https://github.com/huggingface/transformers/pull/22527", "diff_url": "https://github.com/huggingface/transformers/pull/22527.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22527.patch", "merged_at": 1681390875000 }
https://api.github.com/repos/huggingface/transformers/issues/22526
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22526/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22526/comments
https://api.github.com/repos/huggingface/transformers/issues/22526/events
https://github.com/huggingface/transformers/pull/22526
1,651,628,848
PR_kwDOCUB6oc5Nd4KW
22,526
Fix convert_opt_original_pytorch_checkpoint_to_pytorch.py typo
{ "login": "larekrow", "id": 127832774, "node_id": "U_kgDOB56Sxg", "avatar_url": "https://avatars.githubusercontent.com/u/127832774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/larekrow", "html_url": "https://github.com/larekrow", "followers_url": "https://api.github.com/users/larekrow/followers", "following_url": "https://api.github.com/users/larekrow/following{/other_user}", "gists_url": "https://api.github.com/users/larekrow/gists{/gist_id}", "starred_url": "https://api.github.com/users/larekrow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/larekrow/subscriptions", "organizations_url": "https://api.github.com/users/larekrow/orgs", "repos_url": "https://api.github.com/users/larekrow/repos", "events_url": "https://api.github.com/users/larekrow/events{/privacy}", "received_events_url": "https://api.github.com/users/larekrow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? `load_checkpoint()` silently fails because `".qkj_proj." in key` is always `False`, but will eventually cause an error at `model.load_state_dict(state_dict)`. This PR fixes the typo that causes this issue. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22526/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22526", "html_url": "https://github.com/huggingface/transformers/pull/22526", "diff_url": "https://github.com/huggingface/transformers/pull/22526.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22526.patch", "merged_at": 1680530813000 }
https://api.github.com/repos/huggingface/transformers/issues/22525
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22525/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22525/comments
https://api.github.com/repos/huggingface/transformers/issues/22525/events
https://github.com/huggingface/transformers/pull/22525
1,651,583,605
PR_kwDOCUB6oc5NdunS
22,525
Update convert_llama_weights_to_hf.py
{ "login": "Ricardokevins", "id": 43642508, "node_id": "MDQ6VXNlcjQzNjQyNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/43642508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ricardokevins", "html_url": "https://github.com/Ricardokevins", "followers_url": "https://api.github.com/users/Ricardokevins/followers", "following_url": "https://api.github.com/users/Ricardokevins/following{/other_user}", "gists_url": "https://api.github.com/users/Ricardokevins/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ricardokevins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ricardokevins/subscriptions", "organizations_url": "https://api.github.com/users/Ricardokevins/orgs", "repos_url": "https://api.github.com/users/Ricardokevins/repos", "events_url": "https://api.github.com/users/Ricardokevins/events{/privacy}", "received_events_url": "https://api.github.com/users/Ricardokevins/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Well this was bound to disappear with #22402 😅" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Fix problem mentioned in https://github.com/huggingface/transformers/issues/22287 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22525/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22525", "html_url": "https://github.com/huggingface/transformers/pull/22525", "diff_url": "https://github.com/huggingface/transformers/pull/22525.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22525.patch", "merged_at": 1680514900000 }
https://api.github.com/repos/huggingface/transformers/issues/22524
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22524/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22524/comments
https://api.github.com/repos/huggingface/transformers/issues/22524/events
https://github.com/huggingface/transformers/issues/22524
1,651,518,628
I_kwDOCUB6oc5icCyk
22,524
I want the 4.28.0.dev0 version of transformers
{ "login": "gyh123wqe", "id": 129729242, "node_id": "U_kgDOB7uC2g", "avatar_url": "https://avatars.githubusercontent.com/u/129729242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gyh123wqe", "html_url": "https://github.com/gyh123wqe", "followers_url": "https://api.github.com/users/gyh123wqe/followers", "following_url": "https://api.github.com/users/gyh123wqe/following{/other_user}", "gists_url": "https://api.github.com/users/gyh123wqe/gists{/gist_id}", "starred_url": "https://api.github.com/users/gyh123wqe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gyh123wqe/subscriptions", "organizations_url": "https://api.github.com/users/gyh123wqe/orgs", "repos_url": "https://api.github.com/users/gyh123wqe/repos", "events_url": "https://api.github.com/users/gyh123wqe/events{/privacy}", "received_events_url": "https://api.github.com/users/gyh123wqe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "you can pip install with source code" ]
1,680
1,680
1,680
NONE
null
### Feature request I want the 4.28.0.dev0 version of transformers ### Motivation I want the 4.28.0.dev0 version of transformers ### Your contribution I want the 4.28.0.dev0 version of transformers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22524/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 2, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22524/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22523
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22523/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22523/comments
https://api.github.com/repos/huggingface/transformers/issues/22523/events
https://github.com/huggingface/transformers/issues/22523
1,651,491,183
I_kwDOCUB6oc5ib8Fv
22,523
Each list in `nested_token_ids` can't be a complete subset of another list, but is
{ "login": "lovodkin93", "id": 57570615, "node_id": "MDQ6VXNlcjU3NTcwNjE1", "avatar_url": "https://avatars.githubusercontent.com/u/57570615?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lovodkin93", "html_url": "https://github.com/lovodkin93", "followers_url": "https://api.github.com/users/lovodkin93/followers", "following_url": "https://api.github.com/users/lovodkin93/following{/other_user}", "gists_url": "https://api.github.com/users/lovodkin93/gists{/gist_id}", "starred_url": "https://api.github.com/users/lovodkin93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lovodkin93/subscriptions", "organizations_url": "https://api.github.com/users/lovodkin93/orgs", "repos_url": "https://api.github.com/users/lovodkin93/repos", "events_url": "https://api.github.com/users/lovodkin93/events{/privacy}", "received_events_url": "https://api.github.com/users/lovodkin93/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante", "Hey @lovodkin93 👋 the constraints feature in beam search is experimental, so our efforts are currently limited to fixing bugs. \r\n\r\nIf you'd like to add the feature yourself, go for it :) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### Feature request Enable passing disjunctive constraints (https://github.com/huggingface/transformers/blob/main/src/transformers/generation/beam_constraints.py#L261) where one is a subset of the other ### Motivation In the constrained beam decoding feature, specifically in the case of disjunctive constraints, currently there is no option for one disjunctive constraint to be a subset of the other, as can be seen here: https://github.com/huggingface/transformers/blob/main/src/transformers/generation/beam_constraints.py#L220 But, in many cases, this is exactly the case. For example, if I wanted to consider all the inflections of the verb "sentence": ["sentence", "sentences", "sentenced", "sentencing"], then the tokenizer separates "sentenced" into ["sentence", "d"], which means that "sentence" is a subset of "sentenced". ### Your contribution N/A
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22523/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22522
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22522/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22522/comments
https://api.github.com/repos/huggingface/transformers/issues/22522/events
https://github.com/huggingface/transformers/pull/22522
1,651,376,457
PR_kwDOCUB6oc5NdCQC
22,522
Update docs for assigning path to all_video_file_paths
{ "login": "hom-bahrani", "id": 8465628, "node_id": "MDQ6VXNlcjg0NjU2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/8465628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hom-bahrani", "html_url": "https://github.com/hom-bahrani", "followers_url": "https://api.github.com/users/hom-bahrani/followers", "following_url": "https://api.github.com/users/hom-bahrani/following{/other_user}", "gists_url": "https://api.github.com/users/hom-bahrani/gists{/gist_id}", "starred_url": "https://api.github.com/users/hom-bahrani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hom-bahrani/subscriptions", "organizations_url": "https://api.github.com/users/hom-bahrani/orgs", "repos_url": "https://api.github.com/users/hom-bahrani/repos", "events_url": "https://api.github.com/users/hom-bahrani/events{/privacy}", "received_events_url": "https://api.github.com/users/hom-bahrani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22522). All of your documentation changes will be reflected on that endpoint.", "@amyeroberts yes you're right, the dataset is downloaded to the .cache directory and extracted into local directory. I have updated accordingly and tested in colab to confirm that its working \r\n\r\n<img width=\"1387\" alt=\"Screenshot 2023-04-05 at 16 06 57\" src=\"https://user-images.githubusercontent.com/8465628/230124560-1eef5951-70f3-4b10-b4a8-4d1766bcb531.png\">\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
# What does this PR do? Updating a few lines in the video classification tasks guide as, the way it's written, it seems that we are not actually iterating over the file paths, but rather the string. Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22522/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22522", "html_url": "https://github.com/huggingface/transformers/pull/22522", "diff_url": "https://github.com/huggingface/transformers/pull/22522.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22522.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22521
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22521/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22521/comments
https://api.github.com/repos/huggingface/transformers/issues/22521/events
https://github.com/huggingface/transformers/issues/22521
1,651,341,518
I_kwDOCUB6oc5ibXjO
22,521
Codeparrot Humaneval metric error?
{ "login": "Keysmis", "id": 9586803, "node_id": "MDQ6VXNlcjk1ODY4MDM=", "avatar_url": "https://avatars.githubusercontent.com/u/9586803?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Keysmis", "html_url": "https://github.com/Keysmis", "followers_url": "https://api.github.com/users/Keysmis/followers", "following_url": "https://api.github.com/users/Keysmis/following{/other_user}", "gists_url": "https://api.github.com/users/Keysmis/gists{/gist_id}", "starred_url": "https://api.github.com/users/Keysmis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Keysmis/subscriptions", "organizations_url": "https://api.github.com/users/Keysmis/orgs", "repos_url": "https://api.github.com/users/Keysmis/repos", "events_url": "https://api.github.com/users/Keysmis/events{/privacy}", "received_events_url": "https://api.github.com/users/Keysmis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @lvwerra ", "Hi @Keysmis can you report the arguments you used for the script? And what results did you get? We updated some models and maybe we didn't update the reported metrics everywhere. cc @loubnabnl ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
Hi~, I tried to reproduce the metrics you reported by running transformers/examples/research_projects/codeparrot/scripts/human_eval.py, and the model is codeparrot-small, but the results have significant deviations. Does anyone could reproduce the results?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22521/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22520
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22520/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22520/comments
https://api.github.com/repos/huggingface/transformers/issues/22520/events
https://github.com/huggingface/transformers/issues/22520
1,651,143,479
I_kwDOCUB6oc5ianM3
22,520
Llama Tokenizer uses incorrect indices for PAD
{ "login": "michaelroyzen", "id": 45830328, "node_id": "MDQ6VXNlcjQ1ODMwMzI4", "avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelroyzen", "html_url": "https://github.com/michaelroyzen", "followers_url": "https://api.github.com/users/michaelroyzen/followers", "following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}", "gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions", "organizations_url": "https://api.github.com/users/michaelroyzen/orgs", "repos_url": "https://api.github.com/users/michaelroyzen/repos", "events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelroyzen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker -- is this fixed in #22402 ?", "This is probably not gonna be fixed with regard to the `configuration_llama`. \r\nHowever note that having the `sp_model` sending `-1` as a pad token means that it does not have any indices. Llama does not use a padding token. \r\nThe fix that we provide is that in the `tokenization_llama` the `pad_token` is set to `None`. \r\n\r\nThe config should be fixed to ensure that `pad_token=None` rather than `pad_token = 0`", "Thanks @gante @ArthurZucker \r\n\r\nWhat about the mismatch between the eos and bos tokens? Or is HF's tokenizer zero-indexed while Meta's native tokenizer is one-indexed?", "As you said and showed using the `sp_model`, \r\n> bos_id: 1\r\neos_id: 2\r\n\r\nIf you instantiate the tokenizer using [this](https://huggingface.co/hf-internal-testing/llama-tokenizer/tree/main) for example, it has the same ids so I am not sure I follow the problem? ", "@ArthurZucker if I want to batching then I have to manually add a pad_token. In this case how do I ensure that the pad_token_id is actually correct? I.e how do I get the tokenizer to set pad_tokens to 0 instead of 32000 that I am getting now, by using add_special_tokens like `add_special_tokens({'pad_token': '[PAD]'})`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "The problem is that `0` is already the `unk` token. The easiest way is to set the pad token to the unk token.\r\n" ]
1,680
1,684
1,684
NONE
null
### System Info latest transformer main ### Who can help? @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction configuration_llama.py sets ``` pad_token_id=0, bos_token_id=1, eos_token_id=2 ``` but this is wrong. After checking the original tokenizer from FB, ```python sp_model = SentencePieceProcessor(model_file="/home/ubuntu/llama/tokenizer.model") print("bos_id: ", sp_model.bos_id()) print("eos_id: ", sp_model.eos_id()) print("pad_id: ", sp_model.pad_id()) ``` we see that ``` bos_id: 1 eos_id: 2 pad_id: -1 ``` ### Expected behavior ``` bos_id: 1 eos_id: 2 pad_id: -1 ``` instead of ``` pad_token_id=0, bos_token_id=1, eos_token_id=2 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22520/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22519
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22519/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22519/comments
https://api.github.com/repos/huggingface/transformers/issues/22519/events
https://github.com/huggingface/transformers/issues/22519
1,651,107,794
I_kwDOCUB6oc5iaefS
22,519
Grabbing the output for all Convolution layers in Wav2VecForCTC Model
{ "login": "priyammaz", "id": 60265010, "node_id": "MDQ6VXNlcjYwMjY1MDEw", "avatar_url": "https://avatars.githubusercontent.com/u/60265010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/priyammaz", "html_url": "https://github.com/priyammaz", "followers_url": "https://api.github.com/users/priyammaz/followers", "following_url": "https://api.github.com/users/priyammaz/following{/other_user}", "gists_url": "https://api.github.com/users/priyammaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/priyammaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/priyammaz/subscriptions", "organizations_url": "https://api.github.com/users/priyammaz/orgs", "repos_url": "https://api.github.com/users/priyammaz/repos", "events_url": "https://api.github.com/users/priyammaz/events{/privacy}", "received_events_url": "https://api.github.com/users/priyammaz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi and @ArthurZucker ", "Hey @priyammaz - you can append the hidden-states for each layer to a tuple in the same way that we do for the Wav2Vec2Encoder.\r\n\r\nIn the forward call of `Wav2Vec2FeatureEncoder`:\r\n```python\r\n def forward(self, input_values, output_hidden_states=False):\r\n all_hidden_states = () if output_hidden_states else None\r\n hidden_states = input_values[:, None]\r\n\r\n # make sure hidden_states require grad for gradient_checkpointing\r\n if self._requires_grad and self.training:\r\n hidden_states.requires_grad = True\r\n\r\n for conv_layer in self.conv_layers:\r\n if output_hidden_states:\r\n all_hidden_states = all_hidden_states + (hidden_states,)\r\n if self._requires_grad and self.gradient_checkpointing and self.training:\r\n\r\n def create_custom_forward(module):\r\n def custom_forward(*inputs):\r\n return module(*inputs)\r\n\r\n return custom_forward\r\n\r\n hidden_states = torch.utils.checkpoint.checkpoint(\r\n create_custom_forward(conv_layer),\r\n hidden_states,\r\n )\r\n else:\r\n hidden_states = conv_layer(hidden_states)\r\n\r\n if output_hidden_states:\r\n all_hidden_states = all_hidden_states + (hidden_states,)\r\n\r\n return BaseModelOutput(\r\n last_hidden_state=hidden_states, hidden_states=all_hidden_states,\r\n )\r\n```\r\n\r\nIn the forward call of `Wav2Vec2Model`:\r\n```python\r\n def forward(\r\n self,\r\n input_values: Optional[torch.Tensor],\r\n attention_mask: Optional[torch.Tensor] = None,\r\n mask_time_indices: Optional[torch.FloatTensor] = None,\r\n output_attentions: Optional[bool] = None,\r\n output_hidden_states: Optional[bool] = None,\r\n return_dict: Optional[bool] = None,\r\n ) -> Union[Tuple, Wav2Vec2BaseModelOutput]:\r\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n output_hidden_states = (\r\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n )\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n\r\n conv_features = self.feature_extractor(input_values, output_hidden_states=output_hidden_states)\r\n extract_features = conv_features[0].transpose(1, 2)\r\n\r\n if attention_mask is not None:\r\n # compute reduced attention_mask corresponding to feature vectors\r\n attention_mask = self._get_feature_vector_attention_mask(\r\n extract_features.shape[1], attention_mask, add_adapter=False\r\n )\r\n\r\n hidden_states, extract_features = self.feature_projection(extract_features)\r\n hidden_states = self._mask_hidden_states(\r\n hidden_states, mask_time_indices=mask_time_indices, attention_mask=attention_mask\r\n )\r\n\r\n encoder_outputs = self.encoder(\r\n hidden_states,\r\n attention_mask=attention_mask,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n )\r\n\r\n hidden_states = encoder_outputs[0]\r\n\r\n if self.adapter is not None:\r\n hidden_states = self.adapter(hidden_states)\r\n\r\n if not return_dict:\r\n return (hidden_states, extract_features) + encoder_outputs[1:]\r\n\r\n all_hidden_states = conv_features[1] + encoder_outputs.hidden_states if output_hidden_states else None\r\n\r\n return Wav2Vec2BaseModelOutput(\r\n last_hidden_state=hidden_states,\r\n extract_features=extract_features,\r\n hidden_states=all_hidden_states,\r\n attentions=encoder_outputs.attentions,\r\n )\r\n```\r\n\r\nAll in all, it looks something like this: https://github.com/sanchit-gandhi/codesnippets/blob/main/modeling_wav2vec2_with_conv_states.py\r\n", "ThanK you so much! I will give this a try this weekend and let you know if I am stuck anywhere, I am still learning the HuggingFace platform!", "This worked perfectly thank you so much! ", "Cool! Glad to hear that @priyammaz! Thinking about it more, we also apply a feature projection after the last CNN layer:\r\nhttps://github.com/sanchit-gandhi/codesnippets/blob/cb6a463b2b948a78081b382d51c062ca0ae8de31/modeling_wav2vec2_with_conv_states.py#L1324\r\nThis feature projection is essentially just layer norm followed by a linear layer:\r\nhttps://github.com/sanchit-gandhi/codesnippets/blob/cb6a463b2b948a78081b382d51c062ca0ae8de31/modeling_wav2vec2_with_conv_states.py#L485\r\nYou may also want to return the output of this feature projection layer if it's of interest to your research (you can do so simply by appending the outputs to our tuple of `output_hidden_states` as we do for the conv layer outputs:\r\nhttps://github.com/sanchit-gandhi/codesnippets/blob/cb6a463b2b948a78081b382d51c062ca0ae8de31/modeling_wav2vec2_with_conv_states.py#L1345 ", "Thanks for the info! I will definitely give that a try!" ]
1,680
1,681
1,681
NONE
null
### Feature request I want to be able to grab the output of all 7 convolution blocks from the Wav2VecForCTC model but I cant think of a way to do it. I tried to update the forward function of the Wav2Vec2FeatureEncoder with a new attribute that stores each hidden state of the convolution to a list but the moment I load the default pretrained model, the attribute no longer exists. ### Motivation I am working on model explainability and the option to grab outputs of each convolution at every step would allow me to do a deeper dive of how the model interprets different phonemes. ### Your contribution I am happy to help out how I can! Im not really sure where to even being though with this, maybe there is something simple that I am missing?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22519/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22518
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22518/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22518/comments
https://api.github.com/repos/huggingface/transformers/issues/22518/events
https://github.com/huggingface/transformers/pull/22518
1,650,989,416
PR_kwDOCUB6oc5NbzTd
22,518
Add ViViT
{ "login": "jegork", "id": 43540177, "node_id": "MDQ6VXNlcjQzNTQwMTc3", "avatar_url": "https://avatars.githubusercontent.com/u/43540177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jegork", "html_url": "https://github.com/jegork", "followers_url": "https://api.github.com/users/jegork/followers", "following_url": "https://api.github.com/users/jegork/following{/other_user}", "gists_url": "https://api.github.com/users/jegork/gists{/gist_id}", "starred_url": "https://api.github.com/users/jegork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jegork/subscriptions", "organizations_url": "https://api.github.com/users/jegork/orgs", "repos_url": "https://api.github.com/users/jegork/repos", "events_url": "https://api.github.com/users/jegork/events{/privacy}", "received_events_url": "https://api.github.com/users/jegork/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@amyeroberts thank you for your comments in the previous PR! I have addressed your suggestions and also added the image processor test. \r\n\r\nHowever, I ran `make style`, however the pipeline was failing at check_code_quality. Therefore I've updated the testing dependencies which lead to black updating. But! When I run `make style` again, it gives the following output:\r\n\r\n```sh\r\nreformatted transformers/examples/research_projects/deebert/src/modeling_highway_bert.py\r\nreformatted transformers/examples/research_projects/movement-pruning/emmental/modeling_bert_masked.py\r\nreformatted transformers/src/transformers/models/reformer/modeling_reformer.py\r\nreformatted transformers/src/transformers/models/vivit/modeling_vivit.py\r\nreformatted transformers/tests/models/vivit/test_image_processing_vivit.py\r\n```\r\n\r\nSo it fixes not only the files from this PR, but also the already existing ones. Therefore I have a question: should I only pus reformatted files from this PR or all?\r\n\r\n", "@jegork The files listed e.g. `transformers/examples/research_projects/deebert/src/modeling_highway_bert.py` should have the most recent formatting applied and shouldn't need to be updated with this PR. \r\n\r\nCould you rebase on main, make sure the most recent formatting packages are installed using `pip install -e .[quality]` and try `make style` again? ", "@amyeroberts Thanks for your comments. I have addressed everything. However, I still get the same problems with `make style` \r\nI did `git fetch upstream`, then `git rebase upstream/main`, `pip install -e \".[quality]\"` after which I ran `make style`\r\n\r\nWhich resulted in the following output at the end:\r\n```sh\r\nblack examples tests src utils setup.py\r\nSkipping .ipynb files as Jupyter dependencies are not installed.\r\nYou can fix this by running ``pip install \"black[jupyter]\"``\r\nreformatted /Users/jegorkitskerkin/Documents/projects/transformers/examples/research_projects/deebert/src/modeling_highway_bert.py\r\nreformatted /Users/jegorkitskerkin/Documents/projects/transformers/examples/research_projects/movement-pruning/emmental/modeling_bert_masked.py\r\nreformatted /Users/jegorkitskerkin/Documents/projects/transformers/src/transformers/models/vivit/modeling_vivit.py\r\nreformatted /Users/jegorkitskerkin/Documents/projects/transformers/src/transformers/models/reformer/modeling_reformer.py\r\nreformatted /Users/jegorkitskerkin/Documents/projects/transformers/tests/models/vivit/test_image_processing_vivit.py\r\n\r\nAll done! ✨ 🍰 ✨\r\n5 files reformatted, 2380 files left unchanged.\r\nruff examples tests src utils setup.py --fix\r\n/Library/Developer/CommandLineTools/usr/bin/make autogenerate_code\r\nrunning deps_table_update\r\nupdating src/transformers/dependency_versions_table.py\r\n/Library/Developer/CommandLineTools/usr/bin/make extra_style_checks\r\npython utils/custom_init_isort.py\r\npython utils/sort_auto_mappings.py\r\ndoc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source\r\nOverwriting content of src/transformers/models/vivit/modeling_vivit.py.\r\nCleaned 1 files!\r\npython utils/check_doc_toc.py --fix_and_overwrite\r\n```\r\n\r\nAs you can see, the same unrelated-to-this-PR-files are getting formatted", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "hey @amyeroberts i've managed to fix the formatting-related problems and addressed your comments. However, I seem to be facing some problems with running tests. CI fails with \r\n```\r\nFAILED tests/models/vivit/test_modeling_vivit.py::VivitModelTest::test_model_outputs_equivalence - Failed: Timeout >120.0s\r\n```\r\n\r\nand as I see, `test_model_outputs_equivalence` comes from the `ModelTesterMixin` so I am not sure how to handle this ", "@jegork Mmmm, indeed that's odd. It's not immediately clear from the CI traceback why that would happen. \r\n\r\nAre you able to run the tests locally and do they pass?:\r\n\r\n```\r\nRUN_SLOW=1 tests/models/vivit/test_modeling_vivit.py::VivitModelTest::test_model_outputs_equivalence\r\n```\r\n\r\n", "@amyeroberts yep, everything works and passes locally", "@jegork Thanks for confirming. I'm going to rerun CircleCI in case there was just some transient issue with the run. If it persists we can dig a bit more into it. ", "Thanks @amyeroberts and @jegork for working on this, we look forward to using the ViVit model!", "@jegork Is the PR OK to merge? Or are there any other commits you'd like to push before I press the big green button? 🟢 ", "@amyeroberts I think it's ready to be merged. Thanks for your help!\r\n", "Hi @jegork congrats on your amazing contribution!\r\n\r\nis it ok if we transfer the ViViT checkpoints to the `google` organization on the hub? (assuming they are officially released checkpoints by Google)", "Hey @NielsRogge, thanks!\r\n\r\nSure" ]
1,680
1,689
1,689
CONTRIBUTOR
null
# What does this PR do? Fixes #15666. Reopening #20441, as I have missed the comments provided by @amyeroberts so the issue was closed by the bot. Add Video Vision Transformer to transformers. This PR implements a spacetime version of the Video Vision Transformer from the original paper. I have provided the model weights here https://huggingface.co/jegormeister/vivit-b-16x2-kinetics400 I will try to add Factorised Encoder version later on (these are the two versions that authors provide weight for). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/15666 - [x] Did you make sure to update the documentation with your changes? I have added the documentation, but I have troubles testing it as I couldn't run the preview command of the doc-builder, so if someone has the possibility to run and check it, I will be really grateful! - [x] Did you write any new necessary tests? WIP ## Who can review? @amyeroberts provided the last suggestions to the closed PR, so I hope you can review this one. Thanks! <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22518/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22518/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22518", "html_url": "https://github.com/huggingface/transformers/pull/22518", "diff_url": "https://github.com/huggingface/transformers/pull/22518.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22518.patch", "merged_at": 1689080645000 }
https://api.github.com/repos/huggingface/transformers/issues/22517
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22517/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22517/comments
https://api.github.com/repos/huggingface/transformers/issues/22517/events
https://github.com/huggingface/transformers/issues/22517
1,650,984,294
I_kwDOCUB6oc5iaAVm
22,517
LLaMA tokenizer seems to be broken
{ "login": "SupreethRao99", "id": 55043035, "node_id": "MDQ6VXNlcjU1MDQzMDM1", "avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SupreethRao99", "html_url": "https://github.com/SupreethRao99", "followers_url": "https://api.github.com/users/SupreethRao99/followers", "following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}", "gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}", "starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions", "organizations_url": "https://api.github.com/users/SupreethRao99/orgs", "repos_url": "https://api.github.com/users/SupreethRao99/repos", "events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}", "received_events_url": "https://api.github.com/users/SupreethRao99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,680
1,680
1,680
NONE
null
### System Info - huggingface_hub version: 0.13.3 - Platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /root/.cache/huggingface/token - Has saved token ?: True - Who am I ?: supreethrao - Configured git credential helpers: - FastAI: N/A - Tensorflow: N/A - Torch: 1.14.0a0+44dac51 - Jinja2: 3.1.2 - Graphviz: N/A - Pydot: N/A - Pillow: 9.2.0 - hf_transfer: N/A - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets - HF_TOKEN_PATH: /root/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I converted the LLaMA weights to the HuggingFace format from the script in the documentation ``` python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` When I try and load the tokenizer as follows ``` >>> from transformers import LlamaTokenizer >>> tokenizer = LlamaTokenizer.from_pretrained('path_to_converted_llama_model') ``` I get the following error ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained return cls._from_pretrained( File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/tokenization_llama.py", line 78, in __init__ self.sp_model.Load(vocab_file) File "/usr/local/lib/python3.8/dist-packages/sentencepiece/__init__.py", line 905, in Load return self.LoadFromFile(model_file) File "/usr/local/lib/python3.8/dist-packages/sentencepiece/__init__.py", line 310, in LoadFromFile return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) ``` I have the following versions of the library ``` transformers 4.28.0.dev0 (installed from source though pip install git+https://github.com/huggingface/transformers.git) sentencepiece 0.1.97 ``` ### Expected behavior The tokenizer should get loaded be able function properly without the aformentioned errors ### Edit The tokenizer.model file was corrupted which caused this issue, once that was fixed, conversion and tokenization works. closing this issue now
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22517/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22516
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22516/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22516/comments
https://api.github.com/repos/huggingface/transformers/issues/22516/events
https://github.com/huggingface/transformers/pull/22516
1,650,965,176
PR_kwDOCUB6oc5Nbupe
22,516
Generate: Enable easier TextStreamer customization
{ "login": "vblagoje", "id": 458335, "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vblagoje", "html_url": "https://github.com/vblagoje", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "repos_url": "https://api.github.com/users/vblagoje/repos", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> I like the structure, but I dislike the name `on_new_token`. It's actually a callback on \"new print-ready text\". Perhaps `on_finalized_text`? WDYT?\r\n\r\n@gante I am completely indifferent regarding the name. Please adjust the name as you like! \r\nOn a second look, I'm also not 100% sure whether to put this method into TextStreamer only. Your call. ", "Let's keep it in `TextStreamer` for now. It's still early to tell how people will want to use it :)" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Minimally adapts recently integrated (https://github.com/huggingface/transformers/pull/22449) by adding a more obvious API hook for streaming tokens yet retains all the current semantics. I am excited to use TextStreamer but I found the hook-in API not as intuitive as it could be if one needs to customize token printing. For example, I need to use specific colouring to print arriving tokens, yet achieving this without a complete TextStreamer rewrite is not as easy. We can easily create an obvious hook-in method: ```def on_new_token(self, token: str, stream_end: bool = False):``` This method is called by TextStreamer yet its subclasses can easily customize printing. The default implementation of on_new_token method simply prints tokens to stdout as it currently does. I don't foresee any major documentation updates as a consequence of this PR. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22516/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22516/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22516", "html_url": "https://github.com/huggingface/transformers/pull/22516", "diff_url": "https://github.com/huggingface/transformers/pull/22516.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22516.patch", "merged_at": 1680544179000 }
https://api.github.com/repos/huggingface/transformers/issues/22515
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22515/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22515/comments
https://api.github.com/repos/huggingface/transformers/issues/22515/events
https://github.com/huggingface/transformers/pull/22515
1,650,940,207
PR_kwDOCUB6oc5NbpvQ
22,515
[BLIP] fix cross attentions for BlipTextEncoder
{ "login": "zhbh01", "id": 17633096, "node_id": "MDQ6VXNlcjE3NjMzMDk2", "avatar_url": "https://avatars.githubusercontent.com/u/17633096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhbh01", "html_url": "https://github.com/zhbh01", "followers_url": "https://api.github.com/users/zhbh01/followers", "following_url": "https://api.github.com/users/zhbh01/following{/other_user}", "gists_url": "https://api.github.com/users/zhbh01/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhbh01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhbh01/subscriptions", "organizations_url": "https://api.github.com/users/zhbh01/orgs", "repos_url": "https://api.github.com/users/zhbh01/repos", "events_url": "https://api.github.com/users/zhbh01/events{/privacy}", "received_events_url": "https://api.github.com/users/zhbh01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @ArthurZucker and @younesbelkada ", "Sure, happy to provide more details. This bug is caused by the all_cross_attentions variable not properly storing the cross-attention produced by each BlipTextLayer. The variable is initialized at line 404 and returned in either line 460 or 469, but it remains unchanged between initialization and return. As a result, the forward function consistently returns an empty tuple for cross-attention.\r\n\r\nTo address this issue, I have made changes to ensure that all_cross_attentions correctly stores the cross-attention produced by each BlipTextLayer, allowing the forward function to return the appropriate cross-attention.\r\n\r\nTo reproduce the bug, please run the following snippet (the returned cross attentions will always be an empty tuple):\r\n\r\n```python\r\nimport torch\r\nfrom PIL import Image\r\nfrom transformers import BlipProcessor, BlipForQuestionAnswering\r\n\r\nprocessor = BlipProcessor.from_pretrained(\"Salesforce/blip-vqa-base\")\r\nmodel = BlipForQuestionAnswering.from_pretrained(\"Salesforce/blip-vqa-base\").to(\"cuda\")\r\nmodel.text_encoder.config.output_attentions = True\r\n\r\nimg_path = \"path of an image\"\r\nraw_image = Image.open(img_path).convert('RGB')\r\n\r\nname = \"cat\"\r\nquestion = [\r\n \"Is there a {} in the view?\".format(name),\r\n]\r\ninputs = processor([raw_image]*len(question), question, padding=True, return_tensors=\"pt\").to(\"cuda\")\r\n\r\nvision_outputs = model.vision_model(inputs['pixel_values'])\r\nimage_embeds = vision_outputs[0]\r\n\r\nimage_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image_embeds.device)\r\n\r\nquestion_outputs = model.text_encoder(\r\n input_ids=inputs[\"input_ids\"],\r\n attention_mask=inputs[\"attention_mask\"],\r\n encoder_hidden_states=image_embeds,\r\n encoder_attention_mask=image_attention_mask,\r\n return_dict=True\r\n)\r\n\r\n# 
question_outputs['cross_attentions'] will always be an empty tuple\r\nprint(question_outputs['cross_attentions'])\r\n```" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes a bug in the output of the cross attentions in BlipTextEncoder ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22515/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22515", "html_url": "https://github.com/huggingface/transformers/pull/22515", "diff_url": "https://github.com/huggingface/transformers/pull/22515.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22515.patch", "merged_at": 1680534027000 }
https://api.github.com/repos/huggingface/transformers/issues/22514
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22514/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22514/comments
https://api.github.com/repos/huggingface/transformers/issues/22514/events
https://github.com/huggingface/transformers/pull/22514
1,650,925,577
PR_kwDOCUB6oc5Nbm5d
22,514
llama docs: fix conversion script url
{ "login": "python273", "id": 3097956, "node_id": "MDQ6VXNlcjMwOTc5NTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3097956?v=4", "gravatar_id": "", "url": "https://api.github.com/users/python273", "html_url": "https://github.com/python273", "followers_url": "https://api.github.com/users/python273/followers", "following_url": "https://api.github.com/users/python273/following{/other_user}", "gists_url": "https://api.github.com/users/python273/gists{/gist_id}", "starred_url": "https://api.github.com/users/python273/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/python273/subscriptions", "organizations_url": "https://api.github.com/users/python273/orgs", "repos_url": "https://api.github.com/users/python273/repos", "events_url": "https://api.github.com/users/python273/events{/privacy}", "received_events_url": "https://api.github.com/users/python273/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "hmm, probably should work now", "Could you try pushing an empty commit?", "Thanks again!" ]
1,680
1,680
1,680
CONTRIBUTOR
null
Fixes the link on this page: https://huggingface.co/docs/transformers/main/model_doc/llama
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22514/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22514", "html_url": "https://github.com/huggingface/transformers/pull/22514", "diff_url": "https://github.com/huggingface/transformers/pull/22514.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22514.patch", "merged_at": 1680532120000 }
https://api.github.com/repos/huggingface/transformers/issues/22513
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22513/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22513/comments
https://api.github.com/repos/huggingface/transformers/issues/22513/events
https://github.com/huggingface/transformers/issues/22513
1,650,866,874
I_kwDOCUB6oc5iZjq6
22,513
Generate a pre-training model for GAP Computational Discrete Algebra System.
{ "login": "hongyi-zhao", "id": 11155854, "node_id": "MDQ6VXNlcjExMTU1ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/11155854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hongyi-zhao", "html_url": "https://github.com/hongyi-zhao", "followers_url": "https://api.github.com/users/hongyi-zhao/followers", "following_url": "https://api.github.com/users/hongyi-zhao/following{/other_user}", "gists_url": "https://api.github.com/users/hongyi-zhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/hongyi-zhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hongyi-zhao/subscriptions", "organizations_url": "https://api.github.com/users/hongyi-zhao/orgs", "repos_url": "https://api.github.com/users/hongyi-zhao/repos", "events_url": "https://api.github.com/users/hongyi-zhao/events{/privacy}", "received_events_url": "https://api.github.com/users/hongyi-zhao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### Feature request I can't find any fine-tuned pre-training models for [GAP Computational Discrete Algebra](https://www.gap-system.org/). ### Motivation I'm a scholar who conducts research in related fields of mathematical physics based on group theory methods. So, I would like to have a fine-tuned pre-training model for [GAP Computational Discrete Algebra](https://www.gap-system.org/). ### Your contribution I want to know the possibilities in creating such a model based on the resources provided on huggingface. Any hints/comments will be appreciated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22513/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22512
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22512/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22512/comments
https://api.github.com/repos/huggingface/transformers/issues/22512/events
https://github.com/huggingface/transformers/issues/22512
1,650,649,561
I_kwDOCUB6oc5iYunZ
22,512
PyTorch ViTMAEModel output is not deterministic
{ "login": "nalzok", "id": 13443062, "node_id": "MDQ6VXNlcjEzNDQzMDYy", "avatar_url": "https://avatars.githubusercontent.com/u/13443062?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nalzok", "html_url": "https://github.com/nalzok", "followers_url": "https://api.github.com/users/nalzok/followers", "following_url": "https://api.github.com/users/nalzok/following{/other_user}", "gists_url": "https://api.github.com/users/nalzok/gists{/gist_id}", "starred_url": "https://api.github.com/users/nalzok/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nalzok/subscriptions", "organizations_url": "https://api.github.com/users/nalzok/orgs", "repos_url": "https://api.github.com/users/nalzok/repos", "events_url": "https://api.github.com/users/nalzok/events{/privacy}", "received_events_url": "https://api.github.com/users/nalzok/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nYes that's expected behaviour, see https://github.com/huggingface/transformers/issues/20431", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (tpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts @sgugger @stevhliu @MKhalusova ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the [example snippet](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/vit_mae#transformers.ViTMAEModel) in the documentation. I'll copy & paste it below for your convenience: ```python from transformers import AutoImageProcessor, ViTMAEModel from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base") model = ViTMAEModel.from_pretrained("facebook/vit-mae-base") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` Put the code in a script, execute it twice, and you will notice that the content in `last_hidden_states` is different. ### Expected behavior The embeddings should be deterministic across runs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22512/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22511
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22511/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22511/comments
https://api.github.com/repos/huggingface/transformers/issues/22511/events
https://github.com/huggingface/transformers/issues/22511
1,650,593,128
I_kwDOCUB6oc5iYg1o
22,511
Inconsistent issue in multi gpu training single machine
{ "login": "rmill040", "id": 16518119, "node_id": "MDQ6VXNlcjE2NTE4MTE5", "avatar_url": "https://avatars.githubusercontent.com/u/16518119?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rmill040", "html_url": "https://github.com/rmill040", "followers_url": "https://api.github.com/users/rmill040/followers", "following_url": "https://api.github.com/users/rmill040/following{/other_user}", "gists_url": "https://api.github.com/users/rmill040/gists{/gist_id}", "starred_url": "https://api.github.com/users/rmill040/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rmill040/subscriptions", "organizations_url": "https://api.github.com/users/rmill040/orgs", "repos_url": "https://api.github.com/users/rmill040/repos", "events_url": "https://api.github.com/users/rmill040/events{/privacy}", "received_events_url": "https://api.github.com/users/rmill040/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you confirm if the issue persists with the latest release?", "Hi @sgugger -- just updated to the latest version `4.27.4`. I received this error message last time about the FSDP config not being updated correctly from the command line, slipped my mind. This is why I originally rolled back to `4.26.0` based on the version that worked with the `alpaca` repo.\r\n\r\nEither way, with the new transformers version, here are some of the error logs, still the local_rank = -1 warning and also now the `args.fsdp_config[\"xla\"]` error, bc I think the args.fsdp_config is empty before the `Trainer` object fires off training\r\n\r\n```\r\nPyTorch: setting up devices\r\nPyTorch: setting up devices\r\nPyTorch: setting up devices\r\ntorch.distributed process group is initialized, but local_rank == -1. In order to use Torch DDP, launch your script with `python -m torch.distributed.launch\r\ntorch.distributed process group is initialized, but local_rank == -1. In order to use Torch DDP, launch your script with `python -m torch.distributed.launch\r\ntorch.distributed process group is initialized, but local_rank == -1. In order to use Torch DDP, launch your script with `python -m torch.distributed.launch\r\nPyTorch: setting up devices\r\ntorch.distributed process group is initialized, but local_rank == -1. In order to use Torch DDP, launch your script with `python -m torch.distributed.launch\r\nTraceback (most recent call last):\r\nTraceback (most recent call last):\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/llm/train.py\", line 585, in <module>\r\n File \"/home/ubuntu/llm/train.py\", line 585, in <module>\r\n File \"/home/ubuntu/llm/train.py\", line 585, in <module>\r\nTraceback (most recent call last):\r\nPyTorch: setting up devices\r\n File \"/home/ubuntu/llm/train.py\", line 585, in <module>\r\ntorch.distributed process group is initialized, but local_rank == -1. 
In order to use Torch DDP, launch your script with `python -m torch.distributed.launch\r\n train()train() \r\n\r\ntrain() File \"/home/ubuntu/llm/train.py\", line 565, in train\r\n File \"/home/ubuntu/llm/train.py\", line 565, in train\r\n\r\n File \"/home/ubuntu/llm/train.py\", line 565, in train\r\n train()\r\n File \"/home/ubuntu/llm/train.py\", line 565, in train\r\n trainer = Trainer( \r\ntrainer = Trainer( File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py\", line 421, in __init__\r\n\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py\", line 421, in __init__\r\n trainer = Trainer(\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py\", line 421, in __init__\r\n trainer = Trainer(\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py\", line 421, in __init__\r\n if not args.fsdp_config[\"xla\"] and args.local_rank == -1: \r\nif not args.fsdp_config[\"xla\"] and args.local_rank == -1:\r\n if not args.fsdp_config[\"xla\"] and args.local_rank == -1:\r\nTypeErrorTypeError: : 'NoneType' object is not subscriptable'NoneType' object is not subscriptable\r\n\r\nTypeError: 'NoneType' object is not subscriptable\r\n if not args.fsdp_config[\"xla\"] and args.local_rank == -1:\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/llm/train.py\", line 585, in <module>\r\nTypeError: 'NoneType' object is not subscriptable\r\n train()\r\n File \"/home/ubuntu/llm/train.py\", line 565, in train\r\n trainer = Trainer(\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py\", line 421, in __init__\r\n if not args.fsdp_config[\"xla\"] and args.local_rank == -1:\r\nTypeError: 'NoneType' object is not subscriptable\r\nPyTorch: setting up devices\r\ntorch.distributed process group is initialized, but local_rank == -1. 
In order to use Torch DDP, launch your script with `python -m torch.distributed.launch\r\nPyTorch: setting up devices\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/llm/train.py\", line 585, in <module>\r\ntorch.distributed process group is initialized, but local_rank == -1. In order to use Torch DDP, launch your script with `python -m torch.distributed.launch\r\n train()\r\n File \"/home/ubuntu/llm/train.py\", line 565, in train\r\n trainer = Trainer(\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py\", line 421, in __init__\r\nPyTorch: setting up devices\r\ntorch.distributed process group is initialized, but local_rank == -1. In order to use Torch DDP, launch your script with `python -m torch.distributed.launch\r\n if not args.fsdp_config[\"xla\"] and args.local_rank == -1:\r\nTypeError: 'NoneType' object is not subscriptable\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/llm/train.py\", line 585, in <module>\r\n train()\r\n File \"/home/ubuntu/llm/train.py\", line 565, in train\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/llm/train.py\", line 585, in <module>\r\n trainer = Trainer(\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py\", line 421, in __init__\r\n train()\r\n File \"/home/ubuntu/llm/train.py\", line 565, in train\r\n if not args.fsdp_config[\"xla\"] and args.local_rank == -1:\r\nTypeError: 'NoneType' object is not subscriptabletrainer = Trainer(\r\n\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/trainer.py\", line 421, in __init__\r\n if not args.fsdp_config[\"xla\"] and args.local_rank == -1:\r\nTypeError: 'NoneType' object is not subscriptable\r\n```", "cc @pacman100 ", "Will look into this in a few days.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This block of code seems to only allow FSDP with XLA? Can anyone confirm?\r\n\r\nhttps://github.com/huggingface/transformers/blob/v4.30.0/src/transformers/trainer.py#L1490,L1526\r\n\r\nWhen trying to follow this blog https://www.philschmid.de/sagemaker-fsdp-gpt, the entire model gets loaded onto all the gpus causing OOM although the blog tries to demonstrate FSDP (I chose an instance size with GPU mem < model size to test model sharding)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,689
1,689
NONE
null
### System Info Environment from `transformers-cli env`: ``` - `transformers` version: 4.26.0 - Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes, 8 GPUs on AWS pd4.24xlarge A100 40GB chips - Using distributed or parallel set-up in script?: Both FSDP and DeepSpeed ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction See Expected behavior section, script attached with run commands but unable to provide data due to sensitivity concerns ### Expected behavior I am doing some fine-tuning on a causal LM similar to the code in https://github.com/tatsu-lab/stanford_alpaca but with additional custom data. I notice that when I fire off the multi-gpu training on a single node using a `torchrun` comand similar to: ``` torchrun --nproc_per_node=8 --master_port=8080 train.py \ --seed 1718 \ --model_name_or_path "facebook/opt-6.7b" \ --output_dir "opt-6.7b" \ --overwrite_data True \ --validation_size 0.05 \ --num_train_epochs 3 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 8 \ --evaluation_strategy "steps" \ --eval_steps 200 \ --save_strategy "steps" \ --save_steps 200 \ --log_level "info" \ --logging_strategy "steps" \ --logging_steps 1 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. 
\ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'OPTDecoderLayer' \ --bf16 True \ --tf32 True ``` I get a bunch of warnings indicating that the local_rank of each process is -1. After stepping through some of the code in `src/trainer.py`, it looks like only data parallel gets kicked off. As such, the behavior I notice is since I requested ```--nproc_per_node=8```, it appears that literally 8 instances of my script are getting fired off per GPU I have on my machine. I can confirm this based on looking at the 8 processes running on each GPU id from the `nvidia-smi` command. Since so many processes are starting on each GPU, I get an OOM error immediately. These OOM errors occur (since the local_rank = -1) whether I use FSDP or DeepSpeed (stage 2, 3 and both with/without CPU/disk offload). However, when I add the "patch" below before I kicked off the training using the `Trainer` object: ```python # Update local ranks training_args.local_rank = int(os.environ["LOCAL_RANK"]) assert training_args.local_rank != -1, "BAD THINGS ARE ABOUT TO HAPPEN!" LOGGER.info(f"Configuring local ranks: I am local process: {training_args.local_rank}", main_process_only=False) ``` I finally get the single process per GPU and it looks like the GPUs are in fact doing distributed training and the script runs fine. I have attached the entire script in case this helps with debugging -- not sure what could be causing this behavior. When I run the alpaca repo, the training runs fine and no issues with the `torchrun` command. Any ideas if I'm doing something clearly off here? I tried upgrading to the latest version of `transformers` and the same issue. Again, no issues running the alpaca repo but mine has that local_rank = -1 problem and without the "patch" the scripts errors out right when training starts. [train.py.zip](https://github.com/huggingface/transformers/files/11130281/train.py.zip)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22511/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22510
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22510/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22510/comments
https://api.github.com/repos/huggingface/transformers/issues/22510/events
https://github.com/huggingface/transformers/pull/22510
1,650,572,073
PR_kwDOCUB6oc5NafAu
22,510
[WIP]🌐[i18n-KO] Translate `autoclass_tutorial` to Korean
{ "login": "gabrielwithappy", "id": 102908949, "node_id": "U_kgDOBiJEFQ", "avatar_url": "https://avatars.githubusercontent.com/u/102908949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabrielwithappy", "html_url": "https://github.com/gabrielwithappy", "followers_url": "https://api.github.com/users/gabrielwithappy/followers", "following_url": "https://api.github.com/users/gabrielwithappy/following{/other_user}", "gists_url": "https://api.github.com/users/gabrielwithappy/gists{/gist_id}", "starred_url": "https://api.github.com/users/gabrielwithappy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gabrielwithappy/subscriptions", "organizations_url": "https://api.github.com/users/gabrielwithappy/orgs", "repos_url": "https://api.github.com/users/gabrielwithappy/repos", "events_url": "https://api.github.com/users/gabrielwithappy/events{/privacy}", "received_events_url": "https://api.github.com/users/gabrielwithappy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "To update PR commit comment rule, I closed this PR and will update new PR with new template.\r\nThank you." ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Part of #20179 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @0525hhgus, @KIHOON71, @gabrielwithappy, @jungnerd, @sim-so, @HanNayeoniee, @wonhyeongseo Pseudo Lab, Please review this PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22510/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22510/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22510", "html_url": "https://github.com/huggingface/transformers/pull/22510", "diff_url": "https://github.com/huggingface/transformers/pull/22510.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22510.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22509
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22509/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22509/comments
https://api.github.com/repos/huggingface/transformers/issues/22509/events
https://github.com/huggingface/transformers/pull/22509
1,650,491,514
PR_kwDOCUB6oc5NaPJW
22,509
docs: ko: sagemaker.mdx
{ "login": "jungnerd", "id": 46880056, "node_id": "MDQ6VXNlcjQ2ODgwMDU2", "avatar_url": "https://avatars.githubusercontent.com/u/46880056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungnerd", "html_url": "https://github.com/jungnerd", "followers_url": "https://api.github.com/users/jungnerd/followers", "following_url": "https://api.github.com/users/jungnerd/following{/other_user}", "gists_url": "https://api.github.com/users/jungnerd/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungnerd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungnerd/subscriptions", "organizations_url": "https://api.github.com/users/jungnerd/orgs", "repos_url": "https://api.github.com/users/jungnerd/repos", "events_url": "https://api.github.com/users/jungnerd/events{/privacy}", "received_events_url": "https://api.github.com/users/jungnerd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Also please mention @HanNayeoniee instead of your id(@jungnerd) for requesting review on the last 2 lines of your main PR message. Thank you.\r\n\r\nEdit: or just add her 😄 your choice really. Thank you so much for participating on Saturday's meeting.", "> Also please mention @HanNayeoniee instead of your id(@jungnerd) for requesting review on the last 2 lines of your main PR message. Thank you.\r\n> \r\n> Edit: or just add her 😄 your choice really. Thank you so much for participating on Saturday's meeting.\r\n\r\nThanks for telling me. I've fixed the commit.🤭", "We should also update `_toctree.yml`, and I can help with this anytime. Please feel free to ping me on KakaoTalk. 🙌", "Great work! First PR of Pseudo-lab team!\r\nI think [WIP] tag needs to be removed since translation is all done and it's been merged." ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Part of #20179 (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @wonhyeongseo, @HanNayeoniee, @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy Team Pseudo-Lab, please review this PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22509/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22509", "html_url": "https://github.com/huggingface/transformers/pull/22509", "diff_url": "https://github.com/huggingface/transformers/pull/22509.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22509.patch", "merged_at": 1680527822000 }
https://api.github.com/repos/huggingface/transformers/issues/22508
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22508/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22508/comments
https://api.github.com/repos/huggingface/transformers/issues/22508/events
https://github.com/huggingface/transformers/pull/22508
1,650,473,632
PR_kwDOCUB6oc5NaLiG
22,508
🌐 [i18n-KO] Translated `pipeline_tutorial.mdx` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hello Mr. @sgugger , thank you for your patience! May you please review & merge this PR?\n\nThe internal review time frame (7 days) has passed since I requested feedback from my colleagues at PseudoLab." ]
1,680
1,682
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Part of #20179 (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @wonhyeongseo, @jungnerd, @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee Team Pseudo-Lab, please review this PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22508/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22508/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22508", "html_url": "https://github.com/huggingface/transformers/pull/22508", "diff_url": "https://github.com/huggingface/transformers/pull/22508.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22508.patch", "merged_at": 1680881280000 }
https://api.github.com/repos/huggingface/transformers/issues/22507
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22507/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22507/comments
https://api.github.com/repos/huggingface/transformers/issues/22507/events
https://github.com/huggingface/transformers/pull/22507
1,650,289,306
PR_kwDOCUB6oc5NZng1
22,507
Fix NameError `init_empty_weights` when importing Blip2Processor
{ "login": "TheAdamEvans", "id": 3723005, "node_id": "MDQ6VXNlcjM3MjMwMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/3723005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheAdamEvans", "html_url": "https://github.com/TheAdamEvans", "followers_url": "https://api.github.com/users/TheAdamEvans/followers", "following_url": "https://api.github.com/users/TheAdamEvans/following{/other_user}", "gists_url": "https://api.github.com/users/TheAdamEvans/gists{/gist_id}", "starred_url": "https://api.github.com/users/TheAdamEvans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheAdamEvans/subscriptions", "organizations_url": "https://api.github.com/users/TheAdamEvans/orgs", "repos_url": "https://api.github.com/users/TheAdamEvans/repos", "events_url": "https://api.github.com/users/TheAdamEvans/events{/privacy}", "received_events_url": "https://api.github.com/users/TheAdamEvans/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@pacman100 Is this something you'd be able to review for me? This my first contribution in this repo but I'm pretty sure it fixes a real issue in loading BLIP2.", "_The documentation is not available anymore as the PR was closed or merged._", "Perhaps @NielsRogge, since this is your implementation of BLIP2, you could review for me? I was able to reproduce the error in colab.", "Thanks for the review @sgugger, makes sense now" ]
1,680
1,680
1,680
NONE
null
## Context Was trying to run [https://huggingface.co/Salesforce/blip2-flan-t5-xl](https://huggingface.co/Salesforce/blip-image-captioning-base) in a colab notebook (after installing `accelerate`) and was getting an error when importing Blip2Processor: ``` NameError `init_empty_weights` is not defined ``` <img width="1285" alt="Screen Shot 2023-04-01 at 1 31 17 pm" src="https://user-images.githubusercontent.com/3723005/229265367-36eed050-b75b-4d06-b69b-f80a649e2d2e.png"> ## Changes Splitting out the if statements to check for `accelerate` and `bitsandbytes` separately seems to fix this problem, in `load_in_8bit` mode. Applies a similar fix to the imports in `deepspeed.py`. This fixed the same error as above but in this conditional block, later in the file. ``` if is_deepspeed_zero3_enabled(): import deepspeed [logger.info](http://logger.info/)("Detected DeepSpeed ZeRO-3: activating zero.init() for this model") init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts ``` After these changes, I was able to import, Blip2Processor, Blip2ForConditionalGeneration on Colab :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22507/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22507", "html_url": "https://github.com/huggingface/transformers/pull/22507", "diff_url": "https://github.com/huggingface/transformers/pull/22507.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22507.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22506
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22506/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22506/comments
https://api.github.com/repos/huggingface/transformers/issues/22506/events
https://github.com/huggingface/transformers/issues/22506
1,650,283,109
I_kwDOCUB6oc5iXVJl
22,506
Dynamic module import error when using ddp
{ "login": "jinzhen-lin", "id": 14980050, "node_id": "MDQ6VXNlcjE0OTgwMDUw", "avatar_url": "https://avatars.githubusercontent.com/u/14980050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jinzhen-lin", "html_url": "https://github.com/jinzhen-lin", "followers_url": "https://api.github.com/users/jinzhen-lin/followers", "following_url": "https://api.github.com/users/jinzhen-lin/following{/other_user}", "gists_url": "https://api.github.com/users/jinzhen-lin/gists{/gist_id}", "starred_url": "https://api.github.com/users/jinzhen-lin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinzhen-lin/subscriptions", "organizations_url": "https://api.github.com/users/jinzhen-lin/orgs", "repos_url": "https://api.github.com/users/jinzhen-lin/repos", "events_url": "https://api.github.com/users/jinzhen-lin/events{/privacy}", "received_events_url": "https://api.github.com/users/jinzhen-lin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for raising this issue. I think this is linked to #21646, I will push a fix shortly." ]
1,680
1,680
1,680
NONE
null
### System Info - `transformers` version: 4.27.3 - Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10 - Python version: 3.8.15 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Create a file `a.py` with the following content ```python from transformers import AutoConfig AutoConfig.from_pretrained("THUDM/glm-10b", trust_remote_code=True) ``` Run it with `torchrun` ``` torchrun --nproc-per-node 8 a.py ``` Then we would get this error sometimes ``` Traceback (most recent call last): File "a.py", line 5, in <module> AutoConfig.from_pretrained("THUDM/glm-10b", trust_remote_code=True) File "/home/linjinzhen/.miniconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 911, in from_pretrained config_class = get_class_from_dynamic_module( File "/home/linjinzhen/.miniconda3/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 399, in get_class_from_dynamic_module return get_class_in_module(class_name, final_module.replace(".py", "")) File "/home/linjinzhen/.miniconda3/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 157, in get_class_in_module shutil.copy(f"{module_dir}/{module_file_name}", tmp_dir) File "/home/linjinzhen/.miniconda3/lib/python3.8/shutil.py", line 418, in copy copyfile(src, dst, follow_symlinks=follow_symlinks) File "/home/linjinzhen/.miniconda3/lib/python3.8/shutil.py", line 264, in copyfile with open(src, 'rb') as fsrc, open(dst, 
'wb') as fdst: FileNotFoundError: [Errno 2] No such file or directory: '/home/linjinzhen/.cache/huggingface/modules/transformers_modules/THUDM/glm-10b/696788d4f82ac96b90823555f547d1e754839ff4/configuration_glm.py' ``` or ``` Traceback (most recent call last): File "a.py", line 5, in <module> AutoConfig.from_pretrained("THUDM/glm-10b", trust_remote_code=True) File "/home/linjinzhen/.miniconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 911, in from_pretrained config_class = get_class_from_dynamic_module( File "/home/linjinzhen/.miniconda3/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 399, in get_class_from_dynamic_module return get_class_in_module(class_name, final_module.replace(".py", "")) File "/home/linjinzhen/.miniconda3/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 177, in get_class_in_module module = importlib.import_module(module_path) File "/home/linjinzhen/.miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'transformers_modules.THUDM.glm-10b.696788d4f82ac96b90823555f547d1e754839ff4.configuration_glm' Traceback (most recent call last): File "<string>", line 1, in <module> FileNotFoundError: [Errno 2] No such file or directory: '/home/linjinzhen/.cache/huggingface/modules/transformers_modules/THUDM/glm-10b/696788d4f82ac96b90823555f547d1e754839ff4/configuration_glm.py' ``` It seems that it is a multiprocess-related issue. https://github.com/huggingface/transformers/blob/v4.27.4/src/transformers/dynamic_module_utils.py#L147-L179 ### Expected behavior Dynamic module can be imported successfully when using ddp.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22506/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22505
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22505/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22505/comments
https://api.github.com/repos/huggingface/transformers/issues/22505/events
https://github.com/huggingface/transformers/issues/22505
1,650,267,741
I_kwDOCUB6oc5iXRZd
22,505
HF CLIP image features different from OpenAI CLIP image features
{ "login": "junwang-wish", "id": 112650299, "node_id": "U_kgDOBrboOw", "avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junwang-wish", "html_url": "https://github.com/junwang-wish", "followers_url": "https://api.github.com/users/junwang-wish/followers", "following_url": "https://api.github.com/users/junwang-wish/following{/other_user}", "gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}", "starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions", "organizations_url": "https://api.github.com/users/junwang-wish/orgs", "repos_url": "https://api.github.com/users/junwang-wish/repos", "events_url": "https://api.github.com/users/junwang-wish/events{/privacy}", "received_events_url": "https://api.github.com/users/junwang-wish/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @junwang-wish, thanks for reporting this issue and the detailed reproduction script. I'll dig into this to find where the differences are coming from. ", "Thanks @amyeroberts , due to the significant difference would u recommend me to use HF clip or OpenAI clip based on your domain expertise?", "@junwang-wish I managed to track down difference in values down to a slight difference in how the images are cropped during processing. The cropping in the feature extractor changed with #17628 - which resulted in the position of the occasionally being 1 pixel to the left or up from the OpenAI implementation. The PR #22608 aims to address this. Checking this update on the repro example in this issue, I can confirm the OpenAI and HF CLIP models return equivalent outputs again. \r\n\r\nIn terms of which to use, it depends on what you wish to use the model for. As the difference is arising from preprocessing, rather than the models themselves, provided the same image is passed in there shouldn't be any significant difference in outputs and I'd recommend whichever model fits best within your workflow. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@amyeroberts @junwang-wish \r\nHi I have the same issue with transformer==4.30.2.\r\n\r\nI found the preprocessing makes the difference. I tried 3 different ways to do the preprocessing and only the 3. from Openai's implementation keep the correct results.\r\n\r\n1. Use CLIPFeatureExtractor\r\n2. 
`tform = transforms.Compose([\r\n transforms.ToTensor(),\r\n transforms.Resize(\r\n (224, 224),\r\n interpolation=transforms.InterpolationMode.BICUBIC,\r\n antialias=False,\r\n ),\r\n transforms.Normalize(\r\n [0.48145466, 0.4578275, 0.40821073],\r\n [0.26862954, 0.26130258, 0.27577711]),\r\n])`\r\n3. from openai's original preprocessing. \r\n`x = kornia.geometry.resize(x, (224, 224), interpolation='bicubic', align_corners=True, antialias=False)\r\nx = (x + 1.) / 2.\r\nx = kornia.enhance.normalize(x, torch.Tensor([0.48145466, 0.4578275, 0.40821073]), torch.Tensor([0.26862954, 0.26130258, 0.27577711]))`\r\n\r\nI'm wondering if this will be fixed in a newer version or the repo isn't trying to keep exact the same results with openai's CLIP. Thanks.", "@rafaelpadilla if you have time to look into this would be awesome! ", "Investigating this issue and the proposed example, I found that the resulting image produced by HF is shifted up in 1 pixel in comparison to the transformation used by OpenAI (`torchvision.transforms.CropCenter`) as presented [here](https://github.com/openai/CLIP/blob/a1d071733d7111c9c014f024669f959182114e33/clip/clip.py#L82).\r\n\r\nThis happens because our `center_crop` function does not behave as `torchvision.transforms.CropCenter` if `orig_height - crop_height` is odd or if `orig_width - crop_width` is odd.\r\n\r\nI have worked on a solution in this [PR #26238](https://github.com/huggingface/transformers/pull/26238) . This is a general solution, making our `center_crop` behave like `torchvision.transforms.CropCenter`, impacting all other modules that call `crop_center`.\r\n\r\nHowever, an older [PR #22608](https://github.com/huggingface/transformers/pull/22608) seems to address the same issue with a new `crop` transformation in the `image_transforms.py`, allowing changes in the `center_crop` in CLIP's image processing only. 
I'm working to see how this PR may impact other models and will leave my review there.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,702
1,702
NONE
null
### System Info python3.8, CUDA 12.1, Ubuntu20.02, latest clip, transformers==4.26.1 ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` url = "https://canary.contestimg.wish.com/api/webimage/61b241a3a4ee2ecaf2f63c77-large.jpg?cache_buster=bbeee1fdb460a1d12bc266824914e030" # get HF image fearures from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") outputs = model.get_image_features(**inputs) pooled_output_hf = outputs.detach().cpu().numpy() # get OpenAI image features import torch import clip from PIL import Image device = "cuda" if torch.cuda.is_available() else "cpu" model, preprocess = clip.load("ViT-B/32", device=device) image = preprocess(Image.open(requests.get(url, stream=True).raw)).unsqueeze(0).to(device) with torch.no_grad(): image_features = model.encode_image(image) pooled_output_clip = image_features.detach().cpu().numpy() # check difference assert np.allclose(pooled_output_hf, pooled_output_clip, atol=0.1), "hf and clip too different" ``` ### Expected behavior HF CLIP should be close to OpenAI CLIP but they differ more than 0.1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22505/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22504
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22504/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22504/comments
https://api.github.com/repos/huggingface/transformers/issues/22504/events
https://github.com/huggingface/transformers/issues/22504
1,650,105,658
I_kwDOCUB6oc5iWp06
22,504
Expose callback for download progress
{ "login": "tristanMatthias", "id": 2550138, "node_id": "MDQ6VXNlcjI1NTAxMzg=", "avatar_url": "https://avatars.githubusercontent.com/u/2550138?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tristanMatthias", "html_url": "https://github.com/tristanMatthias", "followers_url": "https://api.github.com/users/tristanMatthias/followers", "following_url": "https://api.github.com/users/tristanMatthias/following{/other_user}", "gists_url": "https://api.github.com/users/tristanMatthias/gists{/gist_id}", "starred_url": "https://api.github.com/users/tristanMatthias/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tristanMatthias/subscriptions", "organizations_url": "https://api.github.com/users/tristanMatthias/orgs", "repos_url": "https://api.github.com/users/tristanMatthias/repos", "events_url": "https://api.github.com/users/tristanMatthias/events{/privacy}", "received_events_url": "https://api.github.com/users/tristanMatthias/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This looks like a feature that would need to be implemented in `huggingface_hub` first, then we would use it here and pass along the proper argument.\r\n\r\ncc @Wauplin and @LysandreJik ", "Hey @tristanMatthias :wave: This is a tricky feature request as I don't want to complexify too much the current API/implementation of the underlying methods to download files. It is quite unknown but in `snapshot_download` there is a [`tqdm_class: Optional[tqdm]`](https://huggingface.co/docs/huggingface_hub/v0.13.3/en/package_reference/file_download#huggingface_hub.snapshot_download.tqdm_class) parameter that can be passed. Instead of providing a callback, you overwrite completely the progress bar that is used. The passed class must inherit from `tqdm.auto.tqdm` or at least mimic its behavior. \r\n\r\nWhat we can do in `huggingface_hub` is to add this parameter to `hf_hub_download` as well. Then `transformers` would have to adapt its API. What do you think about it? If it fits your need, I can point you the code that needs to be updated in `hfh`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### Feature request When using `AutoTokenizer` and `AutoModelForCausalLM`, I would like to be able to pass a callback function (or some other solution) that will allow me to report on the status of the download if it's not cached. ``` AutoTokenizer.from_pretrained( model_name, download_progress_callback=lambda perc: print(f"Downloading tokenizer: {perc}%") ) AutoModelForCausalLM.from_pretrained( model_name, download_progress_callback=lambda perc: print(f"Downloading model: {perc}%") ) ``` Ideally this would be the **total** download progress (achievable with the `file_metadata` flag of `HfApi.model_info`). ### Motivation I am building a UI wrapper around Hugging Face models, and would like to enable an "on click" install for any model. The issue is it can take quite some time to download, and the UI is left spinning sometimes for hours. I would like to provide more feedback to the end user on the status (plus maybe some time estimations eventually). ### Your contribution Happy to contribute in any way, however I'll need to be pointed in the right direction. I've taken a peruse through the `huggingface_hub` repo, as well as this one, but am a little unsure of how to approach this. In particular, I'm unsure how a model determines _which specific_ files to download from the hub.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22504/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22503
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22503/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22503/comments
https://api.github.com/repos/huggingface/transformers/issues/22503/events
https://github.com/huggingface/transformers/pull/22503
1,649,909,389
PR_kwDOCUB6oc5NYYN5
22,503
Add copied from statements for image processors
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22503). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
COLLABORATOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22503/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22503", "html_url": "https://github.com/huggingface/transformers/pull/22503", "diff_url": "https://github.com/huggingface/transformers/pull/22503.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22503.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22502
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22502/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22502/comments
https://api.github.com/repos/huggingface/transformers/issues/22502/events
https://github.com/huggingface/transformers/issues/22502
1,649,893,242
I_kwDOCUB6oc5iV196
22,502
Running LlamaForCausalLM with MPS provokes "RuntimeError: MPS does not support cumsum op with int64 input"
{ "login": "kechan", "id": 122762, "node_id": "MDQ6VXNlcjEyMjc2Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/122762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kechan", "html_url": "https://github.com/kechan", "followers_url": "https://api.github.com/users/kechan/followers", "following_url": "https://api.github.com/users/kechan/following{/other_user}", "gists_url": "https://api.github.com/users/kechan/gists{/gist_id}", "starred_url": "https://api.github.com/users/kechan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kechan/subscriptions", "organizations_url": "https://api.github.com/users/kechan/orgs", "repos_url": "https://api.github.com/users/kechan/repos", "events_url": "https://api.github.com/users/kechan/events{/privacy}", "received_events_url": "https://api.github.com/users/kechan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Relevant stack trace (can provide more if needed):\r\n\r\n> File [~/Developer/python39_env/lib/python3.9/site-packages/transformers/generation/utils.py:2245](https://file+.vscode-resource.vscode-cdn.net/Users/kechan/Library/CloudStorage/GoogleDrive-kelvin%40jumptools.com/My%20Drive/LLaMA/notebooks/~/Developer/python39_env/lib/python3.9/site-packages/transformers/generation/utils.py:2245), in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)\r\n> 2242 break\r\n> 2244 # prepare model inputs\r\n> -> 2245 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)\r\n> 2247 # forward pass to get next token\r\n> 2248 outputs = self(\r\n> 2249 **model_inputs,\r\n> 2250 return_dict=True,\r\n> 2251 output_attentions=output_attentions,\r\n> 2252 output_hidden_states=output_hidden_states,\r\n> 2253 )\r\n> \r\n> File [~/Developer/python39_env/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:736](https://file+.vscode-resource.vscode-cdn.net/Users/kechan/Library/CloudStorage/GoogleDrive-kelvin%40jumptools.com/My%20Drive/LLaMA/notebooks/~/Developer/python39_env/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py:736), in LlamaForCausalLM.prepare_inputs_for_generation(self, input_ids, past_key_values, attention_mask, inputs_embeds, **kwargs)\r\n> 733 position_ids = kwargs.get(\"position_ids\", None)\r\n> 734 if attention_mask is not None and position_ids is None:\r\n> 735 # create position_ids on the fly for batch generation\r\n> --> 736 position_ids = attention_mask.long().cumsum(-1) - 1\r\n> 737 position_ids.masked_fill_(attention_mask == 0, 1)\r\n> 738 if past_key_values:\r\n> \r\n> RuntimeError: MPS does not support cumsum op with int64 input\r\n\r\n\r\nThis seems to happen during greedy search and subsequently precisely 
at:\r\n\r\n`position_ids = attention_mask.long().cumsum(-1) - 1`", "Actually, this could be PyTorch/MPS issue, that the int64 version of cumsum is not implemented. Found the issue there: \r\nhttps://github.com/pytorch/pytorch/issues/96610\r\n\r\nI wonder if long is necessary for attention_mask? should int32 be good enough? \r\n", "According to the issue it should be fixed with a nightly install of PyTorch and MacOS 13.3", "@sgugger thanks for responding. I just updated to 13.3 and the torch nightly, and indeed, no more problem. Closing issue.", "just for fun, increase length to 256\r\n\r\nmy prompt is \"Is facebook a bad company?\"\r\n\r\n\" Is facebook a bad company?\\nI'm not sure if this is the right place to post this, but I'm not sure where else to post it.\\nI'm not a facebook user, but I've heard a lot of bad things about it. I've heard that it's a bad company, that it's a bad product, that it's a bad service, that it's a bad website, that it's a bad social network, that it's a bad company, that it's a bad product, that it's a bad service, that it's a bad website, that it's a bad social network, that it's a bad company, that it's a bad product, that it's a bad service, that it's a bad website, that it's a bad social network, that it's a bad company, that it's a bad product, that it's a bad service, that it's a bad website, that it's a bad social network, that it's a bad company, that it's a bad product, that it's a bad service, that it's a bad website\"\r\n\r\nit started repeating things. Maybe this is 7B, and it would behave better for larger one? \r\n\r\nThis must have not been an encouraging sign for earlier pioneers. So it is amazing openAi stuck at it and arrived all the way to chatGPT level of great.", "This is a problem for me now - running 13.5.2 MacOS, python 3.10.9. Cannot find a solution to this other than workarounds that I can't understand. Any advice on how to get past this? Must be a problem for a lot of people? Thanks in advance. 
", "I have the same issue (RuntimeError: MPS does not support cumsum op with int64 input) with my MacOS Version 14.0 and nightly torch. Any idea how I can solve this issue?", "I have same issue, anyone can help me ?", "m1 macOS 14.1.1 (23B81), also has this problem" ]
1,680
1,700
1,680
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.9.6 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes, I use device='mps' - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction To reproduce, just run this on a M1/M2 Mac with Apple silicon ``` from transformers import LlamaForCausalLM, LlamaTokenizer import torch tokenizer = LlamaTokenizer.from_pretrained('/path/to/weights') model = LlamaForCausalLM.from_pretrained('/path/to/weights') device = torch.device('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu') print(f'Using device: {device}') model = model.to(device) prompt = "Hey, are you conscious? Can you talk to me?" inputs = tokenizer(prompt, return_tensors="pt") inputs = {k: v.to(device) for k, v in inputs.items()} # place on device input_ids = inputs['input_ids'].to(torch.int32) # doesn't appear to help attn_masks = inputs['attention_mask'].to(torch.int32) # doesn't appear to help generate_ids = model.generate(input_ids, max_length=30) ``` ### Expected behavior No error. Will post stack trace.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22502/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22501
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22501/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22501/comments
https://api.github.com/repos/huggingface/transformers/issues/22501/events
https://github.com/huggingface/transformers/pull/22501
1,649,864,442
PR_kwDOCUB6oc5NYO7w
22,501
Generate: `TextIteratorStreamer` (streamer for gradio)
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
MEMBER
null
# What does this PR do? Following the previous streamer PR (#22449), this PR adds a streamer that can be used as an iterator. If we want to use the iterator while generate is running, they must be on separate threads (naturally). The interface looks quite simple, as can be seen in the documentation 🤗 The only kink is the need to call generation on a separate thread, but there is no great way around it (at most we can design an `if` branch inside generate where, if a streamer is used, generate is called in a separate thread... but that seems overkill for now). A Gradio demo running on this branch can be seen [here](https://huggingface.co/spaces/joaogante/chatbot_transformers_streaming). There is pretty much no slowdown compared to a non-streamer call. Inspired by @oobabooga's work (see [this comment](https://github.com/huggingface/transformers/pull/22449#issuecomment-1491311486)).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22501/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22501", "html_url": "https://github.com/huggingface/transformers/pull/22501", "diff_url": "https://github.com/huggingface/transformers/pull/22501.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22501.patch", "merged_at": 1680530678000 }
https://api.github.com/repos/huggingface/transformers/issues/22500
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22500/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22500/comments
https://api.github.com/repos/huggingface/transformers/issues/22500/events
https://github.com/huggingface/transformers/pull/22500
1,649,830,599
PR_kwDOCUB6oc5NYH1D
22,500
Make tiny model creation + pipeline testing more robust
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
COLLABORATOR
null
(After this, we/I can start to look at the currently skipped failing pipeline tests. I will also start to write the documentation on how some CIs are performed on Notion.) # What does this PR do? - make pipeline testing **also** work against **local** tiny models - make the tiny model creation script work with multiple processes - A (renamed) workflow to: - (new steps) create **all** tiny models locally + run pipeline tests against local tiny models - (steps already on `main`) create + upload tiny models for new model **architectures** to Hub and generate a new summary file ### Motivation - make sure any modification to the tiny model creation script doesn't break things - to ease the process of updating the summary file and testing new tiny models on Hub
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22500/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22500", "html_url": "https://github.com/huggingface/transformers/pull/22500", "diff_url": "https://github.com/huggingface/transformers/pull/22500.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22500.patch", "merged_at": 1680795955000 }
https://api.github.com/repos/huggingface/transformers/issues/22499
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22499/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22499/comments
https://api.github.com/repos/huggingface/transformers/issues/22499/events
https://github.com/huggingface/transformers/issues/22499
1,649,735,711
I_kwDOCUB6oc5iVPgf
22,499
Make FlaxPreTrainedModel a Flax Module
{ "login": "cgarciae", "id": 5862228, "node_id": "MDQ6VXNlcjU4NjIyMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cgarciae", "html_url": "https://github.com/cgarciae", "followers_url": "https://api.github.com/users/cgarciae/followers", "following_url": "https://api.github.com/users/cgarciae/following{/other_user}", "gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}", "starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions", "organizations_url": "https://api.github.com/users/cgarciae/orgs", "repos_url": "https://api.github.com/users/cgarciae/repos", "events_url": "https://api.github.com/users/cgarciae/events{/privacy}", "received_events_url": "https://api.github.com/users/cgarciae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "One additional note, in the current state of #22479 `dataclasses.replace` still doesn't work, which is why the `test_clone` test is not passing:\r\n\r\nhttps://github.com/huggingface/transformers/pull/22479/files#diff-abf3849ef52688f44671a0752d5a74bf08c861db1eaaab7a4827e52b17cc9dcbR166-R169", "> One thing to note about `3` is that it will require ALL subclasses to define `__init__`, currently some get it for free. We can fix all standard `transformers` models, but user defined sub-classes that reuse `__init__` will break. If this is not good enough we can try to automatically generate a `__inti__` method during `__init_subclass__`.\r\n\r\nI think this type of users is more advanced and used to seeing breaking changes from time to time. They would typically just pin `transformers` version or update their methods.", "`test_clone` has been fixed by fully mimicking the dataclass signature from the custom `__init__` methods.", "@sanchit-gandhi do you have some time to look into this? ", "Latest PR for the refactor is ongoing: https://github.com/huggingface/transformers/pull/22866", "PR remains ongoing - @cgarciae are you able to see this one through to completion? More than happy to discuss any final design decisions and get you another review on the PR as and when required!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,690
1,690
NONE
null
### Feature request Issue for discussing #22627 As stated, the idea is to make `FlaxPreTrainedModel` a `nn.Module` so Flax users can easily integrate it into other Flax networks or systems that expect Flax Modules. ## Requirements 1. Be backward compatible; it is desirable not to break any existing `transformers` users. 2. When `_do_init=False` the `FlaxPreTrainedModel` should behave like a regular Flax Module, this means you cannot use `__call__` directly and must instead use the usual `apply` method. ## Challenges The main challenge is that Flax Modules are dataclasses and use various dataclasses APIs like `dataclasses.replace`. This implies a couple of things: 1. All fields must be specified as class annotations. 2. Most of the current logic in `__init__` has to be done in `__post_init__`. 3. Constructors with `**kwargs` will be tricky to handle. ## Current approach I've decided to solve this problem the following way: 1. `FlaxPreTrainedModel` will define all the needed dataclass fields and comply with its current signature. `__init__` was refactored into a `__post_init__`. 2. `FlaxPreTrainedModel` sub-classes like `FlaxBertPreTrainedModel` will define their own `__init__` such that they keep their current signature and will then forward everything to `super().__init__`. One thing to note is that sub-classes that inherit from other sub-classes that define `__init__` like `FlaxBertModel(FlaxBertPreTrainedModel)` must define a trivial `__init__` that forwards everything to the parent or else `dataclass` will define one for them according to dataclass semantics. 3. To make `dataclasses.replace` happy, the signature for custom `__init__` functions must accept all the field names as arguments (i.e. must comply with the dataclass signature) even if it will not use some because e.g. the sub-class will define them on its own before forwarding them to the parent class. 4. 
I've made the `params` an `Optional` since I'll be expecting it to be `None` when `_do_init=False` (we should not keep the weights alongside a Flax Module when it's behaving as such). One thing to note about `3` is that it will require ALL subclasses to define `__init__`; currently some get it for free. We can fix all standard `transformers` models, but user-defined sub-classes that reuse `__init__` will break. If this is not good enough we can try to automatically generate an `__init__` method during `__init_subclass__`. cc @patrickvonplaten @sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22499/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22499/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22498
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22498/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22498/comments
https://api.github.com/repos/huggingface/transformers/issues/22498/events
https://github.com/huggingface/transformers/pull/22498
1,649,726,862
PR_kwDOCUB6oc5NXyxW
22,498
Implemented safetensors checkpoints save/load for Trainer
{ "login": "ViktorooReps", "id": 56936206, "node_id": "MDQ6VXNlcjU2OTM2MjA2", "avatar_url": "https://avatars.githubusercontent.com/u/56936206?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ViktorooReps", "html_url": "https://github.com/ViktorooReps", "followers_url": "https://api.github.com/users/ViktorooReps/followers", "following_url": "https://api.github.com/users/ViktorooReps/following{/other_user}", "gists_url": "https://api.github.com/users/ViktorooReps/gists{/gist_id}", "starred_url": "https://api.github.com/users/ViktorooReps/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ViktorooReps/subscriptions", "organizations_url": "https://api.github.com/users/ViktorooReps/orgs", "repos_url": "https://api.github.com/users/ViktorooReps/repos", "events_url": "https://api.github.com/users/ViktorooReps/events{/privacy}", "received_events_url": "https://api.github.com/users/ViktorooReps/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Should be fine now. @sgugger can you have a look?", "One test failed:\r\n```\r\n=================================== FAILURES ===================================\r\n_________________ WhisperModelTest.test_equivalence_pt_to_flax _________________\r\n[gw1] linux -- Python 3.8.12 /home/circleci/.pyenv/versions/3.8.12/bin/python\r\n\r\nself = <tests.models.whisper.test_modeling_whisper.WhisperModelTest testMethod=test_equivalence_pt_to_flax>\r\n\r\n @is_pt_flax_cross_test\r\n def test_equivalence_pt_to_flax(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n init_shape = (1,) + inputs_dict[\"input_features\"].shape[1:]\r\n \r\n for model_class in self.all_model_classes:\r\n with self.subTest(model_class.__name__):\r\n fx_model_class_name = \"Flax\" + model_class.__name__\r\n \r\n if not hasattr(transformers, fx_model_class_name):\r\n # no flax model exists for this class\r\n return\r\n \r\n # Output all for aggressive testing\r\n config.output_hidden_states = True\r\n config.output_attentions = self.has_attentions\r\n \r\n fx_model_class = getattr(transformers, fx_model_class_name)\r\n \r\n # load PyTorch class\r\n pt_model = model_class(config).eval()\r\n # Flax models don't use the `use_cache` option and cache is not returned as a default.\r\n # So we disable `use_cache` here for PyTorch model.\r\n pt_model.config.use_cache = False\r\n \r\n # load Flax class\r\n fx_model = fx_model_class(config, input_shape=init_shape, dtype=jnp.float32)\r\n \r\n # make sure only flax inputs are forward that actually exist in function args\r\n fx_input_keys = inspect.signature(fx_model.__call__).parameters.keys()\r\n \r\n # prepare inputs\r\n pt_inputs = self._prepare_for_class(inputs_dict, model_class)\r\n \r\n # remove function args that don't exist in Flax\r\n pt_inputs = {k: v for k, v in pt_inputs.items() if k in fx_input_keys}\r\n \r\n # send pytorch 
inputs to the correct device\r\n pt_inputs = {\r\n k: v.to(device=torch_device) if isinstance(v, torch.Tensor) else v for k, v in pt_inputs.items()\r\n }\r\n \r\n # convert inputs to Flax\r\n fx_inputs = {k: np.array(v) for k, v in pt_inputs.items() if torch.is_tensor(v)}\r\n \r\n fx_state = convert_pytorch_state_dict_to_flax(pt_model.state_dict(), fx_model)\r\n fx_model.params = fx_state\r\n \r\n # send pytorch model to the correct device\r\n pt_model.to(torch_device)\r\n \r\n with torch.no_grad():\r\n pt_outputs = pt_model(**pt_inputs)\r\n fx_outputs = fx_model(**fx_inputs)\r\n \r\n fx_keys = tuple([k for k, v in fx_outputs.items() if v is not None])\r\n pt_keys = tuple([k for k, v in pt_outputs.items() if v is not None])\r\n \r\n self.assertEqual(fx_keys, pt_keys)\r\n> self.check_pt_flax_outputs(fx_outputs, pt_outputs, model_class)\r\n\r\ntests/models/whisper/test_modeling_whisper.py:865: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_modeling_common.py:2098: in check_pt_flax_outputs\r\n self.check_pt_flax_outputs(\r\ntests/test_modeling_common.py:2123: in check_pt_flax_outputs\r\n self.check_pt_flax_outputs(fx_output, pt_output, model_class, tol=tol, name=attr)\r\ntests/test_modeling_common.py:2152: in check_pt_flax_outputs\r\n self.assertLessEqual(\r\nE AssertionError: 1.1086464e-05 not less than or equal to 1e-05 : outputs.encoder_last_hidden_state: Difference between PyTorch and Flax is 1.1086463928222656e-05 (>= 1e-05).\r\n```\r\nBut I doubt that it is due to the changes added. @sgugger correct me if I am wrong", "Should be good to merge now", "Thanks again for your contribution!" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #22478 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22498/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22498/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22498", "html_url": "https://github.com/huggingface/transformers/pull/22498", "diff_url": "https://github.com/huggingface/transformers/pull/22498.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22498.patch", "merged_at": 1680613504000 }
https://api.github.com/repos/huggingface/transformers/issues/22497
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22497/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22497/comments
https://api.github.com/repos/huggingface/transformers/issues/22497/events
https://github.com/huggingface/transformers/pull/22497
1,649,662,666
PR_kwDOCUB6oc5NXlIp
22,497
Update Neptune callback docstring
{ "login": "normandy7", "id": 7567953, "node_id": "MDQ6VXNlcjc1Njc5NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7567953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/normandy7", "html_url": "https://github.com/normandy7", "followers_url": "https://api.github.com/users/normandy7/followers", "following_url": "https://api.github.com/users/normandy7/following{/other_user}", "gists_url": "https://api.github.com/users/normandy7/gists{/gist_id}", "starred_url": "https://api.github.com/users/normandy7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/normandy7/subscriptions", "organizations_url": "https://api.github.com/users/normandy7/orgs", "repos_url": "https://api.github.com/users/normandy7/repos", "events_url": "https://api.github.com/users/normandy7/events{/privacy}", "received_events_url": "https://api.github.com/users/normandy7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for your PR! For the formatting, could you apply `make style` on your branch (after a `pip install -e .[\"quality\"]`) so that it's auto-fixed? In particular multi-line argument descriptions need to be all in the indented blocks." ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Updates to the `NeptuneCallback` docstring: - Update links to Neptune docs following migration - Formatting ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22497/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22497", "html_url": "https://github.com/huggingface/transformers/pull/22497", "diff_url": "https://github.com/huggingface/transformers/pull/22497.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22497.patch", "merged_at": 1680291515000 }
https://api.github.com/repos/huggingface/transformers/issues/22496
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22496/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22496/comments
https://api.github.com/repos/huggingface/transformers/issues/22496/events
https://github.com/huggingface/transformers/pull/22496
1,649,622,770
PR_kwDOCUB6oc5NXcuu
22,496
feat: Whisper prompting
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey this PR looks really good (although I'll leave the actual review to Sanchit or Arthur). \r\n\r\nI was just wondering whether it also makes sense to support the `condition_on_previous_text` option that the OpenAI repo has, since that uses the same mechanism (using the `<|startofprev|>` token). \r\n\r\nIn addition, there's [this PR](https://github.com/openai/whisper/pull/1040) that suggests an `always_use_initial_prompt` option that uses the prompt on every segment, not just the first. Might be useful to consider that here as well.\r\n", "> Hey this PR looks really good (although I'll leave the actual review to Sanchit or Arthur).\r\n> \r\n> I was just wondering whether it also makes sense to support the `condition_on_previous_text` option that the OpenAI repo has, since that uses the same mechanism (using the `<|startofprev|>` token).\r\n> \r\n> In addition, there's [this PR](https://github.com/openai/whisper/pull/1040) that suggests an `always_use_initial_prompt` option that uses the prompt on every segment, not just the first. Might be useful to consider that here as well.\r\n\r\nHey Matthijs thanks, I'm happy to add what's wanted. Will look for HF guidance on that and whether it should be added here or in a follow on PR. `temperature` was another factor I saw in the Whisper model, if it was > 0.5 no prompt tokens were added ([link](https://github.com/openai/whisper/blob/b5851c6c40e753606765ac45b85b298e3ae9e00d/whisper/transcribe.py#L311-L313)). 
", "To-do list before re-requesting review\r\n\r\n- [x] **Converting the prompt token to an ID in an instance variable gives an incorrect ID, unlike when its called in decode**\r\n--Given we're only using it in two places and it's an inexpensive op to call `convert_tokens_to_ids` I've left this, at least for now, to focus more on the below\r\n- [x] **Bug I found where if the ending text of the prompt matches the start of the transcribed text, that text will not be included in the transcription output. Example:** \r\n--I'm actually not sure this is a bug now. The model has learned to be penalized for repeating itself and this only happens if the end of the prompt matches the beginning of the transcription almost exactly. It also appears to be happening inside the model itself as opposed to in the logits processing or other modification before / after.\r\n<img width=\"779\" alt=\"Screenshot 2023-04-05 at 1 14 03 AM\" src=\"https://user-images.githubusercontent.com/78612354/229986962-269e4564-2b01-405a-a510-fab7d82c2915.png\">\r\n\r\n\r\nAdded from @hollance's below two comments:\r\n- [x] **Add `always_use_initial_prompt` and `condition_on_previous_text` options** to pipeline and `model.generate()`\r\n- [x] **Add prompting functionality to the `automatic-speech-recognition` pipeline**\r\n", "One more thing we'll need to do, is change the `automatic-speech-recognition` pipeline so that it will actually call `model.generate()` with the prompt, but only for the first chunk (or always if we also decide to support an `always_use_initial_prompt` option). This logic cannot be part of the modeling code, as `model.generate()` has no knowledge of which chunk of audio it's processing.", "I looked a bit more into how this works today, and it turns out that 🤗 Transformers does things a bit differently than the original OpenAI code. 
\r\n\r\nOpenAI does the following: \r\n\r\nFor the first 30-second chunk of audio, it passes the following token sequence to the model's decoder on the first iteration: `<|startofprev|> initial prompt<|startoftranscript|><|en|><|transcribe|>`. And then it decodes the rest of the sequence autoregressively.\r\n\r\nThen for the second chunk of audio, it passes the following sequence to the decoder on the first iteration: `<|startofprev|> initial prompt output of the first chunk<|startoftranscript|><|en|><|transcribe|>`. \r\n\r\nFor the next chunk, it uses `<|startofprev|> initial prompt output of the first chunk output of the second chunk<|startoftranscript|><|en|><|transcribe|>`\r\n\r\nAnd so on... This list of tokens that it passes in the `<|startofprev|>` section grows longer and longer with each new chunk. \r\n\r\n(When you set the `condition_on_previous_text` option to False, it only uses the output from the previous chunk instead of the complete history. In that case the initial prompt text is only used for the very first chunk.)\r\n\r\nOur ASR `pipeline` works quite differently. It also splits up the audio in 30-second chunks but they partially overlap, and then it runs the model on these chunks in parallel. That makes it impossible to pass the previous context to these chunks, as each chunk is processed independently. So we have no way of sending `<|startofprev|> initial prompt output of the first chunk<|startoftranscript|><|en|><|transcribe|>` to the second chunk.\r\n\r\nThe best we can do is send `<|startofprev|> initial prompt<|startoftranscript|><|en|><|transcribe|>` to the very first chunk only, or always send it to all chunks. So we ignore the \"previous context\" part and always include the prompt. 
(The latter would do the same as this open [PR on the OpenAI repo](https://github.com/openai/whisper/pull/1040) for always passing the initial prompt inside `<|startofprev|>` instead of the previous context.)\r\n\r\nThe suggested modifications to `model.generate()` in this PR make it possible to have both `initial_prompt` and the `condition_on_previous_text` options as in OpenAI, but it would require the user to write their own processing loop to get the same results as OpenAI. So we should definitely continue with this PR, but if we also want to support `initial_prompt` in the `pipeline` we'll have to decide on which approach we want. (It's not possible to have `condition_on_previous_text` in the current pipeline.)", "> * We can provide a prompt in the pipeline like the below without modifying the pipeline at all, works for me locally. Is this sufficient / what you had in mind?\r\n\r\nYou are correct that when you do the following,\r\n\r\n```python\r\npipe = pipeline(task=\"automatic-speech-recognition\", model=\"openai/whisper-tiny\")\r\nres = pipe(samples, generate_kwargs={ \"prompt_ids\": prompt_ids })\r\n```\r\n\r\nthe pipeline will automatically pass the `prompt_ids` to `model.generate()`. However note that this pipeline only processes the first 30 seconds of the audio file. This is fine for audio that is shorter than 30 seconds.\r\n\r\nHowever, to process an audio file that is longer than 30 seconds, we have to do:\r\n\r\n```python\r\nres = pipe(example, generate_kwargs={ \"prompt_ids\": prompt_ids }, chunk_length_s=30, stride_length_s=[6, 0])\r\n```\r\n\r\nNow the same `prompt_ids` are passed to `model.generate()` for each 30-second chunk. In effect, this is the `always_use_initial_prompt` option.\r\n\r\nTo get the regular `initial_prompt` (i.e. 
`always_use_initial_prompt` disabled) and `condition_on_previous_text` behavior as they work in OpenAI with the current pipeline, we'd have to pass in a `stride_length_s=[0,0]` and `batch_size=1` to make the loop work sequentially rather than in parallel, and somehow keep track of the previous outputs. ", "Ok the additional requested features are now added so I believe this is ready for re-review. Thank you for your comments! \r\n\r\n> However note that this pipeline only processes the first 30 seconds of the audio file. This is fine for audio that is shorter than 30 seconds... In effect, this is the `always_use_initial_prompt` option.\r\n\r\nI think I’m missing something here as I’ve tried this on >1 min of audio in the below example where I also added a debug line to decode the tokens inside of the pipeline as they were generated, and it appears to be properly sequential. In any case, if we don’t want this I’ll remove `condition_on_previous_text` from the pipeline just lmk! \r\n```python\r\npipe = pipeline(task=\"automatic-speech-recognition\", model=\"openai/whisper-tiny\")\r\nres = pipe(samples, generate_kwargs={ \"condition_on_previous_text\": True, \"prompt_ids\": prompt_ids })\r\n# ['<|startofprev|><|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']\r\n# [\"<|startofprev|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Nor is Mr. Quilter's manner less interesting than his matter.<|endoftext|>\"]\r\n# [\"<|startofprev|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel. Nor is Mr. 
Quilter's manner less interesting than his matter.<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He tells us that at this festive season of the year with Christmas and roast beef looming before us, similarly drawn from eating and its results occur most readily to the mind.<|endoftext|>\"]\r\n# [\"<|startofprev|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel. Nor is Mr. Quilter's manner less interesting than his matter. He tells us that at this festive season of the year with Christmas and roast beef looming before us, similarly drawn from eating and its results occur most readily to the mind.<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can discover in it but little of Rocky Ithaca.<|endoftext|>\"]\r\n# [\"<|startofprev|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel. Nor is Mr. Quilter's manner less interesting than his matter. He tells us that at this festive season of the year with Christmas and roast beef looming before us, similarly drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can discover in it but little of Rocky Ithaca.<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Lennils, pictures are a sort of upguards and atom paintings and Mason's exquisite itals are as national as a jingo poem. Mr. Berkett Foster's landscapes smile at one much in the same way that Mr. Carker used to flash his teeth. And Mr. John Collier gives his sitter a cheerful slap on the back before he says like a shampoo or a turkish bath. Next man<|endoftext|>\"]\r\n# [\"<|startofprev|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel. Nor is Mr. Quilter's manner less interesting than his matter. 
He tells us that at this festive season of the year with Christmas and roast beef looming before us, similarly drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can discover in it but little of Rocky Ithaca. Lennils, pictures are a sort of upguards and atom paintings and Mason's exquisite itals are as national as a jingo poem. Mr. Berkett Foster's landscapes smile at one much in the same way that Mr. Carker used to flash his teeth. And Mr. John Collier gives his sitter a cheerful slap on the back before he says like a shampoo or a turkish bath. Next man<|startoftranscript|><|en|><|transcribe|><|notimestamps|> it is obviously unnecessary for us to point out how luminous these criticisms are, how delicate and expression.<|endoftext|>\"]\r\n# [\"<|startofprev|> middle classes, and we are glad to welcome his gospel. Nor is Mr. Quilter's manner less interesting than his matter. He tells us that at this festive season of the year with Christmas and roast beef looming before us, similarly drawn from eating and its results occur most readily to the mind. He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can discover in it but little of Rocky Ithaca. Lennils, pictures are a sort of upguards and atom paintings and Mason's exquisite itals are as national as a jingo poem. Mr. Berkett Foster's landscapes smile at one much in the same way that Mr. Carker used to flash his teeth. And Mr. John Collier gives his sitter a cheerful slap on the back before he says like a shampoo or a turkish bath. Next man it is obviously unnecessary for us to point out how luminous these criticisms are, how delicate and expression.<|startoftranscript|><|en|><|transcribe|><|notimestamps|> On the general principles of art and Mr. 
Quilter writes with equal lucidity.<|endoftext|>\"]\r\n\r\n```\r\n<br> \r\n\r\n>The suggested modifications to model.generate() in this PR make it possible to have both initial_prompt and the condition_on_previous_text options as in OpenAI, but it would require the user to write their own processing loop to get the same results as OpenAI.\r\n\r\nAimed to address this with the new sequential loop over chunks of the input. Right now this way is incompatible with `return_dict_in_generate`=True as I wasn't sure how / if we'd still want to several ModelOutputs, looking for guidance here.\r\n<br> \r\nAlso, there are hacks in a few places related to getting the id of the prompt start token and separating it from the prompt text ids. Would this be something we could add to the model or generation config?", "cc'ing in @gante re `generate` ", ">1. Add the prompt_ids to model.generate() as in your earlier version of the PR. All this does is insert the prompt in the <|startofprev|> section. This doesn't give us the OpenAI functionality yet, it only adds <|startofprev|> support to the modeling and tokenizer code.\r\n\r\nThanks @hollance I definitely agree splitting this into >1 PR is ideal, have pushed back up code for number 1 above so this can just address that portion. It now implicitly does `always_use_initial_prompt`.", "Curious if by adding `return_tensors` to `get_prompt_ids` you're setting up effectively doing `condition_on_previous_text` via cleverly feeding batches / prompts to `model.generate()` calls (i.e. the first chunk of a second model.generate call would use the text from the first chunk of the first model.generate call as a prompt and so on for each chunk in the batch), but that's more of a question for subsequent PRs", "The reason I asked for the `return_tensors` argument is that passing the `prompt_ids` into `model.generate()` as a `torch.LongTensor` instead of `List[int]` is more consistent with how we normally pass tokens into Transformers models. 
I understand that inside the model you might need turn it into a list anyway for the `forced_decoder_ids`, but that's really an internal implementation detail. When we generate, the output token sequence is also a Tensor, and so we can concat this to the previous `prompt_ids` to create the next one, etc. I hope that makes sense. :-)\r\n\r\n", "All right, I think this all looks very good. Pinging @sanchit-gandhi for an additional review since he opened the issue.", "Is there an estimation of when this branch will be merged?", "Rebased to include tolerance increase for unrelated flaky flaky PT-FLAX whisper test", "Thanks for the latest round of changes @connor-henderson! Kindly requesting a final review from @amyeroberts!", "Since we're all happy with it, I'm pinging @amyeroberts from the core maintainers team to have a final look.", "@amyeroberts @connor-henderson \r\nHi All, \r\nThank you for your great contribution, however I would like a raise a little concern. \r\nWe tried to inference the model using this branch and the latest commit and got some weird results. 
\r\nWe provide the audio sample in addition to the prompts for easy reproducing:\r\n[WAV file link](https://drive.google.com/file/d/1kbMEuQv8AmTAyJKkARlwx-wfFhI7uilX/view?usp=sharing)\r\n\r\ncode:\r\n```python\r\nfrom transformers import WhisperForConditionalGeneration, WhisperProcessor\r\nimport torchaudio\r\n\r\n\r\ninput_speech, sr = torchaudio.load(\r\n \"sample.wav\"\r\n)\r\nmodel_name = \"openai/whisper-medium\"\r\nprocessor = WhisperProcessor.from_pretrained(model_name, cache_dir=\"artifacts\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(model_name, cache_dir=\"artifacts\")\r\ninput_features = processor(input_speech.squeeze(), sampling_rate=sr, return_tensors=\"pt\").input_features\r\n\r\n# --- Without prompt ---\r\noutput_without_prompt = model.generate(input_features)\r\nprint(processor.decode(output_without_prompt[0], skip_special_tokens=False))\r\nprint(processor.decode(output_without_prompt[0], skip_special_tokens=True))\r\n\r\n# --- With prompt ---\r\nprompt_ids = processor.get_prompt_ids(\"Mexico city\")\r\noutput_with_prompt = model.generate(input_features, prompt_ids=prompt_ids)\r\nprint(processor.decode(output_with_prompt[0], skip_special_tokens=False))\r\nprint(processor.decode(output_with_prompt[0], skip_special_tokens=True))\r\n```\r\n\r\nand this is the trace:\r\n```\r\n<|startoftranscript|><|en|><|transcribe|><|notimestamps|> San Francisco educators. She was teaching in Mexico City.<|endoftext|>\r\n San Francisco educators. 
She was teaching in Mexico City.\r\n<|startofprev|> Mexico city<|startoftranscript|><|en|><|transcribe|><|notimestamps|> and<|endoftext|>\r\n and\r\n```\r\n\r\nWhen we don't pass prompts we get the expected output, but when we do pass prompts (that appear in the transcription) we end up with a bad output.\r\n\r\nNote that we did not commit any code changes before running this script.\r\n\r\nSystem:\r\n - pytorch 2.0.1\r\n - The test was made on CPU\r\n\r\n", "@AvivSham thanks for sharing, I looked at this and I think it may just be that prompting can be finicky. I believe the model perceives the prompt as previous context, so having 'Mexico city' be followed by 'San Francisco' with no grammar in between might've been viewed as unlikely by the model, which could then have led to further model confusion in successive generations. \r\n\r\nI tried your example with the tiny model and the prompt actually corrected the output, and trying it with the medium Whisper model I was able to repro your issue but also address it by adding a period to the end of the prompt:\r\n\r\n```py\r\n# --- Without prompt ---\r\noutput_without_prompt = model.generate(input_features)\r\nprint(processor.decode(output_with_prompt[0], skip_special_tokens=False))\r\n# <|startoftranscript|><|en|><|transcribe|><|notimestamps|> San Francisco educators. She was teaching in Mexico City.<|endoftext|>\r\nprint(processor.decode(output_with_prompt[0], skip_special_tokens=True))\r\n# San Francisco educators. She was teaching in Mexico City.\r\n\r\n# --- With prompt ---\r\nprompt_ids = processor.get_prompt_ids(\"Mexico city.\") # Added a period to the end\r\noutput_with_prompt = model.generate(input_features, prompt_ids=prompt_ids)\r\nprint(processor.decode(output_with_prompt[0], skip_special_tokens=False))\r\n# <|startofprev|> Mexico city.<|startoftranscript|><|en|><|transcribe|><|notimestamps|> San Francisco educators. 
She was teaching in Mexico city.<|endoftext|>\r\nprint(processor.decode(output_with_prompt[0], skip_special_tokens=True))\r\n# San Francisco educators. She was teaching in Mexico City.\r\n```", "Awesome - thanks for the reviews @amyeroberts and @gante, and for the fast iteration and detailed explanations from you @connor-henderson! Excited to see this PR merged when confirmed as ready 🤗\r\n\r\nRegarding prompt engineering, my advice would by to try and emulate a full sentence, complete with punctuation and casing, since really what we're providing as the 'prompt' is just the target transcription from a previous window (see https://github.com/openai/whisper/discussions/963#discussioncomment-4987057)", "Hi all,\r\nThanks for the great work on adding prompt in 'model.generate'.\r\nIs it possible to add 'initial_prompt' in the Fine-Tune code with a 'prompt_use_rate' to control how often to add prompts to the sentences in training sets?\r\nSo that we may improve the performance for some special prompts via prompt-tuning.", "@AvivSham Thanks for reporting and @connor-henderson thanks for investigating! \r\n\r\nI think we're good to merge 👍 ", "Thank you so much for adding this! 
I've found that I occasionally get the following:\r\n```\r\nTraceback (most recent call last):\r\n File \"G:\\Conda\\hfwhisper\\lib\\site-packages\\transformers\\models\\whisper\\modeling_whisper.py\", line 1662, in generate\r\n return super().generate(\r\n File \"G:\\Conda\\hfwhisper\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"G:\\Conda\\hfwhisper\\lib\\site-packages\\transformers\\generation\\utils.py\", line 1518, in generate\r\n return self.greedy_search(\r\n File \"G:\\Conda\\hfwhisper\\lib\\site-packages\\transformers\\generation\\utils.py\", line 2345, in greedy_search\r\n next_token_logits = outputs.logits[:, -1, :]\r\nIndexError: index -1 is out of bounds for dimension 1 with size 0\r\n```\r\n\r\nMy workaround is to catch the exception and try again without the prompt_ids.", "Do you have a reproducible example for this @dgram0? That seems like a serious enough bug that needs investigating further.", "@Johnson-NLP \r\n\r\n> Is it possible to add 'initial_prompt' in the Fine-Tune code with a 'prompt_use_rate' to control how often to add prompts to the sentences in training sets?\r\n\r\nSounds like an interesting idea. Would you mind opening a new issue for this? Thanks!\r\n", "To get prompting working with fine-tuning, we probably don't want to explicitly add 'prompted' examples per-se, but rather split longer examples up into shorter ones and feed them sequentially through the model, providing previous passages as 'context' to the model.\r\n\r\nFor example, if we had a training sample that looked like:\r\n```\r\nThis is the first sentence. This is the second sentence. And finally, this is the third.\r\n```\r\n\r\nCurrently what we do is feed it to the model all at once:\r\n```\r\n<|startoftranscript|> This is the first sentence. This is the second sentence. And finally, this is the third. 
<|endoftranscript|>\r\n```\r\n\r\nWhat we can do is feed the first sentence in:\r\n```\r\n<|startoftranscript|> This is the first sentence. <|endoftranscript|>\r\n```\r\n\r\nThen the second sentence, with the first sentence as context:\r\n```\r\n<|startofprev|> This is the first sentence.<|startoftranscript|> This is the second sentence. <|endoftranscript|>\r\n```\r\n\r\nAnd then the third, with both the first and second sentences as context:\r\n```\r\n<|startofprev|> This is the first sentence. This is the second sentence.<|startoftranscript|> And finally, this is the third.<|endoftranscript|>\r\n```\r\n\r\nAt inference time, we then just provide the \"context\" as our prompts:\r\n```\r\n<|startofprev|> This is the prompt.<|startoftranscript|> (model generates the rest)\r\n```\r\n\r\nSee section 2.3 of the [Whisper paper](https://arxiv.org/pdf/2212.04356.pdf) for an in-depth explanation as to how they achieve this during pre-training. We essentially want to do the same for fine-tuning.\r\n\r\nFor this to work, ideally we need an original sentence that is >> 30s in duration. That way when we split it up, we don't have super short examples that we feed to the model.", "> Do you have a reproducible example for this @dgram0? That seems like a serious enough bug that needs investigating further.\r\n\r\nI'll try reproducing in a small toy example. It's reproducible on my side with the fine-tuned large private model I've been working with.", "> Do you have a reproducible example for this @dgram0? That seems like a serious enough bug that needs investigating further.\r\n\r\nThe following triggers the bug on the 13th iterations of the loop. 
(Usually, it takes a lot more iterations.)\r\n```\r\nfrom datasets import load_dataset, DatasetDict\r\nfrom transformers import WhisperForConditionalGeneration, WhisperProcessor\r\n\r\nit = iter(load_dataset(\"librispeech_asr\", \"all\", split=\"test.other\", streaming=True))\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-tiny\", language=\"English\", task=\"transcribe\")\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-tiny\")\r\nprompt = 'some text rich in domain specific vocabulary lives here'\r\npast_prompts = [\"I am from the cutter lying off the coast\"]\r\nwhile it:\r\n _ = [next(it) for x in range(3)]\r\n clip = next(it)\r\n input_features = processor(clip['audio']['array'], sampling_rate=clip['audio']['sampling_rate'], return_tensors=\"pt\").input_features\r\n prompt_ids = processor.get_prompt_ids(prompt + ' - ' + ' - '.join(past_prompts))\r\n pred_ids = model.generate(input_features, language=\"english\", task=\"transcribe\", max_new_tokens=128, prompt_ids=prompt_ids)\r\n result = processor.batch_decode(pred_ids, skip_special_tokens=True)[0].strip()\r\n result_text = result.removesuffix('.')\r\n print(result_text)\r\n if result_text != '':\r\n past_prompts.append(result_text)\r\n if len(past_prompts) > 12:\r\n past_prompts = past_prompts[1:]\r\n\r\n```\r\n", "@dgram0 thanks for sharing, I was able to repro this. As far as its relation to prompting, I think this is another case of prompt sensitivity as opposed to a bug, but it may still be of interest with regards to Whisper generally since it's the same error message as issue #22682. \r\n\r\nI noticed that joining the prompts by `' - '` was causing the model to start predicting Chinese characters, and using `'. '` instead did not lead to the error (at least through 30 loops, at that point I stopped testing). 
I did notice degraded predictions over time though, since a period did not necessarily belong after each result, and every now and again a Chinese character was still predicted, so I'd be cautious about how prompts are chained together.", "@connor-henderson It's a bit of a contrived example meant just to recreate the issue without having to loop too much and at the same time show what may be considered a normal use case. Even without it predicting non-English characters or words, you'll eventually encounter the issue within a few hundred loops." ]
1,680
1,706
1,684
CONTRIBUTOR
null
# What does this PR do? Closes #22395, thank you @sanchit-gandhi for the descriptive ask! Note: due to initial scope expansion the commit history includes initial work towards `condition_on_previous_text`, `always_use_initial_prompt`, and pipeline integration, but these efforts have been pushed to a later PR. This pull request adds 3 new functionalities + tests to support initial prompting functionality within Whisper's `model.generate()` and `tokenizer`: - `prompt_ids` param for `model.generate()`: - Optional param of initial prompt ids to provide context for each chunk of text generated in `model.generate()` - `get_prompt_ids` Processor method to create initial prompt ids, from a passed-in string, to pass to generate - Removing the prompt when the tokenizer is decoding if `skip_special_tokens=True` Example new API usage: ```py processor = WhisperProcessor.from_pretrained("openai/whisper-tiny") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny") input_features = processor(input_speech, return_tensors="pt").input_features # --- Without prompt --- output_without_prompt = model.generate(input_features) print(processor.decode(output_without_prompt[0])) # "<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can discover in it but little of Rocky Ithaca.<|endoftext|>" # --- With prompt --- prompt_ids = processor.get_prompt_ids("Leighton") output_with_prompt = model.generate(input_features, prompt_ids=prompt_ids) print(processor.decode(output_with_prompt[0])) # "<|startofprev|> Leighton<|startoftranscript|><|en|><|transcribe|><|notimestamps|> He has grave doubts whether Sir Frederick Leighton's work is really Greek after all and can discover in it but little of Rocky Ithaca.<|endoftext|>" ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. 
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). **Haven't added anywhere outside of documenting the new generate() arg directly on the function** - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
@sanchit-gandhi
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22496/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/22496/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22496", "html_url": "https://github.com/huggingface/transformers/pull/22496", "diff_url": "https://github.com/huggingface/transformers/pull/22496.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22496.patch", "merged_at": 1684485191000 }
https://api.github.com/repos/huggingface/transformers/issues/22495
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22495/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22495/comments
https://api.github.com/repos/huggingface/transformers/issues/22495/events
https://github.com/huggingface/transformers/issues/22495
1,649,533,416
I_kwDOCUB6oc5iUeHo
22,495
Unable to pre-train Roberta from scratch using example/run_mlm.py script
{ "login": "sarthusarth", "id": 17705073, "node_id": "MDQ6VXNlcjE3NzA1MDcz", "avatar_url": "https://avatars.githubusercontent.com/u/17705073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarthusarth", "html_url": "https://github.com/sarthusarth", "followers_url": "https://api.github.com/users/sarthusarth/followers", "following_url": "https://api.github.com/users/sarthusarth/following{/other_user}", "gists_url": "https://api.github.com/users/sarthusarth/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarthusarth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarthusarth/subscriptions", "organizations_url": "https://api.github.com/users/sarthusarth/orgs", "repos_url": "https://api.github.com/users/sarthusarth/repos", "events_url": "https://api.github.com/users/sarthusarth/events{/privacy}", "received_events_url": "https://api.github.com/users/sarthusarth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "roberta is coded in a hacky way which requires you to set its `max_position_embeddings` to the maximum sequence length + 2 (for instance it's 514 for `roberta-base`).", "How can I set that in the script?", "By changing the line creating the config, for instance.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,701
1,683
NONE
null
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13 - Python version: 3.7.12 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.12.1+cu113 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: No ### Who can help? I trained a custom tokenizer and tried pre-training Roberta using the official script https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py Using the parameters: python run_mlm.py \ --model_type roberta \ --tokenizer_name new_gcs/cnn_final/ \ --dataset_name new_gcs/cnn_final/ \ --max_seq_length 512 \ --line_by_line true \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --do_train true \ --do_eval true \ --output_dir ./test-mlm But I get this error and I checked model.vocab is same as len(token) and max_token in sample is 512 as well. 
transformers version is: '4.27.0.dev0' File "run_mlm.py", line 632, in <module> main() File "run_mlm.py", line 581, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1635, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1898, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 2640, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 2672, in compute_loss outputs = model(**inputs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 1109, in forward return_dict=return_dict, File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 850, in forward past_key_values_length=past_key_values_length, File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 128, in forward position_embeddings = self.position_embeddings(position_ids) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 
2199, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self 0%| | 0/8883 [00:00<?, ?it/s] @sgugger @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running the official script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py ### Expected behavior Should train the model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22495/timeline
completed
null
null