Dataset schema (one column per line: name, dtype, value range / number of classes):

url                 stringlengths   62-66
repository_url      stringclasses   1 value
labels_url          stringlengths   76-80
comments_url        stringlengths   71-75
events_url          stringlengths   69-73
html_url            stringlengths   50-56
id                  int64           377M-2.15B
node_id             stringlengths   18-32
number              int64           1-29.2k
title               stringlengths   1-487
user                dict
labels              list
state               stringclasses   2 values
locked              bool            2 classes
assignee            dict
assignees           list
comments            list
created_at          int64           1.54k-1.71k
updated_at          int64           1.54k-1.71k
closed_at           int64           1.54k-1.71k
author_association  stringclasses   4 values
active_lock_reason  stringclasses   2 values
body                stringlengths   0-234k
reactions           dict
timeline_url        stringlengths   71-75
state_reason        stringclasses   3 values
draft               bool            2 classes
pull_request        dict
https://api.github.com/repos/huggingface/transformers/issues/22192
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22192/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22192/comments
https://api.github.com/repos/huggingface/transformers/issues/22192/events
https://github.com/huggingface/transformers/pull/22192
1,626,545,836
PR_kwDOCUB6oc5MKRDR
22,192
Modify electra loss calculation part
{ "login": "BM-K", "id": 55969260, "node_id": "MDQ6VXNlcjU1OTY5MjYw", "avatar_url": "https://avatars.githubusercontent.com/u/55969260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BM-K", "html_url": "https://github.com/BM-K", "followers_url": "https://api.github.com/users/BM-K/followers", "following_url": "https://api.github.com/users/BM-K/following{/other_user}", "gists_url": "https://api.github.com/users/BM-K/gists{/gist_id}", "starred_url": "https://api.github.com/users/BM-K/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BM-K/subscriptions", "organizations_url": "https://api.github.com/users/BM-K/orgs", "repos_url": "https://api.github.com/users/BM-K/repos", "events_url": "https://api.github.com/users/BM-K/events{/privacy}", "received_events_url": "https://api.github.com/users/BM-K/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,679
1,679
NONE
null
`loss_fct(logits.view(-1, self.num_labels), labels.view(-1))` and `loss_fct(logits, labels)` do the same thing; the latter code is more efficient.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22192/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22192", "html_url": "https://github.com/huggingface/transformers/pull/22192", "diff_url": "https://github.com/huggingface/transformers/pull/22192.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22192.patch", "merged_at": null }
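The claim in the PR body above — that the reshaped and unreshaped calls compute the same loss — can be checked with a small standalone snippet (plain torch, no transformers; the batch size and label count are illustrative, not taken from ELECTRA):

```python
import torch
from torch import nn

torch.manual_seed(0)
num_labels = 3
logits = torch.randn(4, num_labels)       # (batch_size, num_labels)
labels = torch.randint(num_labels, (4,))  # (batch_size,)

loss_fct = nn.CrossEntropyLoss()
# For 2-D logits, .view(-1, num_labels) is a no-op reshape,
# so both call styles produce the same scalar loss.
loss_a = loss_fct(logits.view(-1, num_labels), labels.view(-1))
loss_b = loss_fct(logits, labels)
assert torch.allclose(loss_a, loss_b)
```

The `.view` form only matters when logits arrive with extra leading dimensions (e.g. token-level classification), which is why the two are interchangeable here.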
https://api.github.com/repos/huggingface/transformers/issues/22191
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22191/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22191/comments
https://api.github.com/repos/huggingface/transformers/issues/22191/events
https://github.com/huggingface/transformers/issues/22191
1,626,167,393
I_kwDOCUB6oc5g7Vhh
22,191
run_qa.py on custom datasets raise TypeError: __init__() got an unexpected keyword argument 'field'
{ "login": "TongJiL", "id": 43793141, "node_id": "MDQ6VXNlcjQzNzkzMTQx", "avatar_url": "https://avatars.githubusercontent.com/u/43793141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TongJiL", "html_url": "https://github.com/TongJiL", "followers_url": "https://api.github.com/users/TongJiL/followers", "following_url": "https://api.github.com/users/TongJiL/following{/other_user}", "gists_url": "https://api.github.com/users/TongJiL/gists{/gist_id}", "starred_url": "https://api.github.com/users/TongJiL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TongJiL/subscriptions", "organizations_url": "https://api.github.com/users/TongJiL/orgs", "repos_url": "https://api.github.com/users/TongJiL/repos", "events_url": "https://api.github.com/users/TongJiL/events{/privacy}", "received_events_url": "https://api.github.com/users/TongJiL/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It may be installing old versions of the library so you have to pick up the corresponding version of the example (cc @philschmid for the exact versions)", "> It may be installing old versions of the library so you have to pick up the corresponding version of the example (cc @philschmid for the exact versions)\r\n\r\nThank you for your reply! That's also my assumption, I basically just used the train code from: https://huggingface.co/deepset/roberta-base-squad2 under train/SageMaker. Could be that the datasets version is too new in my instance, but in this case, which datasets version would you recommend? Thanks!", "@TongJiL could you share the exact code snippet? ", "@philschmid \r\n```\r\nimport sagemaker\r\nfrom sagemaker.huggingface import HuggingFace\r\n\r\nrole = sagemaker.get_execution_role()\r\nhyperparameters = {\r\n\t'model_name_or_path':'deepset/roberta-base-squad2',\r\n\t'output_dir':'/opt/ml/model'\r\n 'train_file';'/opt/ml/input/data/train/qa_train_data.csv'\r\n}\r\n\r\ngit_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.17.0'}\r\n\r\nhuggingface_estimator = HuggingFace(\r\n\tentry_point='run_qa.py',\r\n\tsource_dir='./examples/pytorch/question-answering',\r\n\tinstance_type='ml.p3.2xlarge',\r\n\tinstance_count=1,\r\n\trole=role,\r\n\tgit_config=git_config,\r\n\ttransformers_version='4.17.0',\r\n\tpytorch_version='1.10.2',\r\n\tpy_version='py38',\r\n\thyperparameters = hyperparameters\r\n)\r\n\r\ndata = {\r\n 'train': \"s3://my_s3_path/qa_train_data.csv\"\r\n}\r\nhuggingface_estimator.fit(data)```", "Turns out the \"filed\" works for Json but not csv.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Based on the following documentation: https://huggingface.co/docs/datasets/loading, the `field=\"data\"` applies when using a code such as the following to load the dataset:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"json\", data_files=\"my_file.json\", field=\"data\")\r\n```\r\n\r\nin which case the code will looking for a JSON file in the following format, where \"data\" is the name of the field in the JSON file where the data is stored:\r\n\r\n```json\r\n{\"version\": \"0.1.0\",\r\n \"data\": [{\"a\": 1, \"b\": 2.0, \"c\": \"foo\", \"d\": false},\r\n {\"a\": 4, \"b\": -5.5, \"c\": null, \"d\": true}]\r\n}\r\n```\r\n\r\nSo this is why csv files won't work." ]
1,678
1,696
1,682
NONE
null
### System Info Hello, I'm trying to train the qa model on SageMaker following the instructions, but I got ```TypeError: __init__() got an unexpected keyword argument 'field'``` issue when trying to use my own datasets. I used a SageMaker instance so it already installs every dependency in requirements.txt. I checked the datasets code and it seems like it does not support "field" anymore? Please fix this issue or let me know if there's something I did wrong. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Running load_datasets gives the same error ### Expected behavior run run_qa.py in sagemaker successfully
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22191/timeline
completed
null
null
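The resolution in the comments of the issue above — `field="data"` selects a nested top-level key that exists in a JSON file but has no CSV equivalent — can be illustrated without the `datasets` library at all (the payload below is made up, matching the shape quoted in the thread):

```python
import json
import tempfile

# A hypothetical file in the nested layout that
# load_dataset("json", data_files=..., field="data") expects.
payload = {
    "version": "0.1.0",
    "data": [{"a": 1, "b": 2.0}, {"a": 4, "b": -5.5}],
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(payload, f)
    path = f.name

with open(path) as f:
    records = json.load(f)["data"]  # what field="data" effectively selects

# A flat CSV is just a stream of rows with no such top-level key,
# which is why the CSV loader has no `field` argument to accept.
assert len(records) == 2
```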
https://api.github.com/repos/huggingface/transformers/issues/22190
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22190/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22190/comments
https://api.github.com/repos/huggingface/transformers/issues/22190/events
https://github.com/huggingface/transformers/pull/22190
1,626,013,431
PR_kwDOCUB6oc5MIdZQ
22,190
Regression pipeline device
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Tests pipelines pass for both PyTorch and TF so merging!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22190). All of your documentation changes will be reflected on that endpoint." ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? Fixes the regression introduced by #21479. Basically doing ```py from transformers import pipeline classifier = pipeline("text-classification", device=-1) ``` now fails on v4.27.0 whereas it used to work on 4.26.1 This PR fixes this and adds a test to avoid future regression. Fixes #22189
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22190/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22190/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22190", "html_url": "https://github.com/huggingface/transformers/pull/22190", "diff_url": "https://github.com/huggingface/transformers/pull/22190.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22190.patch", "merged_at": 1678904019000 }
https://api.github.com/repos/huggingface/transformers/issues/22189
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22189/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22189/comments
https://api.github.com/repos/huggingface/transformers/issues/22189/events
https://github.com/huggingface/transformers/issues/22189
1,625,970,105
I_kwDOCUB6oc5g6lW5
22,189
transformers-cli serve not working
{ "login": "jankrepl", "id": 18519371, "node_id": "MDQ6VXNlcjE4NTE5Mzcx", "avatar_url": "https://avatars.githubusercontent.com/u/18519371?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jankrepl", "html_url": "https://github.com/jankrepl", "followers_url": "https://api.github.com/users/jankrepl/followers", "following_url": "https://api.github.com/users/jankrepl/following{/other_user}", "gists_url": "https://api.github.com/users/jankrepl/gists{/gist_id}", "starred_url": "https://api.github.com/users/jankrepl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jankrepl/subscriptions", "organizations_url": "https://api.github.com/users/jankrepl/orgs", "repos_url": "https://api.github.com/users/jankrepl/repos", "events_url": "https://api.github.com/users/jankrepl/events{/privacy}", "received_events_url": "https://api.github.com/users/jankrepl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "This will be patched very soon, thanks for reporting!", "> This will be patched very soon, thanks for reporting!\r\n\r\nThank you for fixing it so quickly:)" ]
1,678
1,679
1,678
NONE
null
### System Info System info ``` bash - `transformers` version: 4.27.0 - Platform: macOS-12.3.1-arm64-arm-64bit - Python version: 3.8.12 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The following command fails for `transformers[serving]==4.27.0` ```bash transformers-cli serve --task=fill-mask --model=bert-base-uncased ``` this is the traceback ```bash Traceback (most recent call last): File "venv/bin/transformers-cli", line 8, in <module> sys.exit(main()) File "venv/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 54, in main service = args.func(args) File "venv/lib/python3.8/site-packages/transformers/commands/serving.py", line 49, in serve_command_factory nlp = pipeline( File "venv/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 976, in pipeline return pipeline_class(model=model, framework=framework, task=task, **kwargs) File "venv/lib/python3.8/site-packages/transformers/pipelines/base.py", line 773, in __init__ self.model = self.model.to(device=device) File "venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1811, in to return super().to(*args, **kwargs) File "venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1126, in to device, dtype, non_blocking, convert_to_format = torch._C._nn._parse_to(*args, **kwargs) RuntimeError: Device index must not be negative ``` ### Expected behavior However, downgrading to 
`transformers[serving]==4.26.1` fixes the issue ```bash INFO: Started server process [22054] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://localhost:8888 (Press CTRL+C to quit) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22189/timeline
completed
null
null
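The traceback in the issue above fails inside `torch._C._nn._parse_to` because the negative device index `-1` is passed straight through to `Module.to`. A hedged sketch of the normalization such a fix needs — the helper name here is invented, the real patch lives inside the pipeline code:

```python
import torch

def resolve_device(device):
    """Map the pipeline convention (-1 = CPU, n >= 0 = cuda:n) to a torch.device."""
    if isinstance(device, int):
        # Negative indices mean CPU; torch.device itself rejects them.
        return torch.device("cpu") if device < 0 else torch.device(f"cuda:{device}")
    return torch.device(device)

assert resolve_device(-1) == torch.device("cpu")
assert resolve_device(0) == torch.device("cuda:0")
```

Translating the sentinel before calling `.to()` is what restores the 4.26.1 behavior the reporter describes.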
https://api.github.com/repos/huggingface/transformers/issues/22188
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22188/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22188/comments
https://api.github.com/repos/huggingface/transformers/issues/22188/events
https://github.com/huggingface/transformers/issues/22188
1,625,966,407
I_kwDOCUB6oc5g6kdH
22,188
XGLMForCausalLM does not support `device_map='auto'` for load 8 bit
{ "login": "tontan1998", "id": 127051609, "node_id": "U_kgDOB5KnWQ", "avatar_url": "https://avatars.githubusercontent.com/u/127051609?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tontan1998", "html_url": "https://github.com/tontan1998", "followers_url": "https://api.github.com/users/tontan1998/followers", "following_url": "https://api.github.com/users/tontan1998/following{/other_user}", "gists_url": "https://api.github.com/users/tontan1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/tontan1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tontan1998/subscriptions", "organizations_url": "https://api.github.com/users/tontan1998/orgs", "repos_url": "https://api.github.com/users/tontan1998/repos", "events_url": "https://api.github.com/users/tontan1998/events{/privacy}", "received_events_url": "https://api.github.com/users/tontan1998/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "cc @younesbelkada ", "Should be addressed in #22207 ! ", "hi @tontan1998 \r\nYou can now benefit from XGLM 8bit on the `main` branch of `transformers`:\r\n\r\n```bash\r\npip install git+https://github.com/huggingface/transformers.git\r\n```", "> hi @tontan1998 You can now benefit from XGLM 8bit on the `main` branch of `transformers`:\r\n> \r\n> ```shell\r\n> pip install git+https://github.com/huggingface/transformers.git\r\n> ```\r\n\r\nThank you!" ]
1,678
1,678
1,678
NONE
null
### System Info transformers: v4.27.0 ### Who can help? @sgugger @muellerzr ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I used this code. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "facebook/xglm-1.7B" tokenizer = AutoTokenizer.from_pretrained(model_name) model_8bit = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map='auto') ``` Error: ```python Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[5], line 3 1 model_name = "facebook/xglm-1.7B" 2 tokenizer = AutoTokenizer.from_pretrained(model_name) ----> 3 model_8bit = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map='auto') File /usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py:471, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 469 elif type(config) in cls._model_mapping.keys(): 470 model_class = _get_model_class(config, cls._model_mapping) --> 471 return model_class.from_pretrained( 472 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs 473 ) 474 raise ValueError( 475 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n" 476 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}." 
477 ) File /usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py:2556, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 2550 special_dtypes = { 2551 name: torch.float32 2552 for name, _ in model.named_parameters() 2553 if any(m in name for m in keep_in_fp32_modules) 2554 } 2555 if model._no_split_modules is None: -> 2556 raise ValueError(f"{model.__class__.__name__} does not support `device_map='{device_map}'` yet.") 2557 no_split_modules = model._no_split_modules 2558 if device_map not in ["auto", "balanced", "balanced_low_0", "sequential"]: ValueError: XGLMForCausalLM does not support `device_map='auto'` yet. ``` ### Expected behavior XGLMForCausalLM should support `device_map='auto'`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22188/timeline
completed
null
null
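The `ValueError` in the traceback above comes from a simple guard: `from_pretrained` refuses any `device_map` when the model class has not declared `_no_split_modules`. A toy reproduction of that gate, using an invented class name rather than the real transformers code:

```python
class ToyModel:
    # Real transformers models set this to a list of module names that must
    # not be split across devices; None means "device_map not supported yet".
    _no_split_modules = None

def check_device_map_support(model, device_map):
    """Mimic the guard seen in the traceback from modeling_utils."""
    if model._no_split_modules is None:
        raise ValueError(
            f"{model.__class__.__name__} does not support `device_map='{device_map}'` yet."
        )
    return model._no_split_modules

try:
    check_device_map_support(ToyModel(), "auto")
except ValueError as e:
    message = str(e)

assert "does not support" in message
```

This is why the fix referenced in the comments (#22207) amounts to declaring `_no_split_modules` on `XGLMForCausalLM`.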
https://api.github.com/repos/huggingface/transformers/issues/22187
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22187/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22187/comments
https://api.github.com/repos/huggingface/transformers/issues/22187/events
https://github.com/huggingface/transformers/pull/22187
1,625,916,759
PR_kwDOCUB6oc5MIIbU
22,187
Revert 22152 MaskedImageCompletionOutput changes
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Will do a patch asap" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? Reverts the breaking changes introduced by #22152. Temporary fix until it's decided how to change the model output. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? cc @alaradirik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22187/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22187", "html_url": "https://github.com/huggingface/transformers/pull/22187", "diff_url": "https://github.com/huggingface/transformers/pull/22187.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22187.patch", "merged_at": 1678901843000 }
https://api.github.com/repos/huggingface/transformers/issues/22186
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22186/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22186/comments
https://api.github.com/repos/huggingface/transformers/issues/22186/events
https://github.com/huggingface/transformers/pull/22186
1,625,866,421
PR_kwDOCUB6oc5MH9ko
22,186
Fix `ViTForMaskedImageModeling` doc example
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ydshieh @alaradirik @fxmarty The issue coming from #22152 was an oversight on my part about breaking changes. Perhaps we should revert that PR first and then agree how to introduce this change as it is intended to be added to other vision model? ", "Oh, I thought it was a new model head! Indeed a breaking change there. Good for me to revert that PR, but would be nice to talk to Sylvain or Lysandre first (if you feel necessary). I will leave you judge.\r\n\r\nRegarding a solution if we really want to have this new attribute and the new name `MaskedImageCompletionOutput`, adding a new property (named `logits`) to `MaskedImageCompletionOutput` might be a way, but I didn't think about this deeply.", "_The documentation is not available anymore as the PR was closed or merged._", "Converted to draft for now", "@ydshieh Yep - let's get @LysandreJik and @sgugger 's opinions. \r\n\r\nI think having the `logits` param is probably the best solution. As far as I know, it's very rare to check against the model output type itself. I believe `reconstruction` was chosen because of the `ImageSuperResolutionOutput` data class. As mentioned in the original PR - we probably do want a different model type to be returned as the documented shapes are incorrect. \r\n\r\n@alaradirik - could you open a PR to revert the changes? ", "Actually, it's late for @alaradirik. I'll open the PR now.", "Yes we can't rename the parameter in the outputs like that for a model that has been around for a bit. What is even more annoying is that the commit was in the release, so we will need to make a patch with the fix.", "Close this PR as it's clear we will and have to definitely use the original `logits`.", "Sorry for being late to comment, I added the `MaskedImageCompletionOutput` to replace the inaccurate `MaskedLMOutput `class used by the masked image modeling heads (ViT and DeiT). Neither of these models have any checkpoints on the hub as mentioned in #22152 . 
Swin's MIM head has its own output class but no fine-tuned checkpoints for the MIM task either. \r\n\r\nWith that said, ViT and Swin's MIM heads are implementations of [SimMIM](https://arxiv.org/abs/2111.09886) and SimMIM have recently released fine-tuned checkpoints for these two models (as opposed to the base model weights on the hub for Swin MIM head). I'm planning to convert these checkpoints and add a `masked-image-completion` pipeline after @sheonhan merges the ICT PR (a contemporary, better performing MIM model). It'd be great to add an output class and fix inaccurate class output (listed as a language model in the docs) before that. While logits is not an accurate output name in this case as the model returns full reconstructed images, I could replace `reconstruction` with `logits` and open a new PR.\r\n\r\nWhat do you think @amyeroberts @sgugger?\r\nCC @LysandreJik @ydshieh " ]
1,678
1,679
1,678
COLLABORATOR
null
# What does this PR do? Same #22185
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22186/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22186", "html_url": "https://github.com/huggingface/transformers/pull/22186", "diff_url": "https://github.com/huggingface/transformers/pull/22186.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22186.patch", "merged_at": null }
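One option floated in the discussion above — keeping `reconstruction` as the stored field while exposing a backward-compatible `logits` property — can be sketched with a plain dataclass. This is an illustration of the idea only, not the class that was actually merged:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class MaskedImageOutputSketch:
    # New, more accurate name for the full reconstructed image.
    reconstruction: Optional[Any] = None

    @property
    def logits(self):
        # Backward-compatible alias so code written against the old
        # output attribute keeps working unchanged.
        return self.reconstruction

out = MaskedImageOutputSketch(reconstruction=[[0.1, 0.2]])
assert out.logits is out.reconstruction
```

A property alias avoids the hard break that motivated the revert, at the cost of carrying two names for one field.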
https://api.github.com/repos/huggingface/transformers/issues/22185
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22185/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22185/comments
https://api.github.com/repos/huggingface/transformers/issues/22185/events
https://github.com/huggingface/transformers/pull/22185
1,625,785,729
PR_kwDOCUB6oc5MHr-8
22,185
Fix ViTForMaskedImageModeling example in documentation
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@fxmarty I am just going to open a PR, but you are too fast! Thanks.", "@ydshieh It could be I missed it elsewhere, so feel free to push here / do an other PR!", "You have to check your CircleCI setting however. The CI has issue to run in this PR.", "OK, nothing missed, but as the CI has some problem, I am going to open PR.", "I will make sure you are a contributor in #22186 @fxmarty . Close this one as mentioned above.", "Ah yes I did not configure SSO with CircleCI, maybe that's the issue." ]
1,678
1,678
1,678
COLLABORATOR
null
Following https://github.com/huggingface/transformers/pull/22152, `logits` is not the right output key name. cc @alaradirik @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22185/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22185", "html_url": "https://github.com/huggingface/transformers/pull/22185", "diff_url": "https://github.com/huggingface/transformers/pull/22185.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22185.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22184
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22184/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22184/comments
https://api.github.com/repos/huggingface/transformers/issues/22184/events
https://github.com/huggingface/transformers/pull/22184
1,625,731,365
PR_kwDOCUB6oc5MHgO7
22,184
Fix: unfinished_sequences with correct device
{ "login": "Stxr", "id": 18367238, "node_id": "MDQ6VXNlcjE4MzY3MjM4", "avatar_url": "https://avatars.githubusercontent.com/u/18367238?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Stxr", "html_url": "https://github.com/Stxr", "followers_url": "https://api.github.com/users/Stxr/followers", "following_url": "https://api.github.com/users/Stxr/following{/other_user}", "gists_url": "https://api.github.com/users/Stxr/gists{/gist_id}", "starred_url": "https://api.github.com/users/Stxr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Stxr/subscriptions", "organizations_url": "https://api.github.com/users/Stxr/orgs", "repos_url": "https://api.github.com/users/Stxr/repos", "events_url": "https://api.github.com/users/Stxr/events{/privacy}", "received_events_url": "https://api.github.com/users/Stxr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "The following code will reproduce the error:\r\n```python\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModel\r\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n\r\nclass Wrapper(torch.nn.Module):\r\n \"\"\"\r\n Wrapper for the model to be traced\r\n \"\"\"\r\n def __init__(self):\r\n super().__init__()\r\n self.model = AutoModel.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True).half().to(device)\r\n self.tokenizer = AutoTokenizer.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True)\r\n\r\n def forward(self, input_ids):\r\n self.model.eval()\r\n input_ids = input_ids.to(device)\r\n gen_kwargs = {\"max_length\": 2048, \"num_beams\": 1, \"do_sample\": True, \"top_p\": 0.7,\r\n \"temperature\": 0.95}\r\n outputs = self.model.generate(input_ids=input_ids, **gen_kwargs)\r\n return outputs[0, len(input_ids[0]) - 2:]\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True)\r\nquery = \"hello\"\r\ninput = tokenizer([query], return_tensors=\"pt\", padding=True)\r\nmodel = Wrapper()\r\ntorch.jit.trace(model, (input.input_ids,)).save(\"chatglm-6b.pt\")\r\n```", "_The documentation is not available anymore as the PR was closed or merged._", "Interestingly, I don't see `.new()` on pytorch's docs. Good thing we're removing it :D " ]
1,678
1,678
1,678
CONTRIBUTOR
null
The original code was causing errors when running torch.jit.trace due to the tensor options being incorrect. I fixed this by using torch.ones to create a tensor with the correct device and dtype. This should resolve the issue with running torch.jit.trace. # What does this PR do? This PR fixes a bug that causes errors when running torch.jit.trace in transformers. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a bug. - [x] I read the contributor guideline. - [ ] This was discussed in issue # (insert issue number here). - [ ] I updated the documentation. - [ ] I wrote new tests. ## Who can review? Please tag @ArthurZucker and @younesbelkada for text models review. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22184/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22184", "html_url": "https://github.com/huggingface/transformers/pull/22184", "diff_url": "https://github.com/huggingface/transformers/pull/22184.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22184.patch", "merged_at": 1678897639000 }
https://api.github.com/repos/huggingface/transformers/issues/22183
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22183/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22183/comments
https://api.github.com/repos/huggingface/transformers/issues/22183/events
https://github.com/huggingface/transformers/pull/22183
1,625,630,957
PR_kwDOCUB6oc5MHKlf
22,183
Italian Translation of migration.mdx
{ "login": "Baelish03", "id": 97971495, "node_id": "U_kgDOBdbtJw", "avatar_url": "https://avatars.githubusercontent.com/u/97971495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Baelish03", "html_url": "https://github.com/Baelish03", "followers_url": "https://api.github.com/users/Baelish03/followers", "following_url": "https://api.github.com/users/Baelish03/following{/other_user}", "gists_url": "https://api.github.com/users/Baelish03/gists{/gist_id}", "starred_url": "https://api.github.com/users/Baelish03/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Baelish03/subscriptions", "organizations_url": "https://api.github.com/users/Baelish03/orgs", "repos_url": "https://api.github.com/users/Baelish03/repos", "events_url": "https://api.github.com/users/Baelish03/events{/privacy}", "received_events_url": "https://api.github.com/users/Baelish03/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @Baelish03!\r\nCan you move your changes in toctree between *big_models* and *debugging*?\r\n\r\n```\r\n- local: big_models\r\n title: Istanziare un big model\r\n- local: migration\r\n title: Passaggio da pacchetti precedenti\r\n- local: debugging\r\n title: Debugging\r\n```\r\n\r\nThe translation LGTM, only two sentences sound strange, I propose this modification:\r\n\r\n192: Se chiamavi i modelli con nomi di parole chiave per argomenti di parole chiave, ad esempio `model(inputs_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, questo non dovrebbe causare alcun cambiamento. --> Se inizializzavi i modelli usando parole chiave per gli argomenti, ad esempio `model(inputs_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, questo non dovrebbe causare alcun cambiamento.\r\n\r\n194: Se chiamavi i modelli con input posizionali per argomenti di parole chiave, ad esempio `model(inputs_ids, attention_mask, token_type_ids)`, potrebbe essere necessario ricontrollare l'ordine esatto degli argomenti di input. --> Se inizializzavi i modelli con input posizionali gli argomenti, ad esempio `model(inputs_ids, attention_mask, token_type_ids)`, potrebbe essere necessario ricontrollare l'ordine esatto degli argomenti di input.", "Thanks for the advice, I re-upload files with corrections", "Thanks @Baelish03 \r\nLGTM @sgugger, @stevhliu and @MKhalusova @omarespejel" ]
1,678
1,678
1,678
CONTRIBUTOR
null
<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> See issue [#17459] Add italian translation of migration.mdx and update _toctree.yml. It's my first pull request, so i hope it's ok <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22183/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22183", "html_url": "https://github.com/huggingface/transformers/pull/22183", "diff_url": "https://github.com/huggingface/transformers/pull/22183.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22183.patch", "merged_at": 1678968008000 }
https://api.github.com/repos/huggingface/transformers/issues/22182
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22182/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22182/comments
https://api.github.com/repos/huggingface/transformers/issues/22182/events
https://github.com/huggingface/transformers/pull/22182
1,625,614,233
PR_kwDOCUB6oc5MHHF1
22,182
Add Video Mask2Former
{ "login": "shivalikasingh95", "id": 73357305, "node_id": "MDQ6VXNlcjczMzU3MzA1", "avatar_url": "https://avatars.githubusercontent.com/u/73357305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shivalikasingh95", "html_url": "https://github.com/shivalikasingh95", "followers_url": "https://api.github.com/users/shivalikasingh95/followers", "following_url": "https://api.github.com/users/shivalikasingh95/following{/other_user}", "gists_url": "https://api.github.com/users/shivalikasingh95/gists{/gist_id}", "starred_url": "https://api.github.com/users/shivalikasingh95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shivalikasingh95/subscriptions", "organizations_url": "https://api.github.com/users/shivalikasingh95/orgs", "repos_url": "https://api.github.com/users/shivalikasingh95/repos", "events_url": "https://api.github.com/users/shivalikasingh95/events{/privacy}", "received_events_url": "https://api.github.com/users/shivalikasingh95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22182). All of your documentation changes will be reflected on that endpoint.", "> I suggest that instead of adapting the logic within the current `Mask2Former` layers and image processor, a new model `Mask2FormerVideo` is added, with its own image processor and modeling file. This would enable simpler logic in the forward passes for the respective models, and an image processor which can take and return videos directly.\r\n> \r\n> This PR is in a good state, so it should be fairly simple to make this change but of course let us know if you have any questions about this implementation.\r\n\r\nHey @amyeroberts , thanks a lot for the review :)\r\n\r\nRegarding the structure...Alara and I actually had a discussion regarding this earlier...whether to add this as a separate model or not but we felt that since video mask2former is not very different from original mask2former, it would be alright if we just modify the existing implementation to handle both video and image. \r\n\r\nBut I'd be happy to convert this into a separate model.\r\nJust want to quickly double check with @alaradirik too before proceeding as this relates to our earlier discussion.\r\n", "@shivalikasingh95 I'm fine with going either way since the video segmentation model only has minor differences but perhaps we could keep most of changes to the existing sub classes and add a new head class - `Mask2FormerForVideoSegmentation` and add it to the model docs. I think this would boost the visibility and usage of the model as well, what do you think?\r\n\r\nJust asking for future models @amyeroberts, would we have a video_processing_xxx.py file and `VideoProcessorXXX` class to process videos? 
We have talked about creating video processing utilities with @shivalikasingh95 before but I wasn't sure about the best way to handle it.", "> @shivalikasingh95 I'm fine with going either way since the video segmentation model only has minor differences but perhaps we could keep most of changes to the existing sub classes and add a new head class - `Mask2FormerForVideoSegmentation` and add it to the model docs. I think this would boost the visibility and usage of the model as well, what do you think?\r\n\r\n@alaradirik I think adding a new head class - `Mask2FormerForVideoSegmentation` is a really good idea! \r\n\r\n> Just asking for future models @amyeroberts, would we have a video_processing_xxx.py file and `VideoProcessorXXX` class to process videos? \r\n\r\nIf we can add something like `Mask2FormerVideoProcessor` which can handle videos directly then that would be perfect.\r\n______________________________________________________________________________________________\r\n\r\nI'm not fully sure if it would make sense to add a separate modeling file altogether for video-mask2former since the authors wanted to show how easily we can use mask2former for video segmentation too.\r\n\r\nQuote from the paper -:\r\n_\"We find Mask2Former also achieves state-of-the-art performance on video instance segmentation without modifying the architecture, the loss or even the training pipeline\"_\r\n\r\nAnd hence, implementation wise there isn't really much difference.\r\n______________________________________________________________________________________________\r\n\r\nI think adding a new head and video processor class should help in making the implementation cleaner.\r\n\r\nBut again, I'm not sure if a model can have both an ImageProcessor and VideoProcessor class. 
If you guys feel, this may not be the best approach then may be we can go for turning this into a separate model.\r\n", "> I'm not fully sure if it would make sense to add a separate modeling file altogether for video-mask2former \r\n\r\nI agree that having a separate model implementation isn't in line with the spirit of the model. However, at the moment, supporting video inputs does require a modification of architecture, as shown by the need for the `is_video` flag throughout `modeling_mask2formers.py`. \r\n\r\n> If we can add something like Mask2FormerVideoProcessor which can handle videos directly then that would be perfect.\r\n\r\nI don't think a `VideoProcessor` class is necessary at the moment. We already have models with video inputs e.g. [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) which use image processors.\r\nIf we want to use the same processing class as the image model, adding a method such as `preprocess_video` is a possibility. This does mean the processor won't be compatible with the usual API, i.e. it could not be directly called `image_processor(video_inputs)`. However, the current method `post_process_video_instance_segmentation` also breaks this. Having a separate modeling file resolves the issue of one class handling both images and videos. \r\n\r\n", "> Having a separate modeling file resolves the issue of one class handling both images and videos.\r\n\r\nSure @amyeroberts. I understand now why adding a separate modeling file would make more sense. Had a discussion with @alaradirik too regarding this change on Friday. I'll take care of adding the new modeling file.\r\n\r\n>I don't think a VideoProcessor class is necessary at the moment. We already have models with video inputs e.g. [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) which use image processors.\r\n\r\nAgain makes sense, @amyeroberts. 
I only meant to suggest adding a VideoProcessor class for Mask2Former in case you guys were planning on introducing VideoProcessor classes in general as part of transformers library.\r\n\r\n>If we want to use the same processing class as the image model, adding a method such as preprocess_video is a possibility. This does mean the processor won't be compatible with the usual API, i.e. it could not be directly called image_processor(video_inputs)\r\n\r\nWould adding a separate image processor class, `VideoMask2FormerImageProcessor` make sense if we don't want to have the same image processing class for image and video models?\r\nThis way the usual API behaviour would not be broken. In this case, we can directly use `image_processor(video_inputs)`.\r\n\r\n", ">Would adding a separate image processor class, VideoMask2FormerImageProcessor make sense if we don't want to have the same image processing class for image and video models?\r\nThis way the usual API behaviour would not be broken. In this case, we can directly use image_processor(video_inputs).\r\n\r\n@shivalikasingh95 - yep, I think that works! ", "@alaradirik and @amyeroberts please feel free to review this PR.\r\n\r\nI'm just getting a few failing CI checks due to [this error](https://app.circleci.com/pipelines/github/huggingface/transformers/62275/workflows/568ede06-a91a-4e5c-a02c-6bbafcbbcd64/jobs/766732). Would be great if I can get some help on how to fix it.", "@amyeroberts could you do a final review, Video Mask2Former is a separate model now and the PR is in good shape :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,685
1,685
CONTRIBUTOR
null
### What does this PR do? This PR adds Video Mask2Former model. Original repo: https://github.com/facebookresearch/Mask2Former/ Mask2Former for Video Instance Segmentation Paper: https://arxiv.org/abs/2112.10764 Co-authored with: @alaradirik - [x] Update model checkpoints - [x] Update model cards - [ ] transfer model checkpoints to facebook organization ### Who can review? @alaradirik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22182/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22182", "html_url": "https://github.com/huggingface/transformers/pull/22182", "diff_url": "https://github.com/huggingface/transformers/pull/22182.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22182.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22181
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22181/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22181/comments
https://api.github.com/repos/huggingface/transformers/issues/22181/events
https://github.com/huggingface/transformers/issues/22181
1,625,465,530
I_kwDOCUB6oc5g4qK6
22,181
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name '_rename_parameter' from 'scipy._lib._util' (/databricks/python/lib/python3.8/site-packages/scipy/_lib/_util.py)
{ "login": "k3ybladewielder", "id": 50303964, "node_id": "MDQ6VXNlcjUwMzAzOTY0", "avatar_url": "https://avatars.githubusercontent.com/u/50303964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/k3ybladewielder", "html_url": "https://github.com/k3ybladewielder", "followers_url": "https://api.github.com/users/k3ybladewielder/followers", "following_url": "https://api.github.com/users/k3ybladewielder/following{/other_user}", "gists_url": "https://api.github.com/users/k3ybladewielder/gists{/gist_id}", "starred_url": "https://api.github.com/users/k3ybladewielder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/k3ybladewielder/subscriptions", "organizations_url": "https://api.github.com/users/k3ybladewielder/orgs", "repos_url": "https://api.github.com/users/k3ybladewielder/repos", "events_url": "https://api.github.com/users/k3ybladewielder/events{/privacy}", "received_events_url": "https://api.github.com/users/k3ybladewielder/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @k3ybladewielder, thanks for raising this issue.\r\n\r\nThe error is coming from python not being able to import a specific scipy module and so isn't a `transformers` related bug per se. \r\n\r\nI would try reinstalling transformers with a package manager and the `sklearn` option as it should install all the necessary dependencies in your environment: `pip install transformers[sklearn] --force-reinstall`", "@amyeroberts thanks for the answer. Its works", "**can yall help me too:**\r\n```\r\nfrom transformers import pipeline\r\n```\r\n\r\n```\r\nFailed to import transformers.pipelines because of the following error (look up to see its traceback):\r\nUnable to convert function return value to a Python type! The signature was\r\n\t() -> handle\r\n```\r\n\r\n**full trace of error:**\r\n\r\n```\r\nTypeError Traceback (most recent call last)\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py:1099, in _LazyModule._get_module(self, module_name)\r\n 1098 try:\r\n-> 1099 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1100 except Exception as e:\r\n\r\nFile ~/anaconda3/lib/python3.9/importlib/__init__.py:127, in import_module(name, package)\r\n 126 level += 1\r\n--> 127 return _bootstrap._gcd_import(name[level:], package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1030, in _gcd_import(name, package, level)\r\n\r\nFile <frozen importlib._bootstrap>:1007, in _find_and_load(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:986, in _find_and_load_unlocked(name, import_)\r\n\r\nFile <frozen importlib._bootstrap>:680, in _load_unlocked(spec)\r\n\r\nFile <frozen importlib._bootstrap_external>:850, in exec_module(self, module)\r\n\r\nFile <frozen importlib._bootstrap>:228, in _call_with_frames_removed(f, *args, **kwds)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/pipelines/__init__.py:44, in <module>\r\n 35 from ..utils import (\r\n 36 HUGGINGFACE_CO_RESOLVE_ENDPOINT,\r\n 37 
is_kenlm_available,\r\n (...)\r\n 42 logging,\r\n 43 )\r\n---> 44 from .audio_classification import AudioClassificationPipeline\r\n 45 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/pipelines/audio_classification.py:21, in <module>\r\n 20 from ..utils import add_end_docstrings, is_torch_available, is_torchaudio_available, logging\r\n---> 21 from .base import PIPELINE_INIT_ARGS, Pipeline\r\n 24 if is_torch_available():\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/pipelines/base.py:35, in <module>\r\n 34 from ..image_processing_utils import BaseImageProcessor\r\n---> 35 from ..modelcard import ModelCard\r\n 36 from ..models.auto.configuration_auto import AutoConfig\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/modelcard.py:48, in <module>\r\n 32 from .models.auto.modeling_auto import (\r\n 33 MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES,\r\n 34 MODEL_FOR_CAUSAL_LM_MAPPING_NAMES,\r\n (...)\r\n 46 MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES,\r\n 47 )\r\n---> 48 from .training_args import ParallelMode\r\n 49 from .utils import (\r\n 50 MODEL_CARD_NAME,\r\n 51 cached_file,\r\n (...)\r\n 57 logging,\r\n 58 )\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/training_args.py:30, in <module>\r\n 29 from .debug_utils import DebugOption\r\n---> 30 from .trainer_utils import (\r\n 31 EvaluationStrategy,\r\n 32 FSDPOption,\r\n 33 HubStrategy,\r\n 34 IntervalStrategy,\r\n 35 SchedulerType,\r\n 36 ShardedDDPOption,\r\n 37 )\r\n 38 from .utils import (\r\n 39 ExplicitEnum,\r\n 40 cached_property,\r\n (...)\r\n 53 requires_backends,\r\n 54 )\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/trainer_utils.py:48, in <module>\r\n 47 if is_tf_available():\r\n---> 48 import tensorflow as tf\r\n 51 def seed_worker(_):\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/__init__.py:37, in <module>\r\n 35 
import typing as _typing\r\n---> 37 from tensorflow.python.tools import module_util as _module_util\r\n 38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/__init__.py:42, in <module>\r\n 39 # pylint: enable=wildcard-import\r\n 40 \r\n 41 # Bring in subpackages.\r\n---> 42 from tensorflow.python import data\r\n 43 from tensorflow.python import distribute\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/__init__.py:21, in <module>\r\n 20 # pylint: disable=unused-import\r\n---> 21 from tensorflow.python.data import experimental\r\n 22 from tensorflow.python.data.ops.dataset_ops import AUTOTUNE\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/experimental/__init__.py:96, in <module>\r\n 95 # pylint: disable=unused-import\r\n---> 96 from tensorflow.python.data.experimental import service\r\n 97 from tensorflow.python.data.experimental.ops.batching import dense_to_ragged_batch\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/experimental/service/__init__.py:419, in <module>\r\n 15 \"\"\"API for using the tf.data service.\r\n 16 \r\n 17 This module contains:\r\n (...)\r\n 416 job of ParameterServerStrategy).\r\n 417 \"\"\"\r\n--> 419 from tensorflow.python.data.experimental.ops.data_service_ops import distribute\r\n 420 from tensorflow.python.data.experimental.ops.data_service_ops import from_dataset_id\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/experimental/ops/data_service_ops.py:22, in <module>\r\n 21 from tensorflow.python import tf2\r\n---> 22 from tensorflow.python.data.experimental.ops import compression_ops\r\n 23 from tensorflow.python.data.experimental.service import _pywrap_server_lib\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/experimental/ops/compression_ops.py:16, in <module>\r\n 15 \"\"\"Ops for compressing and uncompressing 
dataset elements.\"\"\"\r\n---> 16 from tensorflow.python.data.util import structure\r\n 17 from tensorflow.python.ops import gen_experimental_dataset_ops as ged_ops\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/util/structure.py:22, in <module>\r\n 20 import wrapt\r\n---> 22 from tensorflow.python.data.util import nest\r\n 23 from tensorflow.python.framework import composite_tensor\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/data/util/nest.py:34, in <module>\r\n 16 \"\"\"## Functions for working with arbitrarily nested sequences of elements.\r\n 17 \r\n 18 NOTE(mrry): This fork of the `tensorflow.python.util.nest` module\r\n (...)\r\n 31 arrays.\r\n 32 \"\"\"\r\n---> 34 from tensorflow.python.framework import sparse_tensor as _sparse_tensor\r\n 35 from tensorflow.python.util import _pywrap_utils\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/framework/sparse_tensor.py:24, in <module>\r\n 23 from tensorflow.python.framework import composite_tensor\r\n---> 24 from tensorflow.python.framework import constant_op\r\n 25 from tensorflow.python.framework import dtypes\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/framework/constant_op.py:25, in <module>\r\n 24 from tensorflow.python.eager import context\r\n---> 25 from tensorflow.python.eager import execute\r\n 26 from tensorflow.python.framework import dtypes\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/eager/execute.py:21, in <module>\r\n 20 from tensorflow.python.eager import core\r\n---> 21 from tensorflow.python.framework import dtypes\r\n 22 from tensorflow.python.framework import ops\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/tensorflow/python/framework/dtypes.py:34, in <module>\r\n 32 from tensorflow.core.function import trace_type\r\n---> 34 _np_bfloat16 = _pywrap_bfloat16.TF_bfloat16_type()\r\n 37 class DTypeMeta(type(_dtypes.DType), abc.ABCMeta):\r\n\r\nTypeError: Unable to 
convert function return value to a Python type! The signature was\r\n\t() -> handle\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nRuntimeError Traceback (most recent call last)\r\nInput In [11], in <cell line: 2>()\r\n 1 import gradio as gr # UI library\r\n----> 2 from transformers import pipeline\r\n\r\nFile <frozen importlib._bootstrap>:1055, in _handle_fromlist(module, fromlist, import_, recursive)\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py:1089, in _LazyModule.__getattr__(self, name)\r\n 1087 value = self._get_module(name)\r\n 1088 elif name in self._class_to_module.keys():\r\n-> 1089 module = self._get_module(self._class_to_module[name])\r\n 1090 value = getattr(module, name)\r\n 1091 else:\r\n\r\nFile ~/anaconda3/lib/python3.9/site-packages/transformers/utils/import_utils.py:1101, in _LazyModule._get_module(self, module_name)\r\n 1099 return importlib.import_module(\".\" + module_name, self.__name__)\r\n 1100 except Exception as e:\r\n-> 1101 raise RuntimeError(\r\n 1102 f\"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its\"\r\n 1103 f\" traceback):\\n{e}\"\r\n 1104 ) from e\r\n\r\nRuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):\r\nUnable to convert function return value to a Python type! The signature was\r\n\t() -> handle\r\n```\r\n", "@CliffLopes could you open a new issue, including all the information requested in the issue template e.g. running environment etc. ? Just from the traceback, it looks like the issue is coming from the tensorflow installed in the environment. In the issue, please make sure to include any relevant information about tensorflow e.g. version and how it was installed." ]
1,678
1,690
1,678
NONE
null
## Environment info transformers version: '4.26.1' Platform: Databricks the command to import, return the error below ``` from transformers import pipeline ``` ``` RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name '_rename_parameter' from 'scipy._lib._util' (/databricks/python/lib/python3.8/site-packages/scipy/_lib/_util.py) ``` ## Who can help @Narsil full trace of error ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /databricks/python/lib/python3.8/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name) 1109 try: -> 1110 return importlib.import_module("." + module_name, self.__name__) 1111 except Exception as e: /usr/lib/python3.8/importlib/__init__.py in import_module(name, package) 126 level += 1 --> 127 return _bootstrap._gcd_import(name[level:], package, level) 128 /usr/lib/python3.8/importlib/_bootstrap.py in _gcd_import(name, package, level) /usr/lib/python3.8/importlib/_bootstrap.py in _find_and_load(name, import_) /usr/lib/python3.8/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /usr/lib/python3.8/importlib/_bootstrap.py in _load_unlocked(spec) /usr/lib/python3.8/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib/python3.8/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) /databricks/python/lib/python3.8/site-packages/transformers/pipelines/__init__.py in <module> 64 from .depth_estimation import DepthEstimationPipeline ---> 65 from .document_question_answering import DocumentQuestionAnsweringPipeline 66 from .feature_extraction import FeatureExtractionPipeline /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. 
--> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/transformers/pipelines/document_question_answering.py in <module> 28 from .base import PIPELINE_INIT_ARGS, ChunkPipeline ---> 29 from .question_answering import select_starts_ends 30 /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/transformers/pipelines/question_answering.py in <module> 7 ----> 8 from ..data import SquadExample, SquadFeatures, squad_convert_examples_to_features 9 from ..modelcard import ModelCard /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/transformers/data/__init__.py in <module> 29 ) ---> 30 from .metrics import glue_compute_metrics, xnli_compute_metrics 31 from .processors import ( /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. 
--> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/transformers/data/metrics/__init__.py in <module> 21 ---> 22 if is_sklearn_available(): 23 from sklearn.metrics import f1_score, matthews_corrcoef /databricks/python/lib/python3.8/site-packages/transformers/utils/import_utils.py in is_sklearn_available() 563 return False --> 564 return is_scipy_available() and importlib.util.find_spec("sklearn.metrics") 565 /usr/lib/python3.8/importlib/util.py in find_spec(name, package) 93 if parent_name: ---> 94 parent = __import__(parent_name, fromlist=['__path__']) 95 try: /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/mlflow/utils/import_hooks/__init__.py in load_module(self, fullname) 234 try: --> 235 module = self.loader.load_module(fullname) 236 notify_module_loaded(module) /databricks/python_shell/dbruntime/PostImportHook.py in load_module(self, fullname) 215 try: --> 216 module = self.loader.load_module(fullname) 217 notify_module_loaded(module) /databricks/python/lib/python3.8/site-packages/sklearn/__init__.py in <module> 81 from . import __check_build # noqa: F401 ---> 82 from .base import clone 83 from .utils._show_versions import show_versions /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. 
--> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/base.py in <module> 16 from ._config import get_config ---> 17 from .utils import _IS_32BIT 18 from .utils._tags import ( /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/utils/__init__.py in <module> 22 from .murmurhash import murmurhash3_32 ---> 23 from .class_weight import compute_class_weight, compute_sample_weight 24 from . import _joblib /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/utils/class_weight.py in <module> 6 ----> 7 from .validation import _deprecate_positional_args 8 /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/utils/validation.py in <module> 25 ---> 26 from .fixes import _object_dtype_isnan, parse_version 27 from .. import get_config as _get_config /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. 
--> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/sklearn/utils/fixes.py in <module> 19 import scipy ---> 20 import scipy.stats 21 from scipy.sparse.linalg import lsqr as sparse_lsqr # noqa /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/scipy/stats/__init__.py in <module> 484 DegenerateDataWarning, FitError) --> 485 from ._stats_py import * 486 from ._variation import variation /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 /databricks/python/lib/python3.8/site-packages/scipy/stats/_stats_py.py in <module> 40 from scipy.ndimage import _measurements ---> 41 from scipy._lib._util import (check_random_state, MapWrapper, 42 rng_integers, _rename_parameter, _contains_nan) ImportError: cannot import name '_rename_parameter' from 'scipy._lib._util' (/databricks/python/lib/python3.8/site-packages/scipy/_lib/_util.py) The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) <command-632418863495886> in <module> 3 import plotly.express as plx 4 ----> 5 from transformers import pipeline 6 # from transformers import AutoTokenizer /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 160 # Import the desired module. 
If you’re seeing this while debugging a failed import, 161 # look at preceding stack frames for relevant error information. --> 162 original_result = python_builtin_import(name, globals, locals, fromlist, level) 163 164 is_root_import = thread_local._nest_level == 1 /usr/lib/python3.8/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive) /databricks/python/lib/python3.8/site-packages/transformers/utils/import_utils.py in __getattr__(self, name) 1098 value = self._get_module(name) 1099 elif name in self._class_to_module.keys(): -> 1100 module = self._get_module(self._class_to_module[name]) 1101 value = getattr(module, name) 1102 else: /databricks/python/lib/python3.8/site-packages/transformers/utils/import_utils.py in _get_module(self, module_name) 1110 return importlib.import_module("." + module_name, self.__name__) 1111 except Exception as e: -> 1112 raise RuntimeError( 1113 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" 1114 f" traceback):\n{e}" RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name '_rename_parameter' from 'scipy._lib._util' (/databricks/python/lib/python3.8/site-packages/scipy/_lib/_util.py) ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import pipeline ``` ### Expected behavior Successful import of pipeline
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22181/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22180
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22180/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22180/comments
https://api.github.com/repos/huggingface/transformers/issues/22180/events
https://github.com/huggingface/transformers/pull/22180
1,625,374,723
PR_kwDOCUB6oc5MGTX3
22,180
A script to add/update `pipeline_model_mapping` systematically
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Request @sgugger for review, as he is involved in a few related PRs before. If the core maintainers decide to let @amyeroberts to review, I can provide more context (regarding previous PRs) for her to ease the review process.", "_The documentation is not available anymore as the PR was closed or merged._", "I think I should add a test for this new script, just like `tests/repo_utils/test_tests_fetcher.py` that tests the script `utils/tests_fetcher.py`.\r\n\r\nHowever, this script needs file(s) in `tests/model/` and require models being in `transformers`. I might need to see if there is similar cases being done before to implement a test for this new script.", "Hey @ydshieh, thanks for your PR! In which settings would you want to leverage such a script? Would it be when users contribute new models?", "Hi @LysandreJik \r\n\r\nI am thinking it is better to have a CI job that runs this script in a regular basis (as a check, or even open a PR automatically), similar to #22275.\r\nAt some occasions, we (`transformers` members) might want to run it for a particular test file to check/update something quickly.\r\n\r\nI don't expect contributors to run this - fewer steps, less friction and happier for them :-)\r\n", "If it makes your workflow simpler then why not, but I would make it very explicit what this script does and document it (even in the script directly). If I had trouble understanding what it was for/when and who should use it, I figure others will have the same problem :)\r\n\r\nThanks!", "> If it makes your workflow simpler then why not, but I would make it very explicit what this script does and document it (even in the script directly). If I had trouble understanding what it was for/when and who should use it, I figure others will have the same problem :)\r\n> \r\n> Thanks!\r\n\r\nNice point ❤️.I will add some comments in the script. 
Thank you for the feedback.", "@LysandreJik Hope you will ❤️ the added comment added in\r\n\r\nhttps://github.com/huggingface/transformers/pull/22180/commits/87433ee115ba14b509df6386d2feff8e7ad41567\r\n\r\n", "(just rebase on `main`)", "Hi @LysandreJik . A description is added in [this commit](https://github.com/huggingface/transformers/pull/22180/commits/87433ee115ba14b509df6386d2feff8e7ad41567). Let me know if you have any further comment/review(s) :-). Thank you.", "Just fix a few things before merging. Nothing really big.\r\n\r\nI have use this new script to update the attributes in #22606." ]
1,678
1,680
1,680
COLLABORATOR
null
# What does this PR do? This script will ease the process of adding and/or updating `pipeline_model_mapping` in test files in the future, in a systematic way (no manual editing). This is also one part of the process of [tiny model creation/upload + tiny model info update + **test file update**] The basic idea: For a test file: - find a test class - check if `pipeline_model_mapping` is already defined - yes + overwrite is `True`: remove it (more precisely, mark them to be removed, and remove them before writing to file) - compute `pipeline_model_mapping` via the mappings defined in `XXXPipelineTests` classes defined in files in `tests/pipelines/test_xxx.py` - compute the position to which we add `pipeline_model_mapping` - add `pipeline_model_mapping` and write to file Remark: There are (very) few exception cases not handled in this PR. For example, 2 test classes defined in a single test file, like `Blip2ForConditionalGenerationDecoderOnlyTest` and `Blip2ModelTest`. Example usage / demo: ```python python utils\add_pipeline_model_mapping_to_test.py --test_file tests\models\bert\test_modeling_bert.py --overwrite ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22180/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22180", "html_url": "https://github.com/huggingface/transformers/pull/22180", "diff_url": "https://github.com/huggingface/transformers/pull/22180.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22180.patch", "merged_at": 1680797294000 }
https://api.github.com/repos/huggingface/transformers/issues/22179
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22179/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22179/comments
https://api.github.com/repos/huggingface/transformers/issues/22179/events
https://github.com/huggingface/transformers/issues/22179
1,625,205,160
I_kwDOCUB6oc5g3qmo
22,179
When I use Trainer with Deepspeed, the Number of trainable parameters is 0
{ "login": "noob-ctrl", "id": 63763578, "node_id": "MDQ6VXNlcjYzNzYzNTc4", "avatar_url": "https://avatars.githubusercontent.com/u/63763578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noob-ctrl", "html_url": "https://github.com/noob-ctrl", "followers_url": "https://api.github.com/users/noob-ctrl/followers", "following_url": "https://api.github.com/users/noob-ctrl/following{/other_user}", "gists_url": "https://api.github.com/users/noob-ctrl/gists{/gist_id}", "starred_url": "https://api.github.com/users/noob-ctrl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/noob-ctrl/subscriptions", "organizations_url": "https://api.github.com/users/noob-ctrl/orgs", "repos_url": "https://api.github.com/users/noob-ctrl/repos", "events_url": "https://api.github.com/users/noob-ctrl/events{/privacy}", "received_events_url": "https://api.github.com/users/noob-ctrl/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "cc @stas00 ", "Thank you for the report, @noob-ctrl \r\n\r\nPlease let me know if this fix works for you: https://github.com/huggingface/transformers/pull/22193\r\n", "@stas00 Hi, it works now. Thank you!", "Thank you for testing, @noob-ctrl - the PR has been merged." ]
1,678
1,679
1,678
NONE
null
The version information is as follows: - Deepspeed: 0.8.1 - transformers: 4.26.1 ## Problem When I use Trainer with Deepspeed, the number of trainable parameters is 0. Like this: ![image](https://user-images.githubusercontent.com/63763578/225277324-3650bbea-78f7-493a-97a2-3f9ef0bdcd5a.png) And it happens when using zero3. When I use zero2, it does not have this problem.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22179/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22178
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22178/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22178/comments
https://api.github.com/repos/huggingface/transformers/issues/22178/events
https://github.com/huggingface/transformers/issues/22178
1,625,176,878
I_kwDOCUB6oc5g3jsu
22,178
Add BEiTv3
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "I will start working on adding the model !", "If required I can help as well", "hello, I want to test the zero shot Image text retrieval of the model on some images and texts, can you help me ?\r\n", "Is this PR going to be merged soon for Beit3 modules to be available for everyone?" ]
1,678
1,697
null
CONTRIBUTOR
null
### Model description Microsoft just open-sourced BEiTv3: https://github.com/microsoft/unilm/tree/master/beit3 This is a very powerful vision-language model that can be used as backbone for a variety of downstream tasks, from image classification to VQA to object detection. Time to add it to HF Transformers! :) ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/microsoft/unilm/tree/master/beit3
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22178/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22177
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22177/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22177/comments
https://api.github.com/repos/huggingface/transformers/issues/22177/events
https://github.com/huggingface/transformers/pull/22177
1,625,031,840
PR_kwDOCUB6oc5MFIGp
22,177
[`bnb`] Let's make serialization of int8 models possible
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The design is not easy enough to use. If a user saves a quantized model and pushes to the Hub, it should work directly with `from_pretrained`. This is why I insisted that the quantization config should be saved inside the model config. This way you won't need to have the user pass `load_in_8_bit=True`, as you can read it from the config.", "awesome ok, I'll work on that, so if there is a quantized config on the repo we should force-use `device_map=auto` & `load_in_8bit` in this case", "The PR is ready for review @sgugger ! \r\nThis PR is not mergeable before the bnb release of course", "Thanks for the heads up! :D \r\nIt should be much better now! For me the PR is ready for a review now " ]
1,678
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Before this PR, it was not possible to save an 8bit model, or load an 8bit model from the Hub. This PR makes this feature possible. If this PR gets merged, users can upload 8bit models on the Hub and/or load 8bit models from the Hub, hence save 2x memory compared to half-precision models. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0])) >>> Hello my name is Nate, I am a professional photographer and I am a member of the model.save_pretrained("./saved_int8") model = AutoModelForCausalLM.from_pretrained("./saved_int8") outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0])) >>> Hello my name is Nate, I am a professional photographer and I am a member of the ``` Depends on https://github.com/TimDettmers/bitsandbytes/pull/159 Let's put it as draft before I address the last TODOs and open questions & before https://github.com/TimDettmers/bitsandbytes/pull/159 gets merged. ## TODOs and open questions: - ability to push `BitsAndBytesConfig` - Do we want to save the serialized model under the name `pytorch_model.bin` ? I would say yes for simplicity reasons but we need to make sure that a user calls `from_pretrained` with `load_in_8bit`, hence add a warning if there is a `quantization_config.json` on the Hub repo + the user is not passing `load_in_8bit=True`. - Force `load_in_8bit=True` if there is a `quantization_config.json` on the Hub repo? - Update docs - Update warnings - Safety checkers for `bnb` versions - Add a test to check if it works using sharded fp16 weights cc @sgugger I left few open questions, would love to hear your thoughts on these!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22177/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22177/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22177", "html_url": "https://github.com/huggingface/transformers/pull/22177", "diff_url": "https://github.com/huggingface/transformers/pull/22177.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22177.patch", "merged_at": 1681300879000 }
https://api.github.com/repos/huggingface/transformers/issues/22176
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22176/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22176/comments
https://api.github.com/repos/huggingface/transformers/issues/22176/events
https://github.com/huggingface/transformers/issues/22176
1,625,009,662
I_kwDOCUB6oc5g263-
22,176
Deepspeed initialization AttributeError: 'EncoderDecoderConfig' object has no attribute 'hidden_size'
{ "login": "ksopyla", "id": 64201, "node_id": "MDQ6VXNlcjY0MjAx", "avatar_url": "https://avatars.githubusercontent.com/u/64201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ksopyla", "html_url": "https://github.com/ksopyla", "followers_url": "https://api.github.com/users/ksopyla/followers", "following_url": "https://api.github.com/users/ksopyla/following{/other_user}", "gists_url": "https://api.github.com/users/ksopyla/gists{/gist_id}", "starred_url": "https://api.github.com/users/ksopyla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ksopyla/subscriptions", "organizations_url": "https://api.github.com/users/ksopyla/orgs", "repos_url": "https://api.github.com/users/ksopyla/repos", "events_url": "https://api.github.com/users/ksopyla/events{/privacy}", "received_events_url": "https://api.github.com/users/ksopyla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ksopyla Thanks for raising this issue and for giving all the script and environment details. Could you share the full traceback of the error encountered? \r\n\r\nAlthough I'm not immediately sure where the error is being raised, it is expected that the error occurs if `hidden_size` is being references from the model's config i.e. `model.config.hidden_size` as it's only the encoder and decoder configs that have this parameter. ", "HI @amyeroberts I have updated the issue and added the traceback. I hope it helps. \r\nYes, you are right problem occurs when script tries to get ```model.config.hidden_size``` \r\n\r\nI would add to this that the encoder and decoder could have different sizes in terms of the number of layers and hidden_size\r\n", "Thank you for the full traceback, @ksopyla. Now it's easy to support you.\r\n\r\nPlease try again with the latest version of transformers. You can see here that this situation has been dealt with on Feb 10th so this assert shouldn't happen again as it now carefully checks different scenarios:\r\n \r\n https://github.com/huggingface/transformers/blob/1c4a9acc7319221643555c0e8ff1fda2f758c400/src/transformers/deepspeed.py#L179-L213\r\n \r\n However if you don't set `hidden_size` then please don't use `auto` values for zero configuration section. This is what the proper assert in the latest version will tell you to do.\r\n \r\nThis is just an automatic optimization and you can remove these entries completely and deepspeed will use its defaults. Or you can study what those values should be and set them yourself as explained here:\r\nhttps://huggingface.co/docs/transformers/main/main_classes/deepspeed#zero3-config", "Sure, I will check and let you know. 
\r\nMeanwhile, could you explain what you mean by \" then please don't use auto values for zero configuration section.\" \r\nI use Zero2 https://huggingface.co/docs/transformers/main/main_classes/deepspeed#zero2-config not Zero3-config\r\n\r\nI infer you talk about these parameters, which should be set if I use Zero3. \r\n```\r\nhidden_size_based_keys = [\r\n \"zero_optimization.reduce_bucket_size\",\r\n \"zero_optimization.stage3_prefetch_bucket_size\",\r\n \"zero_optimization.stage3_param_persistence_threshold\",\r\n ]\r\n``` \r\nCorrect me if I am wrong. Or maybe I should also set those in Zero2?\r\n\r\n", "ah, ok, thank you for clarifying the situation - that's even simpler then. Just upgrade transformers, change nothing in your setup and it should just work.\r\n\r\nThe original code just did `model.config.hidden_size` regardless of the config type and thus it is failing for you.", "I have updated the transformers to 4.27 and pytorch 2.0 and it works :)\r\nBut I have an issue that Zero-2 is slower than pytorch distributed approach, try to investigate it further. \r\nMeanwhile thank you for your help. \r\n", "Best to discuss a new issue in a new Issue, but if we can wrap it up quickly - it's absolutely normal that the speed will progressively drop as you enable stages 1, 2 and 3, as each stage creates an additional overhead. \r\n\r\nIf you can fit everything into a single GPU do not use Deepspeed. It's a scalability solution for when one can't fit the training or inference components into a single gpu. If you can, always use straight DDP.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info System: Ubuntu 22.04 - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.29 - Python version: 3.8.13 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes, 4x RTX 3090 - Using distributed or parallel set-up in script?: yes, deepseed <details> <summary>packages - Click to expand!</summary> ``` Package Version ------------------------- ------------ absl-py 1.4.0 abydos 0.5.0 accelerate 0.17.0 aiohttp 3.8.4 aiosignal 1.3.1 alembic 1.9.4 antlr4-python3-runtime 4.9.3 anyio 3.6.2 appdirs 1.4.4 argon2-cffi 21.3.0 argon2-cffi-bindings 21.2.0 arrow 1.2.3 astroid 2.14.2 asttokens 2.2.1 async-timeout 4.0.2 attrs 22.2.0 Babel 2.12.1 backcall 0.2.0 beautifulsoup4 4.11.2 black 23.1.0 bleach 6.0.0 cachetools 5.3.0 certifi 2022.12.7 cffi 1.15.1 charset-normalizer 3.0.1 click 8.1.3 clldutils 3.19.0 cloudpickle 2.2.1 codecov 2.0.22 colorama 0.4.6 coloredlogs 10.0 colorlog 6.7.0 comm 0.1.2 contourpy 1.0.7 coverage 5.5 csvw 3.1.3 cycler 0.11.0 Cython 0.29.33 databricks-cli 0.17.4 dataclasses 0.6 datasets 2.10.1 debugpy 1.6.6 decorator 5.1.1 deepspeed 0.8.2 defusedxml 0.7.1 deprecation 2.1.0 dill 0.3.6 docker 6.0.1 docker-pycreds 0.4.0 editdistance 0.6.2 entrypoints 0.4 evaluate 0.4.0 executing 1.2.0 fairseq 0.10.0 fastjsonschema 2.16.3 filelock 3.9.0 Flask 2.2.3 fonttools 4.38.0 fqdn 1.5.1 frozenlist 1.3.3 fsspec 2023.1.0 fuzzywuzzy 0.17.0 gitdb 4.0.10 GitPython 3.1.31 google-auth 2.16.1 google-auth-oauthlib 0.4.6 greenlet 2.0.2 grpcio 1.51.3 gunicorn 20.1.0 hjson 3.1.0 huggingface-hub 0.12.1 humanfriendly 10.0 hydra-core 1.3.2 idna 3.4 importlib-metadata 6.0.0 importlib-resources 5.12.0 ipykernel 6.21.2 ipython 8.11.0 ipython-genutils 0.2.0 ipywidgets 8.0.4 isodate 0.6.1 isoduration 20.11.0 isort 5.12.0 itsdangerous 2.1.2 
jedi 0.18.2 jellyfish 0.7.2 Jinja2 3.1.2 jiwer 2.5.1 jmespath 1.0.1 joblib 1.2.0 jsonlines 1.2.0 jsonpointer 2.3 jsonschema 4.17.3 jupyter 1.0.0 jupyter_client 8.0.3 jupyter-console 6.6.2 jupyter_core 5.2.0 jupyter-events 0.6.3 jupyter_server 2.3.0 jupyter_server_terminals 0.4.4 jupyterlab-pygments 0.2.2 jupyterlab-widgets 3.0.5 kiwisolver 1.4.4 language-tags 1.2.0 latexcodec 2.0.1 lazy-object-proxy 1.9.0 Levenshtein 0.20.2 lightning-utilities 0.7.1 lingpy 2.6.9 lxml 4.9.2 Mako 1.2.4 many-stop-words 0.2.2 Markdown 3.4.1 MarkupSafe 2.1.2 matplotlib 3.7.0 matplotlib-inline 0.1.6 mccabe 0.7.0 mistune 2.0.5 mlflow 1.27.0 more-itertools 9.1.0 multidict 6.0.4 multiprocess 0.70.14 mypy-extensions 1.0.0 nbclassic 0.5.2 nbclient 0.7.2 nbconvert 7.2.9 nbformat 5.7.3 nest-asyncio 1.5.6 networkx 3.0 newick 1.7.0 ninja 1.11.1 nltk 3.8.1 notebook 6.5.2 notebook_shim 0.2.2 numpy 1.24.2 oauthlib 3.2.2 omegaconf 2.3.0 packaging 23.0 pandas 1.5.3 pandocfilters 1.5.0 parso 0.8.3 pathspec 0.11.0 pathtools 0.1.2 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.4.0 pip 23.0.1 pkgutil_resolve_name 1.3.10 platformdirs 3.0.0 pluggy 0.13.1 portalocker 2.7.0 progress 1.6 prometheus-client 0.16.0 prometheus-flask-exporter 0.22.2 prompt-toolkit 3.0.38 protobuf 3.20.3 psutil 5.9.4 ptyprocess 0.7.0 pure-eval 0.2.2 py 1.11.0 py-cpuinfo 9.0.0 pyarrow 11.0.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pybtex 0.24.0 pycldf 1.34.0 pycparser 2.21 pydantic 1.10.6 Pygments 2.14.0 PyJWT 2.6.0 pylatexenc 2.10 pylint 2.16.2 pyparsing 3.0.9 pyrsistent 0.19.3 pytest 5.4.3 pytest-cov 2.8.1 python-dateutil 2.8.2 python-docx 0.8.11 python-frontmatter 1.0.0 python-json-logger 2.0.7 python-Levenshtein 0.12.2 python-nexus 2.9.0 pytorch-lightning 1.8.6 pytz 2022.7.1 pyxDamerauLevenshtein 1.7.1 PyYAML 6.0 pyzmq 25.0.0 qtconsole 5.4.0 QtPy 2.3.0 querystring-parser 1.2.4 rapidfuzz 2.13.7 rdflib 6.2.0 regex 2022.10.31 requests 2.28.2 requests-oauthlib 1.3.1 responses 0.18.0 rfc3339-validator 0.1.4 rfc3986 1.5.0 rfc3986-validator 0.1.1 
rope 0.14.0 rsa 4.9 rapidfuzz 2.13.7 rdflib 6.2.0 regex 2022.10.31 requests 2.28.2 requests-oauthlib 1.3.1 responses 0.18.0 rfc3339-validator 0.1.4 rfc3986 1.5.0 rfc3986-validator 0.1.1 rope 0.14.0 rsa 4.9 sacrebleu 2.3.1 sacremoses 0.0.53 scikit-learn 0.22.2.post1 scipy 1.10.1 seaborn 0.11.2 Send2Trash 1.8.0 sentencepiece 0.1.97 sentry-sdk 1.16.0 setproctitle 1.3.2 setuptools 67.4.0 six 1.16.0 smmap 5.0.0 sniffio 1.3.0 soupsieve 2.4 SQLAlchemy 2.0.4 sqlparse 0.4.3 sru 3.0.0.dev6 stack-data 0.6.2 symspellpy 0.1.0 tabulate 0.9.0 tensorboard 2.12.0 tensorboard-data-server 0.7.0 tensorboard-plugin-wit 1.8.1 tensorboardX 2.6 termcolor 2.2.0 terminado 0.17.1 textdistance 4.5.0 tinycss2 1.2.1 tokenizers 0.13.2 tomli 2.0.1 tomlkit 0.11.6 torch 1.13.1+cu117 torchmetrics 0.11.3 tornado 6.2 tqdm 4.64.1 traitlets 5.9.0 transformers 4.26.1 typing_extensions 4.5.0 Unidecode 1.3.6 uri-template 1.2.0 uritemplate 4.1.1 urllib3 1.26.14 wandb 0.13.10 wcwidth 0.2.6 webcolors 1.12 webencodings 0.5.1 websocket-client 1.5.1 weighted-levenshtein 0.2.2 Werkzeug 2.2.3 wheel 0.38.4 widgetsnbextension 4.0.5 wrapt 1.15.0 xlrd 1.2.0 xxhash 3.2.0 yarl 1.8.2 zipp 3.15.0 ``` </details> ### Who can help? HF Trainer: @stas00, Accelerate: @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction ```cmd deepspeed --num_gpus=4 training_enc_dec_model_from_scratch.py \ --output_dir="./hf_output/" \ --per_device_train_batch_size=128 \ --dataloader_num_workers=8 \ --gradient_accumulation_steps=1 \ --gradient_checkpointing=False \ --fp16 \ --logging_steps=500 \ --eval_steps=5000 \ --save_steps=50000 \ --num_train_epochs=2 \ --learning_rate=0.001 \ --warmup_steps=5000 \ --logging_first_step=True \ --eval_accumulation_steps=100 \ --log_level=warning \ --deepspeed deepspeed_zero2.json ``` deepspeed_zero2.json >> ``` { "wandb": { "enabled": true, "project": "Project" }, "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": 2e-6, "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 3e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 3e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "steps_per_print": 500 } ``` The training script ```python from transformers import ( AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainer, Seq2SeqTrainingArguments, BartForConditionalGeneration, HfArgumentParser, BertConfig, EncoderDecoderConfig, EncoderDecoderModel, BartConfig, BartForConditionalGeneration, ReformerConfig, LEDConfig, LEDForConditionalGeneration, ) from transformers.data.data_collator import DataCollatorForSeq2Seq from encoder_decoder_utils import ( DataCollatorForEncoderDecoder, Seq2SeqTrainerForEncoderDecoder, ) 
import torch import torch.distributed import transformers import datasets import os import sys import logging import socket from datetime import datetime, date logger = logging.getLogger(__name__) if __name__ == "__main__": parser = HfArgumentParser(Seq2SeqTrainingArguments) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: training_args = parser.parse_args_into_dataclasses() # get the first value from tuple, probably lib error training_args = training_args[0] # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) logger.info(f"Training/evaluation parameters {training_args}") experiment_name = f"hf_enc_dec_custom" # just for loading tokenizer model_name = "allegro/herbert-base-cased" # %% define training parameters batch_size = training_args.per_device_train_batch_size output_dir = f"{training_args.output_dir}/" path_datatime = datetime.now().strftime("%Y_%m_%d-%I_%M_%S") training_args.run_name = f"{experiment_name}-{model_name}-{path_datatime}" training_args.predict_with_generate = True training_args.do_train = True training_args.do_eval = True training_args.evaluation_strategy = ( 
transformers.trainer_utils.IntervalStrategy.STEPS ) training_args.logging_strategy = ( transformers.trainer_utils.IntervalStrategy.STEPS ) # "steps" training_args.save_total_limit = 5 training_args.seed = 123 training_args.report_to = ["wandb"] logger.info(f"After set new values Training/evaluation parameters {training_args}") #! data local machine data_file = "gec_data_file.jsonl" # 1M json line file # gec_data_file.jsonl content: # {"correct": "Ciasne, koronkowe, podniecające.", "incorrect": "Ciasne, koronkowe, podniwcające."} # {"correct": "Ślinka cieknie, serce rwie żebra, szabla w dłoń.", "incorrect": "Ślinka ciekni4, srve rwie żebra, sszabla w dloń."} num_proc = 1 data_file = os.path.abspath(data_file) dataset_name = os.path.basename(data_file) hf_cache_dir = f"{training_args.output_dir}/{experiment_name}/data/{dataset_name}/" hf_cache_dir = os.path.abspath(hf_cache_dir) output_dir = f"{training_args.output_dir}/{experiment_name}/{dataset_name}/{path_datatime}" training_args.output_dir = f"{output_dir}/checkpoints/" dataset = datasets.load_dataset( "json", data_files=data_file, cache_dir=hf_cache_dir, ) test_size = 2000 train_size = len(dataset) - test_size dataset = dataset.train_test_split(test_size=test_size, seed=123) train_data = dataset["train"] train_data = train_data.select(range(train_size)) val_data = dataset["test"] logger.info(f"\n\n*********\nTrain={len(train_data)} val={len(val_data)}") # %% # %% # %% load tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.model_max_length = 512 tokenizer.bos_token = tokenizer.cls_token tokenizer.eos_token = tokenizer.sep_token # %% initialize the Model # all the parameters could be found here # https://huggingface.co/docs/transformers/v4.26.1/en/model_doc/bert#transformers.BertConfig config_encoder = BertConfig() config_decoder = BertConfig() config_encoder.hidden_size = 512 config_encoder.num_hidden_layers = 2 config_encoder.num_attention_heads = 4 config_encoder.intermediate_size = 1024 
config_encoder.decoder_start_token_id = tokenizer.cls_token_id config_encoder.bos_token_id = tokenizer.bos_token_id config_encoder.eos_token_id = tokenizer.sep_token_id config_encoder.pad_token_id = tokenizer.pad_token_id config_encoder.vocab_size = tokenizer.vocab_size config_decoder.hidden_size = 512 config_decoder.intermediate_size = 1024 config_decoder.num_hidden_layers = 2 config_decoder.num_attention_heads = 4 config_decoder.is_decoder = True config_decoder.add_cross_attention = True config_decoder.decoder_start_token_id = tokenizer.cls_token_id config_decoder.bos_token_id = tokenizer.bos_token_id config_decoder.eos_token_id = tokenizer.sep_token_id config_decoder.pad_token_id = tokenizer.pad_token_id config_decoder.vocab_size = tokenizer.vocab_size config = EncoderDecoderConfig.from_encoder_decoder_configs( config_encoder, config_decoder ) # https://huggingface.co/blog/how-to-generate config.max_length = 512 config.min_length = 0 config.no_repeat_ngram_size = 3 config.early_stopping = True config.length_penalty = 2.0 config.num_beams = 5 # config.tie_word_embeddings = True config.tie_encoder_decoder = False config.decoder_start_token_id = tokenizer.cls_token_id config.eos_token_id = tokenizer.sep_token_id config.pad_token_id = tokenizer.pad_token_id config.vocab_size = config.encoder.vocab_size enc_dec = EncoderDecoderModel(config=config) model_file_name = f"{model_name}-custom" # Saving the model, including its configuration enc_dec.save_pretrained(model_file_name) # loading model and config from pretrained folder encoder_decoder_config = EncoderDecoderConfig.from_pretrained(model_file_name) model = EncoderDecoderModel.from_pretrained( model_file_name, config=encoder_decoder_config ) # set the wandb project where this run will be logged os.environ["WANDB_PROJECT"] = "Project" # save your trained model checkpoint to wandb os.environ["WANDB_LOG_MODEL"] = "false" # turn off watch to log faster os.environ["WANDB_WATCH"] = "false" logger.info(f"\n\nNum Params: 
{model_size}") # %%### process data, tokenize and prepare for training logger.info(f"process train data (tokenization)") def process_data_to_model_inputs(batch, max_len=512): """map function for transformation text to ids, tokenize the inputs and labels """ # Tokenizer will automatically set [BOS] <text> [EOS] inputs = batch["incorrect"] targets = batch["correct"] # tokenize the inputs and labels # without padding, the data collator will pad model_inputs = tokenizer(inputs, max_length=max_len, truncation=True) labels = tokenizer(text_target=targets, max_length=512, truncation=True) model_inputs["labels"] = labels.input_ids return model_inputs process_batch = 5000 train_data_tok = train_data.map( process_data_to_model_inputs, batched=True, batch_size=process_batch, remove_columns=["incorrect", "correct"], num_proc=num_proc ) logger.info(f"process val data (tokenization)") val_data_tok = val_data.map( process_data_to_model_inputs, batched=True, batch_size=process_batch, remove_columns=["incorrect", "correct"], num_proc=num_proc, cache_file_name=f"{hf_cache_dir}/val_mapped_{test_size}.arrow", # keep_in_memory=True ) del train_data del val_data del dataset logger.info(f"done process data (tokenization)") data_collator = DataCollatorForSeq2Seq( tokenizer=tokenizer, model=model, max_length=512, pad_to_multiple_of=8 ) trainer = Seq2SeqTrainerForEncoderDecoder( args=training_args, model=model, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=None, train_dataset=train_data_tok, eval_dataset=val_data_tok, ) logger.info(f"start training") trainer.train() # %% trainer.save_model(f"{output_dir}/final") ``` ### Expected behavior Start traning without error ### Traceback ``` [2023-03-16 05:54:45,026] [INFO] [launch.py:142:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]} [2023-03-16 05:54:45,026] [INFO] [launch.py:148:main] nnodes=1, num_local_procs=4, node_rank=0 [2023-03-16 05:54:45,026] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 
'list'>, {'localhost': [0, 1, 2, 3]}) [2023-03-16 05:54:45,026] [INFO] [launch.py:162:main] dist_world_size=4 [2023-03-16 05:54:45,026] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3 [2023-03-16 05:54:48,465] [INFO] [comm.py:661:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl 03/16/2023 05:54:49 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1distributed training: True, 16-bits training: True 03/16/2023 05:54:49 - WARNING - __main__ - Process rank: 3, device: cuda:3, n_gpu: 1distributed training: True, 16-bits training: True 03/16/2023 05:54:49 - WARNING - __main__ - Process rank: 2, device: cuda:2, n_gpu: 1distributed training: True, 16-bits training: True 03/16/2023 05:54:49 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1distributed training: True, 16-bits training: True 03/16/2023 05:54:50 - WARNING - datasets.builder - Found cached dataset json (/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 0%| | 0/1 [00:00<?, ?it/s] 03/16/2023 05:54:50 - WARNING - datasets.builder - Found cached dataset json (/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 0%| | 0/1 [00:00<?, ?it/s] 03/16/2023 05:54:50 - WARNING - datasets.builder - Found cached dataset json (/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 0%| | 0/1 [00:00<?, ?it/s] 03/16/2023 05:54:50 - WARNING - datasets.builder - Found cached dataset json (/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) 
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.90it/s]100%| ████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.90it/s]100%| ████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.89it/s]100%| ████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.87it/s] 03/16/2023 05:54:51 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-1488b483c5004ed7_*_of_00008.arrow 03/16/2023 05:54:51 - WARNING - datasets.arrow_dataset - Loading cached split indices for dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d86219c9d32c5215.arrow and /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-b4b18a39600bbc9f.arrow 03/16/2023 05:54:55 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/train_mapped_29636272_*_of_00008.arrow 03/16/2023 05:54:56 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/datagec_data_file.jsonl/val_mapped_10000_*_of_00008.arrow Loading results from main process Traceback (most recent 
call last): File "playground/hf_transformers/training_enc_dec_model_from_scratch.py", line 458, in <module> trainer.train() File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1543, in train return inner_training_loop( File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1612, in _inner_training_loop deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/deepspeed.py", line 312, in deepspeed_init hf_deepspeed_config.trainer_config_finalize(args, model, num_training_steps) File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/deepspeed.py", line 174, in trainer_config_finalize hidden_size = model.config.hidden_size File "/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/lib/python3.8/site-packages/transformers/configuration_utils.py", line 260, in __getattribute__ return super().__getattribute__(key) AttributeError: 'EncoderDecoderConfig' object has no attribute 'hidden_size' 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-1488b483c5004ed7_*_of_00008.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-1488b483c5004ed7_*_of_00008.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at 
/home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-1488b483c5004ed7_*_of_00008.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached split indices for dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d86219c9d32c5215.arrow and /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-b4b18a39600bbc9f.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached split indices for dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d86219c9d32c5215.arrow and /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-b4b18a39600bbc9f.arrow 03/16/2023 05:54:58 - WARNING - datasets.arrow_dataset - Loading cached split indices for dataset at /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/datagec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-d86219c9d32c5215.arrow and /home/ksopyla/dev/ml/hf_output/hf_enc_dec_custom/data/gec_data_file.jsonl/json/default-6447d29028c8f08e/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-b4b18a39600bbc9f.arrow [2023-03-16 05:54:59,072] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 526261 [2023-03-16 05:54:59,073] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 526262 [2023-03-16 05:54:59,218] [INFO] [launch.py:318:sigkill_handler] 
Killing subprocess 526263 [2023-03-16 05:54:59,362] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 526264 [2023-03-16 05:54:59,545] [ERROR] [launch.py:324:sigkill_handler] ['/home/ksopyla/.cache/pypoetry/virtualenvs/ml-A9X51t2i-py3.8/bin/python', '-u', 'playground/hf_transformers/training_enc_dec_model_from_scratch.py', '--local_rank=3', '--output_dir=./hf_output/', '--per_device_train_batch_size=128', '--dataloader_num_workers=8', '--gradient_accumulation_steps=1', '--gradient_checkpointing=False', '--fp16', '--logging_steps=500', '--eval_steps=5000', '--save_steps=50000', '--num_train_epochs=2', '--learning_rate=0.001', '--warmup_steps=5000', '--logging_first_step=True', '--eval_accumulation_steps=100', '--deepspeed', 'playground/hf_transformers/deepspeed_zero2.json', '--log_level=warning'] exits with return code = 1 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22176/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22175
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22175/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22175/comments
https://api.github.com/repos/huggingface/transformers/issues/22175/events
https://github.com/huggingface/transformers/issues/22175
1,625,003,876
I_kwDOCUB6oc5g25dk
22,175
wav2vec processor batching logic is too restrictive
{ "login": "LWprogramming", "id": 13173037, "node_id": "MDQ6VXNlcjEzMTczMDM3", "avatar_url": "https://avatars.githubusercontent.com/u/13173037?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LWprogramming", "html_url": "https://github.com/LWprogramming", "followers_url": "https://api.github.com/users/LWprogramming/followers", "following_url": "https://api.github.com/users/LWprogramming/following{/other_user}", "gists_url": "https://api.github.com/users/LWprogramming/gists{/gist_id}", "starred_url": "https://api.github.com/users/LWprogramming/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LWprogramming/subscriptions", "organizations_url": "https://api.github.com/users/LWprogramming/orgs", "repos_url": "https://api.github.com/users/LWprogramming/repos", "events_url": "https://api.github.com/users/LWprogramming/events{/privacy}", "received_events_url": "https://api.github.com/users/LWprogramming/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi @ArthurZucker ", "Hey @LWprogramming! Thanks for the comprehensive issue description - I agree that the logic for checking if the input `is_batched` is broken when the input is a batched numpy array, e.g. the feature extractor **should** set `is_batched=True` when the numpy array is 2-d, but currently does not:\r\nhttps://github.com/huggingface/transformers/blob/57f25f4b7fb85ff069f8701372710b2a3207bf2d/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L184-L187\r\n\r\nWould you like to open a PR to fix this? 🤗 We can just do one additional check to set `is_batched = True` if the input is a 2-d numpy array. Note that it should be 2-d with dims [batch, audio_input] and not 3-d since we only expect mono channel input to the feature extractor.", "Hey @LWprogramming! Just checking-in to see whether you'd like to open a PR to fix the issue you uncovered? Think you're in a good position to submit a clean fix! 🤗", "Hi! I'll take care of it, got preoccupied with some irl stuff that came up the past few weeks but things should be settling down soon :)", "That's awesome @LWprogramming! Excited for the PR 🤗 Feel free to tag me as soon as it's ready and I'll get you a review", "marking as still active, just fixing up the PR" ]
1,678
1,684
1,684
CONTRIBUTOR
null
### System Info transformers version at the time of writing is `4.26.1` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python # !pip install transformers torch # in jupyter notebook from transformers import Wav2Vec2Processor import torch import numpy as np batch = 4 # create Wav2Vec2Processor processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft") # generate random input tensor input_tensor = torch.tensor(np.random.randn(batch, 10, 10)) # pass input tensor through processor output = processor(input_tensor, return_tensors="pt") print(output["input_values"].shape) # 1 x 4 x 10 x 10 ``` ### Expected behavior It seems reasonable that an input could be of shape `batch x d_1 x d_2 ...` and I'd expect the output to have the same shape. However, [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L184) the code has an extra check for type list or tuple that results in it misinterpreting the input as a single example. Side note: I'm unsure what to infer from the type checking logic because it doesn't match the type hints i.e. `tuple` isn't supposed to be possible here anyways, according to the `__call__` type hint. I did check some other examples of `is_batched` appearing in the `src/transformers/models` directory and they look similar but unexpected.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22175/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22174
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22174/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22174/comments
https://api.github.com/repos/huggingface/transformers/issues/22174/events
https://github.com/huggingface/transformers/pull/22174
1,624,972,604
PR_kwDOCUB6oc5ME7ZA
22,174
[WIP] Add codegeex
{ "login": "yyz218", "id": 104395647, "node_id": "U_kgDOBjjzfw", "avatar_url": "https://avatars.githubusercontent.com/u/104395647?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yyz218", "html_url": "https://github.com/yyz218", "followers_url": "https://api.github.com/users/yyz218/followers", "following_url": "https://api.github.com/users/yyz218/following{/other_user}", "gists_url": "https://api.github.com/users/yyz218/gists{/gist_id}", "starred_url": "https://api.github.com/users/yyz218/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yyz218/subscriptions", "organizations_url": "https://api.github.com/users/yyz218/orgs", "repos_url": "https://api.github.com/users/yyz218/repos", "events_url": "https://api.github.com/users/yyz218/events{/privacy}", "received_events_url": "https://api.github.com/users/yyz218/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,678
1,678
1,678
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22174/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22174", "html_url": "https://github.com/huggingface/transformers/pull/22174", "diff_url": "https://github.com/huggingface/transformers/pull/22174.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22174.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22173
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22173/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22173/comments
https://api.github.com/repos/huggingface/transformers/issues/22173/events
https://github.com/huggingface/transformers/pull/22173
1,624,933,139
PR_kwDOCUB6oc5MEy3G
22,173
Fix DeiT Masked Image Modeling output
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22173). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Fixes the output of `DeiTForMaskedImageModeling` and `TFDeiTForMaskedImageModeling` by replacing the inaccurate `MaskedLMOutput` with the `MaskedImageCompletionOutput` class. Follow-up PR on #22152 ## Before submitting - [ X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22173/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22173", "html_url": "https://github.com/huggingface/transformers/pull/22173", "diff_url": "https://github.com/huggingface/transformers/pull/22173.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22173.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22172
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22172/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22172/comments
https://api.github.com/repos/huggingface/transformers/issues/22172/events
https://github.com/huggingface/transformers/issues/22172
1,624,913,128
I_kwDOCUB6oc5g2jTo
22,172
How to save the model after using checkpoint to continue training
{ "login": "yung1231", "id": 48431284, "node_id": "MDQ6VXNlcjQ4NDMxMjg0", "avatar_url": "https://avatars.githubusercontent.com/u/48431284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yung1231", "html_url": "https://github.com/yung1231", "followers_url": "https://api.github.com/users/yung1231/followers", "following_url": "https://api.github.com/users/yung1231/following{/other_user}", "gists_url": "https://api.github.com/users/yung1231/gists{/gist_id}", "starred_url": "https://api.github.com/users/yung1231/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yung1231/subscriptions", "organizations_url": "https://api.github.com/users/yung1231/orgs", "repos_url": "https://api.github.com/users/yung1231/repos", "events_url": "https://api.github.com/users/yung1231/events{/privacy}", "received_events_url": "https://api.github.com/users/yung1231/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like you accidentally deleted the best checkpoint. To fix this and be able to resume training, I'd advise to manually modify the `training_state` (which should be stored in a file named `trainer_state.json` in the checkpoint-70000 folder) and remove the key for `best_model_checkpoint`.", "I'm having the same problem.\r\n\r\nFor me it was because I cloned the repository to a different path (of a different machine).\r\n\r\nSince `best_model_checkpoint` contains an absolute path it cannot find the checkpoint at that path.\r\n\r\nI fixed it by manually editing `best_model_checkpoint` from `trainer_state.json`.\r\n\r\nIs there a way to store a relative path instead of an absolute one?" ]
1,678
1,685
1,679
NONE
null
I am trying to continue training my model (gpt2) from a checkpoint. However, the error occurred in the trainer area when I finished training. I save a checkpoint for every 5000 steps, but because there is not enough space, the previous checkpoint will be deleted. The last checkpoint I used was `checkpoint-70000`, but when I saved it, I had to find `checkpoint-35000`, but I had already deleted `checkpoint-35000` Then how can I save the final trained model? ``` training_args = TrainingArguments( output_dir=model_checkpoints_dir, # The directory where the model checkpoints and other output files will be saved. num_train_epochs=5, # The total number of training epochs to run. per_device_train_batch_size=64, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_steps=200, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir=model_log_dir, # directory for storing logs prediction_loss_only=True, save_steps=5000, logging_steps=5000, evaluation_strategy="steps", save_strategy="steps", load_best_model_at_end=True ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above data_collator=data_collator, train_dataset=tokenized_train_dataset, # training dataset eval_dataset=tokenized_eval_dataset ) trainer.train(resume_from_checkpoint = True) ``` error message ``` /usr/local/lib/python3.9/dist-packages/transformers/trainer.py in _sorted_checkpoints(self, output_dir, checkpoint_prefix, use_mtime) 2758 # Make sure we don't delete the best model. 
2759 if self.state.best_model_checkpoint is not None: -> 2760 best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint))) 2761 for i in range(best_model_index, len(checkpoints_sorted) - 2): 2762 checkpoints_sorted[i], checkpoints_sorted[i + 1] = checkpoints_sorted[i + 1], checkpoints_sorted[i] ValueError: '/content/drive/MyDrive/exp/model_checkpoints/checkpoint-35000' is not in list ``` Thanks a lot for the help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22172/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22171
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22171/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22171/comments
https://api.github.com/repos/huggingface/transformers/issues/22171/events
https://github.com/huggingface/transformers/issues/22171
1,624,894,882
I_kwDOCUB6oc5g2e2i
22,171
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=0']
{ "login": "bestpredicts", "id": 12403152, "node_id": "MDQ6VXNlcjEyNDAzMTUy", "avatar_url": "https://avatars.githubusercontent.com/u/12403152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bestpredicts", "html_url": "https://github.com/bestpredicts", "followers_url": "https://api.github.com/users/bestpredicts/followers", "following_url": "https://api.github.com/users/bestpredicts/following{/other_user}", "gists_url": "https://api.github.com/users/bestpredicts/gists{/gist_id}", "starred_url": "https://api.github.com/users/bestpredicts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bestpredicts/subscriptions", "organizations_url": "https://api.github.com/users/bestpredicts/orgs", "repos_url": "https://api.github.com/users/bestpredicts/repos", "events_url": "https://api.github.com/users/bestpredicts/events{/privacy}", "received_events_url": "https://api.github.com/users/bestpredicts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @bestpredicts, thanks for raising this issue. \r\n\r\nI can confirm that I see the same error with the most recent version of transformers and pytorch 2. I wasn't able to replicate the issue with pytorch 1.13.1 and the same transformers version.\r\n\r\nFollowing the messages in the shared error output, if I set `LOCAL_RANK` in my environment and pass in `--use-env` I am able to run on pytorch 2.\r\n```\r\nLOCAL_RANK=0,1 CUDA_VISIBLE_DEVICES=0,1 \\\r\npython -m torch.distributed.launch --nproc_per_node 2 --use-env examples/pytorch/language-modeling/run_clm.py \\\r\n--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \\\r\n--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200\r\n```", "Also note that `torch.distributed.launch` is deprecated and `torchrun` is preferred in PyTorch 2.0.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Does anyone solved this problem? I got same problem when use torchrun or torch.distributed.launch, the self.local_rank is -1. my env is pytorch==2.0.0 and transorformers=4.30.1.", "You might try migrating to torchrun? i.e.:\r\n```\r\ntorchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \\\r\n--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \\\r\n--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200\r\n```\r\nfor reference on migrating:\r\nhttps://pytorch.org/docs/stable/elastic/run.html", "Have you solve your problems? I came up with the same error when using deepspeed. Solutions provided above didn't work at all. 
:(", "> 另请注意,它`torch.distributed.launch`已被弃用,并且`torchrun`在 PyTorch 2.0 中是首选。\r\n\r\nThanks for this tip.", "watching", "Print from `sys.argv`:\r\n\r\n```sh\r\n['train.py', '--local-rank=0', '--model_name_or_path', './checkpoints/vicuna-7b-v1.5', ...]\r\n```\r\n\r\nother arguments have the format 'key', 'value', but `locak_rank` is not properly parsed. In the above example, `local_rank=0` is treated as a whole.\r\nI think this may be something wrong with `torch.distributed.launch`, since it appends `local_rank=0` to the arguments list, but the appended argument can not be properly parsed by `HFArgumentParser`.\r\n\r\nSo use `torchrun` and use `--use-env` which uses environment variable `LOCAL_RANK` but not arguments `--local_rank` is an optional solution.\r\n\r\nA hack fix can add this before `parse_args_into_dataclasses()`\r\n\r\n```python\r\nimport sys\r\nfor arg in sys.argv:\r\n if arg.startswith(\"--local-rank=\"):\r\n rank = arg.split(\"=\")[1]\r\n sys.argv.remove(arg)\r\n sys.argv.append('--local_rank')\r\n sys.argv.append(rank)\r\n```", "i have this problem\r\n\r\nValueError: Some specified arguments are not used by the HfArgumentParser: ['-f', '/root/.local/share/jupyter/runtime/kernel-8d0db21b-3ec1-4b17-987c-be497d81b3c5.json']\r\n![image](https://github.com/huggingface/transformers/assets/108888294/b74062a7-d3ee-4c10-81bc-7cd263b0ed52)\r\n", "> You might try migrating to torchrun? i.e.:\r\n> \r\n> ```\r\n> torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \\\r\n> --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \\\r\n> --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200\r\n> ```\r\n> \r\n> for reference on migrating: https://pytorch.org/docs/stable/elastic/run.html\r\n\r\nthanks, it is ok for me", "can it run on colab \r\ni can't do that\r\n" ]
1,678
1,704
1,682
NONE
null
### System Info transformers version 4.7 , pytorch2.0, python3.9 run the example code in document of transformers ```shell rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 ``` error info ```shell /nfs/v100-022/anaconda3/lib/python3.9/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use-env is set by default in torchrun. If your script expects `--local-rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions warnings.warn( WARNING:torch.distributed.run: ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. ***************************************** Traceback (most recent call last): File "/nfs/v100-022/run_clm.py", line 772, in <module> main() File "/nfs/v100-022/run_clm.py", line 406, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/nfs/v100-022//anaconda3/lib/python3.9/site-packages/transformers/hf_argparser.py", line 341, in parse_args_into_dataclasses raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}") ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=0'] ``` ### Who can help? 
_No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1.Install the following configuration environment: python 3.9 pytroch 2.1 dev trasnsformers 4.7 2. then run code ``` rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 ``` 3. then you can get error. ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=0'] ### Expected behavior 1.Install the following configuration environment: python 3.9 pytroch 2.1 dev trasnsformers 4.7 2. then run code ``` rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \ python -m torch.distributed.launch --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 ``` 3. then you can get error. ValueError: Some specified arguments are not used by the HfArgumentParser: ['--local-rank=0']
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22171/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22170
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22170/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22170/comments
https://api.github.com/repos/huggingface/transformers/issues/22170/events
https://github.com/huggingface/transformers/issues/22170
1,624,662,292
I_kwDOCUB6oc5g1mEU
22,170
resume_from_checkpoint is not working with Deepspeed
{ "login": "Raibows", "id": 37944786, "node_id": "MDQ6VXNlcjM3OTQ0Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/37944786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Raibows", "html_url": "https://github.com/Raibows", "followers_url": "https://api.github.com/users/Raibows/followers", "following_url": "https://api.github.com/users/Raibows/following{/other_user}", "gists_url": "https://api.github.com/users/Raibows/gists{/gist_id}", "starred_url": "https://api.github.com/users/Raibows/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Raibows/subscriptions", "organizations_url": "https://api.github.com/users/Raibows/orgs", "repos_url": "https://api.github.com/users/Raibows/repos", "events_url": "https://api.github.com/users/Raibows/events{/privacy}", "received_events_url": "https://api.github.com/users/Raibows/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Raibows, you're giving me no reproduction so there is nothing I can do here as i have no idea what you did.\r\n\r\nthere is no need for tag, deepspeed's `save_checkpoint` creates a `latest` file and uses that to find the checkpoint for resume.\r\n\r\nI can send you to a test that validates the resume works - give it a try:\r\n\r\nhttps://github.com/huggingface/transformers/blob/f7329751fe5c43365751951502c00df5a4654359/tests/deepspeed/test_deepspeed.py#L636-L691\r\n\r\nTo run this test do:\r\n```\r\nRUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py -k test_can_resume_training_normal\r\n```\r\n\r\nor is it something specific to `save_strategy = 'epoch'`? I have only used the default strategy - can you change my test to reproduce your Issue?", "Hi, thanks for your reply. \r\n\r\nBut actually I don't have any \"latest\" file in the ``output_dir``. Here is the screenshot: \r\n![image](https://user-images.githubusercontent.com/37944786/225226161-147b8002-4192-4dea-98dc-eaaefed15a57.png)\r\nAnd in every checkpoint-xxx directory, we have\r\n![image](https://user-images.githubusercontent.com/37944786/225226325-a96d00bc-f991-405f-b3f5-d6bf8e8bc9c3.png)\r\nIn the globalstepxxx directory, we have\r\n![image](https://user-images.githubusercontent.com/37944786/225226642-bd703406-5b7a-4e2d-872e-96d4b05f813c.png)\r\n\r\nIf I pass ``resume_from_checkpoint = output_dir/checkpoint-xxx``, it will throw the error I mentioned.\r\n\r\nThanks for your test scripts. I will try it later.\r\n", "I totally believe you that this is the case. But I don't have access to your computer. So if there is a bug I need to be able to reproduce it. which means that ideally you'd send a small script that shows the problem.\r\n\r\nAs I suggested perhaps you could adapt the test I sent to you to your particular situation and use it as the reproduction that demonstrates the problem.", "Hi, sorry for the late response. I test many times and find it very weird. Now the latest file exists. 
\r\n\r\nBut \"zero_pp_rank_x_mp_rank_00_optim_states.pt\" this file has some problems in saving.\r\n\r\nI have posted a gist in https://gist.github.com/Raibows/73c3a6105c0226669910d5608f5efb4e\r\n\r\nIf you set the num of training samples to very few, which indicates that the ``save_checkpoint`` will be executed very soon after running. All the ckpts are saved very well.\r\n\r\nHowever, if you let it run for a longer time. it will only save 1 ckpt \"zero_pp_rank_0_mp_rank_00_optim_states.pt\"\r\nno other \"zero_pp_rank_1_mp_rank_00_optim_states.pt, zero_pp_rank_2_mp_rank_00_optim_states.pt\" ...... which should be saved. This will cause the fatal error when you are trying to resume from them.\r\n\r\nComment out L59 in the gist and run it with\r\n\r\n```\r\ntorchrun --nproc_per_node 4 test_save.py\r\n```", "I'm not sure if you had the same issue, but when I tried to resume a deepspeed run, it would try to load the right checkpoint but fail to find a `pytorch_model.bin` file. So I just ran the `zero_to_fp32.py` script to create the checkpoint and resuming with deepspeed just worked, it loaded the optimizer states / model states from the `global_stepXXX/` folder.\r\n\r\nI'm on transformers version `4.27.1`", "@Raibows, thank you for providing an easy to use repro - you can use `model_name = 'patrickvonplaten/t5-tiny-random'` while debugging this as it'd be much faster and not require many resources.\r\n\r\nI did run it for a bit and had no problems on 2 gpus.\r\n\r\nAs we are only integrating Deepspeed and the call to `save_checkpoint` is done correctly I think - you probably will have a better luck asking directly at https://github.com/microsoft/DeepSpeed/issues while providing your repro script.\r\n\r\nYou can validate that the integration is calling it on all ranks:\r\n\r\nhttps://github.com/huggingface/transformers/blob/60d51ef5123d949fd8c59cd4d3254e711541d278/src/transformers/trainer.py#L2297-L2300\r\n\r\nIf you'd like to debug this yourself, I'd add a debug 
print that would include a rank `self.args.local_rank` - so that you'd want to see that each rank calls this deepspeed method. If it gets called on all ranks for each save, then you definitely have to take it up to the Deepspeed team. If it doesn't, which I doubt, but who knows - do get back to me.\r\n\r\nHonestly, I have seen some reports in the past where users had some weird filesystem issues where files would not appear. Is it your personal computer that you're running this one, or some particular cloud?", "@stas00 Hi, really thanks for your help! \r\n\r\nNow I find the reason, finally. It's my own code's fault. Since I use time-based as the path of output directory. However, we use distributed launch to launch the script which causes each process will have a little bit different path of output directory.\r\n\r\nI'm going to close this issue. Thanks!", "Glad you figured it out, @Raibows!\r\n\r\nThat's why we have unit tests that help us know whether the feature is working correctly and when it doesn't for a user often it has to do with some peculiarity of user's code." ]
1,678
1,679
1,679
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-139-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: true - Using distributed or parallel set-up in script?: true ### Who can help? @stas00 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. trainer with deepspeed using stage_2 or 3 (I think it does not matter) 2. set save_strategy = 'epoch', i.e., save every epoch 3. you cannot use ``resume_from_checkpoint`` to resume the training procedure 4. why? in ``transformers/deepspeed.py/L359``, ``deepspeed_engine.load_checkpoint`` actually needs an argument called ``tag`` or you need have a "latest file" in the checkpoint directory. However, neither of them are supported by trainer. The trainer does not provide a chance to pass ``tag`` , and does not store a "latest file" in the checkpoint directory. 5. related to ``deepspeed/runtime/engine.py/L2712`` ### Expected behavior It should work well as passed ``resume_from_checkpoint``.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22170/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22169
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22169/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22169/comments
https://api.github.com/repos/huggingface/transformers/issues/22169/events
https://github.com/huggingface/transformers/issues/22169
1,624,558,030
I_kwDOCUB6oc5g1MnO
22,169
Wrong "view source" links on main docs
{ "login": "gau-nernst", "id": 26946864, "node_id": "MDQ6VXNlcjI2OTQ2ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gau-nernst", "html_url": "https://github.com/gau-nernst", "followers_url": "https://api.github.com/users/gau-nernst/followers", "following_url": "https://api.github.com/users/gau-nernst/following{/other_user}", "gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}", "starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions", "organizations_url": "https://api.github.com/users/gau-nernst/orgs", "repos_url": "https://api.github.com/users/gau-nernst/repos", "events_url": "https://api.github.com/users/gau-nernst/events{/privacy}", "received_events_url": "https://api.github.com/users/gau-nernst/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @stevhliu ", "The tag will be out in a couple of hours. The doc for the v4.27.0 release was pushed last night and the rest of the release will follow this morning. Thanks for catching this so quickly!" ]
1,678
1,679
1,679
CONTRIBUTOR
null
### System Info NA ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When viewing the latest docs, "view source" link expansion leads to an invalid GitHub link. e.g. https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertConfig View source link: https://github.com/huggingface/transformers/blob/v4.27.0/src/transformers/models/bert/configuration_bert.py#L72 Since `v4.27.0` tag does not exist, GitHub reports an invalid link. It should be `main` instead. I believe this is a problem of configuring the auto-generate docs. ### Expected behavior Show correct link to source code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22169/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22168
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22168/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22168/comments
https://api.github.com/repos/huggingface/transformers/issues/22168/events
https://github.com/huggingface/transformers/issues/22168
1,624,460,056
I_kwDOCUB6oc5g00sY
22,168
(Not So) Bad words list for text generation
{ "login": "iiglesias-asapp", "id": 108540116, "node_id": "U_kgDOBngw1A", "avatar_url": "https://avatars.githubusercontent.com/u/108540116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iiglesias-asapp", "html_url": "https://github.com/iiglesias-asapp", "followers_url": "https://api.github.com/users/iiglesias-asapp/followers", "following_url": "https://api.github.com/users/iiglesias-asapp/following{/other_user}", "gists_url": "https://api.github.com/users/iiglesias-asapp/gists{/gist_id}", "starred_url": "https://api.github.com/users/iiglesias-asapp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iiglesias-asapp/subscriptions", "organizations_url": "https://api.github.com/users/iiglesias-asapp/orgs", "repos_url": "https://api.github.com/users/iiglesias-asapp/repos", "events_url": "https://api.github.com/users/iiglesias-asapp/events{/privacy}", "received_events_url": "https://api.github.com/users/iiglesias-asapp/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "cc @gante ", "Hey @iiglesias-asapp 👋 Thank you for the suggestion! \r\n\r\nBefore we dive into adding code, a disclaimer -- one of the current problems with `.generate()` is that there are too many options, scaring users away from the docs. This means that I will be conservative before giving the green light to add more options 🤗 \r\n\r\nWe do have an option to have control over extractive vs abstraction summarization, the `encoder_repetition_penalty` ([docs](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.encoder_repetition_penalty)). This is a multiplicative factor to the logits that increases/decreases the odds of reusing the tokens in the input.\r\n\r\nDo you have more use cases in mind, where your suggestion would be critical?", "Hi @gante! Thanks for the reply.\r\n\r\nI agree that there many options already 😅 I wasn't thinking of this as an additional option but more like an \"upgrade\" of the existing feature since it gives the user a bit more flexibility while keeping the previous functionality, i.e. tokens are boosted/penalized instead of forced/forbidden and users willing to forbid the appearance of certain token can still input float(\"-Inf\") as score.\r\n\r\nMain use case in mind was cheap model customization by a set of score,[tokens]. I guess, more generally, it is desirable to allow the model to generate a certain token if there is no natural replacement for it and discourage it otherwise; the sort of soft penalization that is allowed in other APIs.", "@iiglesias-asapp I see your point - controlling at a token level may be advantageous. Nevertheless, i) without a specific common use case in mind and ii) having not heard the demand for this feature before, I'm reluctant to add it. Remember that custom logits processors can be used, so not adding it to the codebase doesn't mean that it can't be used 🤗 \r\n\r\nLet's not close this issue and do the following. 
If this comment gets 10 reactions/this issue gets mentioned 10 times, then it means that folks have been searching for this feature. In that case, let's roll back my decision above, and add it to the codebase. That way, we can balance HF's limited maintenance resources with actual feature demand! (Whoever does the 10th react, plz tag me)\r\n\r\n@iiglesias-asapp does it sound good to you?\r\n", "Sounds good! Thanks for considering it @gante ", "Please add this because I have alpaca model and it was trained on a bad dataset with many cases of input and output fields having \"<noinput\" and \"nooutput>\" text in them which causes my LLM to constantly respond with those words :/", "@teknium1 I think that `bad_words_list` as it is would be enough for your example. But if you still feel something like the `logit_bias` parameter is what you need, react to @gante comment to make this available\r\n\r\n", "> @teknium1 I think that `bad_words_list` as it is would be enough for your example. But if you still feel something like the `logit_bias` parameter is what you need, react to @gante comment to make this available\r\n\r\nOh can you point me to where/how I can use the bad_words_list\r\n\r\nedit: nvm found it ty", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> custom logits processors\r\n\r\n\r\n> @iiglesias-asapp I see your point - controlling at a token level may be advantageous. Nevertheless, i) without a specific common use case in mind and ii) having not heard the demand for this feature before, I'm reluctant to add it. 
Remember that custom logits processors can be used, so not adding it to the codebase doesn't mean that it can't be used 🤗\r\n> \r\n> Let's not close this issue and do the following. If this comment gets 10 reactions/this issue gets mentioned 10 times, then it means that folks have been searching for this feature. In that case, let's roll back my decision above, and add it to the codebase. That way, we can balance HF's limited maintenance resources with actual feature demand! (Whoever does the 10th react, plz tag me)\r\n> \r\n> @iiglesias-asapp does it sound good to you?\r\n\r\n@gante\r\n\r\nThere are many use cases:\r\n\r\n1) Increase length of generated text, by making **end of text** token less probable.\r\n\r\n2) If you use few shot learning, and you have problem with labels that use used, you can increase probability of a label.\r\n for example:\r\n instruction: write me a joke about cars\r\n answer: some response\r\n instruction: write me a joke about [subject2]\r\n answer: some response\r\n instruction: write me a joke about [subject3]\r\n answer: some response\r\n \r\n then you need to increase probability for answer: in some cases, when not everything work as it should.\r\n encoded norepeat engrams is one option, but it sometimes generates strange text.\r\n\r\n 2a) The same thing if you do a few shot learning to generate html text.\r\n For example, when you want text not to repeat, if you set params for that,\r\n then also html tags wont be repeated and text will be strangely formated. So then you just increase the probability of html tags\r\nand you get much better output.\r\n\r\n\r\n3) paraphrasing for dataset multiplying\r\n to get more unique paraphrases, it is good to lower probability of original words\r\n\r\n4) openai has this feature, i really doubt they would implement something, and write documentation for that, if they did not think that some users would use it.\r\n\r\n\r\n\r\n\r\n\r\n \r\n\r\n\r\n\r\n", "@gante \r\nHere comes the 10th reaction! 
\r\nThanks for considering adding this feature. Really need this since I'm currently working on building APIs similar to [OpenAI API](https://platform.openai.com/docs/api-reference/completions/create#completions/create-logit_bias). It would be convenient if it is officially supported!", "As promised, I've added it to my queue! 🫡 ", "Hey everyone 👋 A way to bias specific tokens has been added on `main`. You can check its docs [here](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.SequenceBiasLogitsProcessor) (which contains a thorough example) and the corresponding `GenerationConfig` flag [here](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.sequence_bias). Let me know if it is not working properly 🤗 \r\n\r\nTagging the folks that have upvoted the comment above and/or replied on this thread for visibility: @iiglesias-asapp @teknium1 @liaeh @skevy @talkhaldi @francislabountyjr @tristanvdb @thjwhite @NanoCode012 @zhuhl98 @Oxi84 @andyh0913 @Vonathar " ]
1,678
1,687
1,687
NONE
null
### Feature request Support a soft penalization logits processor in the transformers generate method (extends NoBadWordsLogitsProcessor). ### Motivation - The [NoBadWordsLogitsProcessor](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.NoBadWordsLogitsProcessor) forbids the generation of certain tokens _in absolute terms_ by overwriting the logits to minus infinity - The request is to add a softer version of this, one in which certain tokens are penalized or boosted but _only mildly_ - This is in the spirit of the `logit_bias` parameter in the generate methods [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias) (OpenAI) and [here](https://docs.cohere.ai/reference/generate) (Cohere) - Possible use cases include, but are not limited to: enhance extractiveness during document summarization by boosting tokens present in the input and style guidance by penalizing/boosting the appropriate vocabulary ### Your contribution **Overview** - A new class is defined as `BendLogitsProcessor` based on the current `NoBadWordsLogitsProcessor` class - The current argument `bad_words_ids` is enriched to include a float value per list of tokens_ids, aka the penalization/boosting score. Positive large values encourage the token to be generated while negative large values do the opposite - Penalization/boosting scores are unbounded but could be later scaled as it seems to be the case in the implementations referenced above, e.g. `logit bias` is in [-10,10] [here](https://beta.openai.com/docs/api-reference/completions/create#completions/create-logit_bias) and [-100,100] [here](https://docs.cohere.ai/reference/generate) - Observe that `NoBadWordsLogitsProcessor` behavior could be recovered just by explicitly setting penalization/boosting scores to float(“-Inf”) **The new class** This is very much the same as `NoBadWordsLogitsProcessor`, I tried to keep as much as possible intact. 
There might be a more efficient implementation. ```py class BendLogitsProcessor(LogitsProcessor): """ [`LogitsProcessor`] that softly penalizes or boosts certain token/s Args: bend_list (`List[Union[float, List[int]]]`): List of list of lists with penalization/boosting coefficients and list of token ids. In order to get the token ids of the words, use `tokenizer(bad_words, add_prefix_space=True, add_special_tokens=False).input_ids`. eos_token_id (`int`): The id of the *end-of-sequence* token. """ def __init__(self, bend_list: List[Union[float, List[int]]], eos_token_id: int): self.bend_list = bend_list coefs = [coef for coef,tok in self.bend_list] words_ids = [tok for coef,tok in self.bend_list] if not isinstance(bend_list, List) or len(bend_list) == 0: raise ValueError(f"`bend_list` has to be a non-empty list, but is {bend_list}.") if any(not isinstance(word_ids, list) for word_ids in words_ids): raise ValueError(f"`words_ids` has to be a list of lists, but is {words_ids}.") if any( any((not isinstance(token_id, (int, np.integer)) or token_id < 0) for token_id in word_ids) for word_ids in words_ids ): raise ValueError( f"Each list in `words_ids` has to be a list of positive integers, but is {words_ids}." 
) if any(not isinstance(coef, float) for coef in coefs): raise ValueError(f"`coefs` has to be a float, but is {coefs}.") words_ids = list(filter(lambda token_seq: token_seq != [eos_token_id], words_ids)) self.words_id_length_1, self.coefs_length_1 = [],[] self.words_id_length_greater_than_1, self.coefs_length_greater_than_1 = [],[] for coef,word in zip(coefs,words_ids): if len(word) == 1: self.words_id_length_1.append(word[0]) self.coefs_length_1.append(coef) else: self.words_id_length_greater_than_1.append(word) self.coefs_length_greater_than_1.append(coef) for token_seq in self.words_id_length_greater_than_1: if len(token_seq) == 0: raise ValueError(f"Words token sequences {words_ids} cannot have an empty list") def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: masks_length_1, scores_length_1 = [], torch.zeros_like(scores) masks_length_greater_than_1, scores_length_greater_than_1 = [], torch.zeros_like(scores) if len(self.words_id_length_1) > 0: for word_id,coef in zip(self.words_id_length_1,self.coefs_length_1): mask = self._get_mask_length_1(scores,word_id) masks_length_1.append(mask) if coef >= 0: score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) * (1 + coef) + \ scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) / (1 + coef) if coef < 0: score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) / (1 + abs(coef)) + \ scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) * (1 + abs(coef)) scores_length_1 += score if len(self.words_id_length_greater_than_1) > 0: for word_ids,coef in zip(self.words_id_length_greater_than_1,self.coefs_length_greater_than_1): mask = self._get_mask_length_greater_than_1(input_ids.tolist(),scores,word_ids) masks_length_greater_than_1.append(mask) if coef >= 0: score = scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) * (1 + coef) + \ scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) / (1 + coef) if coef < 0: score = 
scores.masked_fill(scores.masked_fill(~mask,0) < 0,0) / (1 + abs(coef)) + \ scores.masked_fill(scores.masked_fill(~mask,0) >= 0,0) * (1 + abs(coef)) scores_length_greater_than_1 += score masks_all_lengths = masks_length_1 + masks_length_greater_than_1 one_large_mask = torch.zeros_like(scores).bool() for mask in masks_all_lengths: one_large_mask = torch.bitwise_or(one_large_mask,mask) base_scores = scores.masked_fill(one_large_mask,0.) new_scores = base_scores + scores_length_1 + scores_length_greater_than_1 return new_scores def _get_mask_length_1(self, scores: torch.FloatTensor, word_id:List[int]) -> torch.BoolTensor: mask = torch.zeros(scores.shape[1]) mask[word_id] = 1 return mask.unsqueeze(0).to(scores.device).bool() def _tokens_match(self, prev_tokens: List[int], tokens: List[int]) -> bool: if len(tokens) == 0: return True elif len(tokens) > len(prev_tokens): return False else: return prev_tokens[-len(tokens) :] == tokens def _calc_word_ids(self, prev_input_ids: List[List[int]], word_ids:List[int]) -> Iterable[int]: tokens = [] for prev_input_ids_slice in prev_input_ids: tokens_slice = [] if self._tokens_match(prev_input_ids_slice, word_ids[:-1]): tokens_slice.append(word_ids[-1]) tokens.append(tokens_slice) return tokens def _get_mask_length_greater_than_1(self, input_ids: list, scores: torch.FloatTensor, word_ids:List[int]) -> torch.BoolTensor: dynamic_tokens = self._calc_word_ids(input_ids, word_ids) mask_list = [] for idx, batch_tokens in enumerate(dynamic_tokens): for token in batch_tokens: # Eliminates invalid bad word IDs that are over the vocabulary size. if token <= scores.shape[1]: mask_list.append([idx, token]) else: logger.error( f"An invalid bad word ID is defined: {token}. This ID is not contained in the " "vocabulary, and is therefore ignored." 
) if not mask_list: mask = torch.zeros_like(scores).bool() else: mask = torch.LongTensor(mask_list) indices = torch.ones(len(mask)) mask = ( torch.sparse.LongTensor(mask.t(), indices, scores.size()) .to(scores.device) .to_dense() .bool() ) return mask ``` **An example** Take the summarization example in BART documentation [here](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartForConditionalGeneration.forward.example). Set `add_prefix_space=True` in the tokenizer and remove the `max_length = 20` in the generate method call. ```py from transformers import AutoTokenizer, BartForConditionalGeneration model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn") tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn", add_prefix_space=True) ARTICLE_TO_SUMMARIZE = ( "PG&E stated it scheduled the blackouts in response to forecasts for high winds " "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were " "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." ) inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt") # Generate Summary summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0) tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] ``` This yields the following summary: > Nearly 800 thousand customers were scheduled to be affected by the shutoffs. PG&E stated it scheduled the blackouts in response to forecasts for high winds. At this point the new logits processor class is applied. The objective will be to make the model output the number of customers affected as digits and replace the word “shutoffs”. We do so by penalizing the token ids for “thousand” and “shutoffs” while boosting the ones for “shutdowns”. 
```py logits_processor = LogitsProcessorList( [ BendLogitsProcessor( bend_list = [[-10000.,[7673]], # thousand [1000.,[5001, 29]], # shutdowns [-1000000.,[2572, 10816]], # shutoffs [-1000000.,[2572, 1529]], # shutoffs ], eos_token_id=model.config.eos_token_id ) ] ) # Generate Summary summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, logits_processor=logits_processor, renormalize_logits=True) tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] ``` If we call the the summary generation again, this time including the logits processor and renormalizing we get: > Nearly 800,000 customers were scheduled to be affected by the shutdowns. PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22168/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22168/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22167
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22167/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22167/comments
https://api.github.com/repos/huggingface/transformers/issues/22167/events
https://github.com/huggingface/transformers/issues/22167
1,624,454,672
I_kwDOCUB6oc5g0zYQ
22,167
No checkpoint saved during training
{ "login": "lxlxlxx", "id": 15678197, "node_id": "MDQ6VXNlcjE1Njc4MTk3", "avatar_url": "https://avatars.githubusercontent.com/u/15678197?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lxlxlxx", "html_url": "https://github.com/lxlxlxx", "followers_url": "https://api.github.com/users/lxlxlxx/followers", "following_url": "https://api.github.com/users/lxlxlxx/following{/other_user}", "gists_url": "https://api.github.com/users/lxlxlxx/gists{/gist_id}", "starred_url": "https://api.github.com/users/lxlxlxx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lxlxlxx/subscriptions", "organizations_url": "https://api.github.com/users/lxlxlxx/orgs", "repos_url": "https://api.github.com/users/lxlxlxx/repos", "events_url": "https://api.github.com/users/lxlxlxx/events{/privacy}", "received_events_url": "https://api.github.com/users/lxlxlxx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lxlxlxx thanks for raising this issue. Could you share which script you're running e.g. `run_mlm.py`? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.10.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.9.0 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction . ### Expected behavior There is no checkpoint saved during the training. My arguments are: --do_train --train_file xxxxx --output_dir xxxx --model_name_or_path bert-base-uncased --eval_steps 1 --num_train_epochs 5 --per_device_train_batch_size 24 --learning_rate 5e-5 --max_seq_length 16 --pooler_type cls --temp 0.05 --downscale_dim 64 --fp16 --num_hidden_layers 4 --keep_in_memory True --save_strategy "steps" --save_steps 20 Can anyone give me a hint on it? Many thanks for considering my request.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22167/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22166
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22166/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22166/comments
https://api.github.com/repos/huggingface/transformers/issues/22166/events
https://github.com/huggingface/transformers/issues/22166
1,624,354,320
I_kwDOCUB6oc5g0a4Q
22,166
Mismatch between CLIP fast and non-fast tokenizers
{ "login": "xenova", "id": 26504141, "node_id": "MDQ6VXNlcjI2NTA0MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xenova", "html_url": "https://github.com/xenova", "followers_url": "https://api.github.com/users/xenova/followers", "following_url": "https://api.github.com/users/xenova/following{/other_user}", "gists_url": "https://api.github.com/users/xenova/gists{/gist_id}", "starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xenova/subscriptions", "organizations_url": "https://api.github.com/users/xenova/orgs", "repos_url": "https://api.github.com/users/xenova/repos", "events_url": "https://api.github.com/users/xenova/events{/privacy}", "received_events_url": "https://api.github.com/users/xenova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Note: If you would like to get matching tokenizing before a fix goes in, installing `ftfy` first should do it.\r\n\r\nInitially looked to fix this specific issue around apostrophes, but it became apparent there were other potential formatting inconsistencies. For example, running the below also shows differences in things like `'ll` and `!!` tokenization: \r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer_fast = AutoTokenizer.from_pretrained('openai/clip-vit-base-patch16', use_fast=True)\r\ntokenizer = AutoTokenizer.from_pretrained('openai/clip-vit-base-patch16', use_fast=False)\r\n\r\ntext = \"A\\n'll !!to?'d''d of, can't.\"\r\n\r\nprint(tokenizer(text))\r\nprint(tokenizer_fast(text))\r\nprint(tokenizer(text) == tokenizer_fast(text))\r\n\r\n# Outputs:\r\n# {'input_ids': [49406, 320, 262, 865, 256, 256, 531, 286, 262, 323, 262, 262, 323, 539, 267, 753, 262, 339, 269, 49407], ...}\r\n# {'input_ids': [49406, 320, 1342, 748, 531, 13610, 323, 8445, 323, 539, 267, 753, 713, 269, 49407], ...}\r\n# False\r\n\r\n```\r\n\r\n\r\nI put up a first go at fixing this. It's ready for review but not merge until we pick a change to apply more broadly", "Hey! \r\nI can't reproduce the issue, when I ran your code I got: \r\n```python \r\n{'input_ids': [49406, 592, 1535, 1200, 1700, 589, 49407], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}\r\n{'input_ids': [49406, 592, 1535, 1200, 1700, 589, 49407], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}\r\n```", "I just updated to `transformers-4.27.2`, but it still produces the incorrect output. 
Are you sure you are running the non-fast tokenizer @ArthurZucker ?\r\n\r\nAs stated above by @connor-henderson, it's probably because you have `ftfy` installed, which I assume will use its own basic tokenizer.", "Hey Arthur and xenova, in my case uninstalling ftfy or commenting out [these import lines](https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/models/clip/tokenization_clip.py#L313-L317) leads to repro, I believe since [this conditional](https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/models/clip/tokenization_clip.py#L469-L470) determines whether the BasicTokenizer is used for CLIPTokenizer.", "I made sure that I was using fast yes 😉 \r\nThough it is our goal to have the same output from fast and non-fast, I don't really know why there is this ftfy used here. But yes, this is most probably not used in the `fast` tokenization. This also means that the expected behaviour should probably be the one that normalized the text with `ftfy`. This is something that is going to be hard to port to tokenizer depending on what kind of normalization is going on. ", "@ArthurZucker I believe it is the opposite, the mismatch happens when ftfy is not installed. (@connor-henderson correct me if I misunderstood your posts).", "> @ArthurZucker I believe it is the opposite, the mismatch happens when ftfy is not installed. (@connor-henderson correct me if I misunderstood your posts).\r\n\r\nYes, this is correct. I don't have ftfy installed, and I get the mismatch.", "@sgugger yes thanks that is what I was saying.\r\n\r\nI think this comes down to the expected behavior when using the BasicTokenizer generally. If it is supposed to match the fast tokenizer output I believe we have a bug. 
But if its not, and it's just expected to split naively on punctuation then I don't think we have a bug and I should close my PR", "I think this PR is good for ppl who do not have `ftfy`! Thanks both of you for pointing this out and will be reviewing the PR!", "Commenting to mark as not stale :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,687
1,687
CONTRIBUTOR
null
### System Info - `transformers` version: 4.26.1 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.8.1 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is a minimal working example to show the mismatch: ```python from transformers import AutoTokenizer tokenizer_fast = AutoTokenizer.from_pretrained('openai/clip-vit-base-patch16', use_fast=True) tokenizer = AutoTokenizer.from_pretrained('openai/clip-vit-base-patch16', use_fast=False) text = "You should've done this" print(tokenizer(text)) print(tokenizer_fast(text)) print(tokenizer(text) == tokenizer_fast(text)) # Outputs: # {'input_ids': [49406, 592, 1535, 262, 563, 1700, 589, 49407], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]} # {'input_ids': [49406, 592, 1535, 1200, 1700, 589, 49407], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]} # False ``` It appears to stem from the `'ve` token. ### Expected behavior The non-fast tokenization should match the fast tokenization (https://github.com/huggingface/tokenizers)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22166/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22165
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22165/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22165/comments
https://api.github.com/repos/huggingface/transformers/issues/22165/events
https://github.com/huggingface/transformers/issues/22165
1,624,260,024
I_kwDOCUB6oc5g0D24
22,165
TypeError: zero_grad() got an unexpected keyword argument 'set_to_none'
{ "login": "savitamittal1", "id": 39776179, "node_id": "MDQ6VXNlcjM5Nzc2MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/39776179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/savitamittal1", "html_url": "https://github.com/savitamittal1", "followers_url": "https://api.github.com/users/savitamittal1/followers", "following_url": "https://api.github.com/users/savitamittal1/following{/other_user}", "gists_url": "https://api.github.com/users/savitamittal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/savitamittal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/savitamittal1/subscriptions", "organizations_url": "https://api.github.com/users/savitamittal1/orgs", "repos_url": "https://api.github.com/users/savitamittal1/repos", "events_url": "https://api.github.com/users/savitamittal1/events{/privacy}", "received_events_url": "https://api.github.com/users/savitamittal1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, this has been fixed already on main. You just need to pull from source :-)", "ah, thankyou for quick fix. :)" ]
1,678
1,678
1,678
NONE
null
### System Info Getting below error while runing Bert pretraining using Huggingface trainer with deepspeed 0.8.2 version and pytorch 2.0RC Traceback (most recent call last): File "pretrain_glue.py", line 124, in <module> result = trainer.train() File "/opt/conda/envs/env/lib/python3.8/site-packages/transformers/trainer.py", line 1631, in train return inner_training_loop( File "/opt/conda/envs/env/lib/python3.8/site-packages/transformers/trainer.py", line 1814, in _inner_training_loop model.zero_grad(set_to_none=True) TypeError: zero_grad() got an unexpected keyword argument 'set_to_none' @sgugger , could this be due to below change? Enforce same behavior as PyTorch 2.0 for older versions (https://github.com/huggingface/transformers/pull/22136) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run Bert pretrain with HF trainer and deepspeed 0.8.2 version on Pytorch 2.0 RC build ### Expected behavior run with no errors
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22165/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22164
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22164/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22164/comments
https://api.github.com/repos/huggingface/transformers/issues/22164/events
https://github.com/huggingface/transformers/issues/22164
1,624,044,082
I_kwDOCUB6oc5gzPIy
22,164
Error when running pipeline with whisper and using the 'return_dict_in_generate=True' option
{ "login": "panagiotidi", "id": 4665941, "node_id": "MDQ6VXNlcjQ2NjU5NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/4665941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/panagiotidi", "html_url": "https://github.com/panagiotidi", "followers_url": "https://api.github.com/users/panagiotidi/followers", "following_url": "https://api.github.com/users/panagiotidi/following{/other_user}", "gists_url": "https://api.github.com/users/panagiotidi/gists{/gist_id}", "starred_url": "https://api.github.com/users/panagiotidi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/panagiotidi/subscriptions", "organizations_url": "https://api.github.com/users/panagiotidi/orgs", "repos_url": "https://api.github.com/users/panagiotidi/repos", "events_url": "https://api.github.com/users/panagiotidi/events{/privacy}", "received_events_url": "https://api.github.com/users/panagiotidi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker ", "Hey! Thanks for reporting. This is normal as the `pipeline` does not support returning the usual `dictionary`. \r\nWe should probably prevent this behaviour (raise an error when `return_dict_in_generate` is set in the pipeline) cc @Narsil this is a duplicate of another issue but I can't find it! \r\nedit: #21185\r\n", "Best recommendation in the mean time is to define a custom pipeline, where you process the inputs before feeding them to `super.preprocess`! ", "> Best recommendation in the mean time is to define a custom pipeline, where you process the inputs before feeding them to `super.preprocess`!\r\n\r\nThanks for your reply, I now understand the issue.\r\n\r\nHowever, I am not sure how to preprocess the input to achieve this. \r\nI can see the output and the dictionary still contains the tokens (inside the ModelOutput):\r\n\r\n```\r\n{'tokens': ModelOutput([('sequences', tensor([[50258, 50342, 50358, 50364, 1044, 291, 337, 1976, 0, 50864,\r\n 50257]])), ('scores', (tensor([[2.3064, -inf, -inf, ..., 2.8053, 2.7866, 3.3406]]), tensor([[3.7724, -inf, -inf, ..., 3.1328, 3.6590, 3.8489]]), tensor([[ -inf, -inf, -inf, ..., -7.8979, -7.7944, -11.4352]]), tensor([[-5.0041, -inf, -inf, ..., -5.5928, -5.6329, -6.7607]]), tensor([[16.9060, -inf, -inf, ..., -inf, -inf, -inf]]), tensor([[ 4.7684, -inf, -inf, ..., -4.7718, -4.7031, -6.6440]]), tensor([[ 3.5967, -inf, -inf, ..., -0.2559, -0.4887, -1.7837]]), tensor([[ 1.7885, -inf, -inf, ..., -8.9040, -8.4750, -12.0667]]), tensor([[ -inf, -inf, -inf, ..., -15.8636, -15.3132, -18.1436]]), tensor([[ -inf, -inf, -inf, ..., 13.3971, 12.9880, 10.2999]])))]), 'stride': (160000, 0, 26667)}\r\n```\r\n\r\nand where it fails is when it tries to execute `outputs[\"tokens\"].numpy()`. Would you mean maybe post process the output?\r\n\r\n", "Hi @panagiotidi , thanks for raising this issue. \r\n\r\nYes, in this case as the error is being raise in the `postprocess` method, this is the one you'd need to adapt. 
Generally for custom workflows, it's probably easier to start with lower-level API such as `AutoModel` to define your steps and then move to something like a custom pipeline. \r\n\r\nIf all that you want to do automatic speech recognition with the audio input, removing `return_dict_in_generate` from the `generate_kwargs` will work i.e.:\r\n\r\n```python\r\nfrom pathlib import Path\r\nfrom transformers import pipeline, AutomaticSpeechRecognitionPipeline, Pipeline, GenerationConfig\r\n\r\naudio_path = 'xxx.wav'\r\n\r\ngenerate_kwargs = {'temperature': 1, 'max_length': 448, 'output_scores': True}\r\n\r\npipe = pipeline(\r\n model=\"openai/whisper-small\",\r\n chunk_length_s=10,\r\n framework=\"pt\",\r\n batch_size=1\r\n)\r\n\r\nprint(pipe(audio_path, return_timestamps=True, generate_kwargs=generate_kwargs))\r\n```", "I am actually trying to implement the `--logprob_threshold` from the original paper of whisper as I would like to be able to experiment with it when transcribing. There is a relevant discussion [here](https://github.com/openai/whisper/discussions/654#discussioncomment-4510801), but as you said too, in order to implement in a pipeline, a custom implementation of post process is needed on the output results.\r\n\r\nWill you maybe include in later versions?", "@panagiotidi I don't know of any plans to add this at the moment. As this is a specific generation case, it's not something that's likely to be included into a pipeline. \r\n\r\nIf I've understood `--logprob_threshold`, then the desire is to stop generation if the average logprob is below a certain threshold. In this case, a custom [`Constraint` class](https://huggingface.co/docs/transformers/v4.27.1/en/internal/generation_utils#transformers.Constraint) could be implemented and passed in to the `generate_kwargs`. Questions about an implementation of this is probably best placed in the [forums](https://discuss.huggingface.co/). 
\r\n\r\nAs mentioned above, when applying custom code, it is easier to work from the `AutoModel` level first e.g. [adapting the examples in the docs](https://huggingface.co/docs/transformers/v4.27.1/en/model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: macOS-13.1-x86_64-i386-64bit - Python version: 3.9.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sanchit-gandhi @Narsil When running a simple whisper pipeline, e.g., using the options 'return_dict_in_generate': True and 'output_scores': True, e.g., ``` from pathlib import Path from transformers import pipeline, AutomaticSpeechRecognitionPipeline, Pipeline, GenerationConfig audio_path = 'xxx.wav' generate_kwargs = {'temperature': 1, 'max_length': 448, 'return_dict_in_generate': True, 'output_scores': True} pipe = pipeline( model="openai/whisper-small", chunk_length_s=10, framework="pt", batch_size=1 ) print(pipe(audio_path, return_timestamps=True, generate_kwargs=generate_kwargs)) ``` I am getting the following error: ``` Traceback (most recent call last): File "/Users/sofia/PycharmProjects/openAI-whisper/test4.py", line 39, in <module> print(pipe(audio_path, return_timestamps=True, generate_kwargs=generate_kwargs)) File "/Users/sofia/miniforge3/envs/openAI-whisper/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 378, in __call__ return super().__call__(inputs, **kwargs) File "/Users/sofia/miniforge3/envs/openAI-whisper/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1076, in __call__ return next( File "/Users/sofia/miniforge3/envs/openAI-whisper/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__ processed = self.infer(item, **self.params) File "/Users/sofia/miniforge3/envs/openAI-whisper/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 611, in postprocess items = outputs[key].numpy() 
AttributeError: 'ModelOutput' object has no attribute 'numpy' ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Run the code ``` from pathlib import Path from transformers import pipeline, AutomaticSpeechRecognitionPipeline, Pipeline, GenerationConfig audio_path = 'xxx.wav' generate_kwargs = {'temperature': 1, 'max_length': 448, 'return_dict_in_generate': True, 'output_scores': True} pipe = pipeline( model="openai/whisper-small", chunk_length_s=10, framework="pt", batch_size=1 ) print(pipe(audio_path, return_timestamps=True, generate_kwargs=generate_kwargs)) ``` ### Expected behavior I expect to get the text result accompanied with the timestamps and the prediction scores
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22164/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22163
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22163/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22163/comments
https://api.github.com/repos/huggingface/transformers/issues/22163/events
https://github.com/huggingface/transformers/pull/22163
1,623,964,867
PR_kwDOCUB6oc5MBjKw
22,163
Revert "Enforce same behavior as PyTorch 2.0 for older versions"
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
Reverts huggingface/transformers#22136 As we discovered this was breaking the DeepSpeed integration (and thus potential other integrations wrapping the model), it's safer to revert this change for now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22163/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22163/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22163", "html_url": "https://github.com/huggingface/transformers/pull/22163", "diff_url": "https://github.com/huggingface/transformers/pull/22163.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22163.patch", "merged_at": 1678815946000 }
https://api.github.com/repos/huggingface/transformers/issues/22162
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22162/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22162/comments
https://api.github.com/repos/huggingface/transformers/issues/22162/events
https://github.com/huggingface/transformers/pull/22162
1,623,719,145
PR_kwDOCUB6oc5MAuF3
22,162
Run all tests by default
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? This PR makes the default value for the crosstests PT<>TF and PT<>FLAX true by default. This way when a user runs tests locally, all tests are run (the only exception being the hub staging tests, which require setting an env variable anyway to use moon-staging instead of moon-landing). In the CI however each job runs the same tests as before since the env variables are set at False by default.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22162/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22162", "html_url": "https://github.com/huggingface/transformers/pull/22162", "diff_url": "https://github.com/huggingface/transformers/pull/22162.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22162.patch", "merged_at": 1678829444000 }
https://api.github.com/repos/huggingface/transformers/issues/22161
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22161/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22161/comments
https://api.github.com/repos/huggingface/transformers/issues/22161/events
https://github.com/huggingface/transformers/issues/22161
1,623,663,792
I_kwDOCUB6oc5gxySw
22,161
GPT Neox rotary embedding does not work with padding left
{ "login": "OlivierDehaene", "id": 23298448, "node_id": "MDQ6VXNlcjIzMjk4NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/23298448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OlivierDehaene", "html_url": "https://github.com/OlivierDehaene", "followers_url": "https://api.github.com/users/OlivierDehaene/followers", "following_url": "https://api.github.com/users/OlivierDehaene/following{/other_user}", "gists_url": "https://api.github.com/users/OlivierDehaene/gists{/gist_id}", "starred_url": "https://api.github.com/users/OlivierDehaene/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OlivierDehaene/subscriptions", "organizations_url": "https://api.github.com/users/OlivierDehaene/orgs", "repos_url": "https://api.github.com/users/OlivierDehaene/repos", "events_url": "https://api.github.com/users/OlivierDehaene/events{/privacy}", "received_events_url": "https://api.github.com/users/OlivierDehaene/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey thanks for reporting! ", "It is possible that this is not the root cause but there is an issue with these lines:\r\n\r\n```python\r\noffset = 0\r\nif has_layer_past:\r\n offset = layer_past[0].shape[-2]\r\n seq_len += offset\r\ncos, sin = self.rotary_emb(value, seq_len=seq_len)\r\nquery, key = apply_rotary_pos_emb(query_rot, key_rot, cos, sin, offset=offset)\r\n```\r\n\r\n`offset` and `seq_len` are not computed correctly when you have padding.\r\nOn a sidenote, it is impossible to have a single value for `offset` as different sequences in the batch might have different length and therefore different offsets when padding left.", "We use padding left extensively on the serving side as we have a dynamic batching logic that batches sequence of very different lengths together. \r\n\r\nWhile the pad==256 example above seems extreme in isolation, it is completely normal when serving. We sometimes even go higher in chat applications where a member of the batch has a very large history (> 1000 tokens) and other sequences only just started ( ~ 40 tokens).\r\n\r\nWe also serve all the models in bfloat16 if available and we almost always use sampling which amplifies the logits issue even more.", "Hey everyone! Yes, it is correct, it is pretty much the same issue as I reported [here](https://github.com/huggingface/transformers/pull/21853#issuecomment-1461028782) -- we should be passing `position_ids` all the way down to the attention layer, and compute the sequence length from it.\r\n\r\nWe have an open PR to fix the same issue with GPT-J (#22069), I'll make sure it is ported to GPT NeoX when it is merged. 
We are currently ironing out `torch.fx` issues (adding the correct behavior makes the tensors dynamic, which blocks existing features)", "Hi @OlivierDehaene, I'm actually in the middle of porting the fix from #22069 to GPT-Neox too, since I was also interested in that one (in parallel with other things including resolving this torch.fx issue).\r\n\r\nAlso for reference there's a similar existing issue which went stale: https://github.com/huggingface/transformers/issues/18999", "Hi @njhill!\r\nNice thanks for working on this! \r\nFor now I have a fix on my text-generation-inference fork as we have multiple neox in prod and I need a fix asap. It's sensibly the same to yours I think.\r\n\r\n```python\r\nclass RotaryEmbedding(torch.nn.Module):\r\n def __init__(self, dim, max_position_embeddings, base=10000, device=None):\r\n super().__init__()\r\n inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))\r\n self.register_buffer(\"inv_freq\", inv_freq)\r\n\r\n # Build here to make `torch.jit.trace` work.\r\n self.max_seq_len_cached = max_position_embeddings\r\n self.cos_cached = None\r\n self.sin_cached = None\r\n\r\n @staticmethod\r\n def rotate_half(x):\r\n \"\"\"Rotates half the hidden dims of the input.\"\"\"\r\n x1 = x[..., : x.shape[-1] // 2]\r\n x2 = x[..., x.shape[-1] // 2 :]\r\n return torch.cat((-x2, x1), dim=-1)\r\n\r\n @staticmethod\r\n def _create_cos_sin(inv_freq, max_position_embeddings, dtype, device):\r\n t = torch.arange(max_position_embeddings, device=inv_freq.device, dtype=inv_freq.dtype)\r\n freqs = torch.einsum(\"i,j->ij\", t, inv_freq)\r\n # Different from paper, but it uses a different permutation in order to obtain the same calculation\r\n emb = torch.cat((freqs, freqs), dim=-1)\r\n return emb.cos().to(device).to(dtype), emb.sin().to(device).to(dtype)\r\n\r\n def forward(self, q, k, position_ids, seq_len=None):\r\n # x: [bs, num_attention_heads, seq_len, head_size]\r\n if seq_len > self.max_seq_len_cached or self.cos_cached 
is None or self.sin_cached is None:\r\n if seq_len > self.max_seq_len_cached:\r\n self.max_seq_len_cached = seq_len\r\n self.cos_cached, self.sin_cached = self._create_cos_sin(\r\n self.inv_freq, self.max_seq_len_cached, q.dtype, q.device\r\n )\r\n cos = self.cos_cached[position_ids].unsqueeze(1)\r\n sin = self.sin_cached[position_ids].unsqueeze(1)\r\n\r\n q_embed = (q * cos) + (rotate_half(q) * sin)\r\n k_embed = (k * cos) + (rotate_half(k) * sin)\r\n return q_embed, k_embed\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,684
1,684
MEMBER
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-1097-aws-x86_64-with-glibc2.27 - Python version: 3.10.9 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker, @younesbelkada, @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b", device_map="auto") f_not_padded = model.forward(**tokenizer(["<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"], padding=False, return_tensors="pt")) f_padded = model.forward(**tokenizer(["<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"], padding=True, pad_to_multiple_of=256, return_tensors="pt")) torch.testing.assert_allclose(f_not_padded.logits[:, -1], f_padded.logits[:, -1]) # AssertionError: Tensor-likes are not close! 
# Mismatched elements: 6057 / 50288 (12.0%) # Greatest absolute difference: 0.0003177821636199951 at index (0, 4649) (up to 1e-05 allowed) # Greatest relative difference: 1.5682868874196898 at index (0, 30410) (up to 0.0001 allowed) ``` The problem is exacerbated in bfloat16 ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("OpenAssistant/oasst-sft-1-pythia-12b", device_map="auto", torch_dtype=torch.bfloat16) f_not_padded = model.forward(**tokenizer(["<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"], padding=False, return_tensors="pt")) f_padded = model.forward(**tokenizer(["<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"], padding=True, pad_to_multiple_of=256, return_tensors="pt")) torch.testing.assert_allclose(f_not_padded.logits[:, -1], f_padded.logits[:, -1]) # AssertionError: Tensor-likes are not equal! # Mismatched elements: 49417 / 50288 (98.3%) # Greatest absolute difference: 1.154541015625 at index (0, 50271) # Greatest relative difference: 2058.906976744186 at index (0, 29917) ``` ### Expected behavior padding left should have no influence on the resulting logits. While the differences do not look like much, it has a huge impact on generation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22161/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22161/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22160
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22160/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22160/comments
https://api.github.com/repos/huggingface/transformers/issues/22160/events
https://github.com/huggingface/transformers/issues/22160
1,623,612,178
I_kwDOCUB6oc5gxlsS
22,160
[i18n-it] Translating docs to it
{ "login": "davidegazze", "id": 1748729, "node_id": "MDQ6VXNlcjE3NDg3Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/1748729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidegazze", "html_url": "https://github.com/davidegazze", "followers_url": "https://api.github.com/users/davidegazze/followers", "following_url": "https://api.github.com/users/davidegazze/following{/other_user}", "gists_url": "https://api.github.com/users/davidegazze/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidegazze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidegazze/subscriptions", "organizations_url": "https://api.github.com/users/davidegazze/orgs", "repos_url": "https://api.github.com/users/davidegazze/repos", "events_url": "https://api.github.com/users/davidegazze/events{/privacy}", "received_events_url": "https://api.github.com/users/davidegazze/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "Please stop opening those issues without filling the template. This is spamming every maintainer of the library." ]
1,678
1,678
1,678
CONTRIBUTOR
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). 
## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 --> ## How-to guides - [ ] [perf_infer_gpu_one](https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_infer_gpu_one.mdx)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22160/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22159
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22159/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22159/comments
https://api.github.com/repos/huggingface/transformers/issues/22159/events
https://github.com/huggingface/transformers/pull/22159
1,623,583,846
PR_kwDOCUB6oc5MAQy9
22,159
Load optimizer state on CPU to avoid CUDA OOM
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? As reported in #22123, resuming from a checkpoint can cause a user to run out of memory if the optimizer state is loaded directly on GPU. This PR loads it on CPU by default and it will be copied over to the proper device by PyTorch in `load_state_dict`. This might make checkpoint loading a bit slower (so just once) but will benefit users training large models. Fixes #22123
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22159/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22159", "html_url": "https://github.com/huggingface/transformers/pull/22159", "diff_url": "https://github.com/huggingface/transformers/pull/22159.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22159.patch", "merged_at": 1678829433000 }
https://api.github.com/repos/huggingface/transformers/issues/22158
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22158/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22158/comments
https://api.github.com/repos/huggingface/transformers/issues/22158/events
https://github.com/huggingface/transformers/pull/22158
1,623,556,381
PR_kwDOCUB6oc5MAK5X
22,158
to_pil - don't rescale if int and in range 0-255
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? #21969 Introduced a bug where binary masks would have their values rescaled to between 0-255. This was because of [this part](https://github.com/huggingface/transformers/blob/4063fd9cba6b72ebfd5c663a307ab9d5ff1a153d/src/transformers/image_transforms.py#L161) of the logic check. The original assumption was that inputs with their values between 0-1 would be rescaled images with float pixels. However, binary masks aren't and shouldn't be rescaled. We now check first if the input is of type uint8. Then check if any precision is lost when converting to int and that the int values are in the valid range 0-255 and finally if float values are between 0-1. Fixes #22147 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22158/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22158", "html_url": "https://github.com/huggingface/transformers/pull/22158", "diff_url": "https://github.com/huggingface/transformers/pull/22158.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22158.patch", "merged_at": 1678808624000 }
https://api.github.com/repos/huggingface/transformers/issues/22157
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22157/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22157/comments
https://api.github.com/repos/huggingface/transformers/issues/22157/events
https://github.com/huggingface/transformers/issues/22157
1,623,474,670
I_kwDOCUB6oc5gxEHu
22,157
LayoutLM model only able to classify individual words instead of entire sections
{ "login": "keval2415", "id": 105478351, "node_id": "U_kgDOBkl4zw", "avatar_url": "https://avatars.githubusercontent.com/u/105478351?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keval2415", "html_url": "https://github.com/keval2415", "followers_url": "https://api.github.com/users/keval2415/followers", "following_url": "https://api.github.com/users/keval2415/following{/other_user}", "gists_url": "https://api.github.com/users/keval2415/gists{/gist_id}", "starred_url": "https://api.github.com/users/keval2415/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keval2415/subscriptions", "organizations_url": "https://api.github.com/users/keval2415/orgs", "repos_url": "https://api.github.com/users/keval2415/repos", "events_url": "https://api.github.com/users/keval2415/events{/privacy}", "received_events_url": "https://api.github.com/users/keval2415/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @keval2415 - thanks for opening an issue. \r\n\r\nCan you please use the [forum](https://discuss.huggingface.co/) for questions like this. We try to keep the github issues reserved for bugs or feature requests. " ]
1,678
1,678
1,678
NONE
null
### Model description Model I am using (LayoutLM ...): Here, I would like to develop a custom resume parser model that can accurately predict the sections for *EDUCATION*, *SKILLS*, and *EXPERIENCE* based on the resume. I have fine-tuned the *LayoutLMv3* model on a custom dataset that is similar to the *FUNSD* dataset. Although the LayoutLM model can predict education keywords, it only does so at the word level. For instance, if the resume states "My education is in computer engineering from LD College Ahmedabad," the model will label "computer" and "engineering" as *EDUCATION*. However, I aim to have all classified words in a single section rather than in individual word sections. Therefore, here are some random screenshots of the LayoutLM model output. ![Screenshot from 2023-03-13 18-36-47](https://user-images.githubusercontent.com/105478351/225013023-a1ea58b0-4e26-49f2-9cd5-cdebc8365d55.png) And here, I would like the output to include box coordinates for the EDUCATION section as well as the SKILLS section, identified by their respective keywords. ![Screenshot from 2023-03-13 18-32-05](https://user-images.githubusercontent.com/105478351/225013056-31073b3f-605d-4728-8058-0300bb8fd977.png) Note: I have attempted to use the *Layout Parser* model with the *PublayNet* dataset. However, this model was unable to accurately predict and classify the sections for *EDUCATION*, *SKILLS,* *EXPERIENCE*, etc. If there are any other models that would be suitable for my use case, please kindly suggest them. *Thank you all for your help.* ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22157/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22156
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22156/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22156/comments
https://api.github.com/repos/huggingface/transformers/issues/22156/events
https://github.com/huggingface/transformers/issues/22156
1,623,287,790
I_kwDOCUB6oc5gwWfu
22,156
[i18n-it] Translating docs to it
{ "login": "davidegazze", "id": 1748729, "node_id": "MDQ6VXNlcjE3NDg3Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/1748729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidegazze", "html_url": "https://github.com/davidegazze", "followers_url": "https://api.github.com/users/davidegazze/followers", "following_url": "https://api.github.com/users/davidegazze/following{/other_user}", "gists_url": "https://api.github.com/users/davidegazze/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidegazze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidegazze/subscriptions", "organizations_url": "https://api.github.com/users/davidegazze/orgs", "repos_url": "https://api.github.com/users/davidegazze/repos", "events_url": "https://api.github.com/users/davidegazze/events{/privacy}", "received_events_url": "https://api.github.com/users/davidegazze/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[]
1,678
1,678
1,678
CONTRIBUTOR
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). 
## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 --> ## How-to guides - [ ] [big_models](https://github.com/huggingface/transformers/blob/main/docs/source/en/big_models.mdx)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22156/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22155
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22155/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22155/comments
https://api.github.com/repos/huggingface/transformers/issues/22155/events
https://github.com/huggingface/transformers/pull/22155
1,623,216,672
PR_kwDOCUB6oc5L_A4S
22,155
Fix GPT2 position ids issues
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22155). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
COLLABORATOR
null
# What does this PR do? Follow up PR of #21080
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22155/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22155", "html_url": "https://github.com/huggingface/transformers/pull/22155", "diff_url": "https://github.com/huggingface/transformers/pull/22155.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22155.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22154
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22154/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22154/comments
https://api.github.com/repos/huggingface/transformers/issues/22154/events
https://github.com/huggingface/transformers/issues/22154
1,623,200,185
I_kwDOCUB6oc5gwBG5
22,154
data collator or tokenizer.pad has a bug when adding new features to data
{ "login": "lanlanlan3", "id": 28173281, "node_id": "MDQ6VXNlcjI4MTczMjgx", "avatar_url": "https://avatars.githubusercontent.com/u/28173281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lanlanlan3", "html_url": "https://github.com/lanlanlan3", "followers_url": "https://api.github.com/users/lanlanlan3/followers", "following_url": "https://api.github.com/users/lanlanlan3/following{/other_user}", "gists_url": "https://api.github.com/users/lanlanlan3/gists{/gist_id}", "starred_url": "https://api.github.com/users/lanlanlan3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lanlanlan3/subscriptions", "organizations_url": "https://api.github.com/users/lanlanlan3/orgs", "repos_url": "https://api.github.com/users/lanlanlan3/repos", "events_url": "https://api.github.com/users/lanlanlan3/events{/privacy}", "received_events_url": "https://api.github.com/users/lanlanlan3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lanlanlan3 - thanks for opening this issue. \r\n\r\nThe reason this error is being thrown is that `\"new_feature\"` won't be padded and therefore the tensors can't be concatenated to create a batch. This can be seen if the inputs passed are lists and the return type not specified: \r\n\r\n```python\r\n>>> samples = [\r\n... {'input_ids': list(range(8)), 'new_feature': list(range(3))},\r\n... {'input_ids': list(range(11)), 'new_feature': list(range(5))},\r\n... ]\r\n>>> batch = tokenizer.pad(samples, max_length=12, padding='max_length', return_tensors=None)\r\n{'input_ids': [[0, 1, 2, 3, 4, 5, 6, 7, 0, 0, 0, 0], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0]], 'new_feature': [[0, 1, 2], [0, 1, 2, 3, 4]]}\r\n```\r\n\r\nThis occurs for a few reasons: \r\n* All input features are expected to be padded by the same amount per sample. The amount of padding needed is calculated based on the padding strategy and the length of the `input_ids`. For example, if `padding='max_length'`, then for sample 0, the padding to be added is calculated as `max_length - sample_length = 5 - 3 = 2` for all features (`input_ids` and `new_feature`). However `new_features` isn't padded at all because of the next point. \r\n* The padding behaviour for `\"new_feature\"` is undefined i.e. what should the sequence be padded with? You can see how this is controlled in the padding internals [here](https://github.com/huggingface/transformers/blob/ebdb185befaa821304d461ed6aa20a17e4dc3aa2/src/transformers/tokenization_utils_base.py#L3379).\r\n\r\nThis behaviour from the tokenizer is expected. \r\n\r\nNote: `model_input_names` defines the expected inputs to the model during the forward pass. Therefore changing this will mean that the tokenizer outputs aren't in the expected format for a model in the transformers library. 
To modify it, it should be passed when creating the tokenizer, rather than modifying the class attribute directly: \r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_input_names=[\"input_ids\", \"new_feature\"])\r\n```\r\n\r\nIf the outputs of the tokenizer are being passed to a custom model that ingests `input_ids` and `new_feature`, and the model expects them to be of different length, then I would suggest defining your own tokenizer class which subclasses `PreTrainedTokenizer` or `BertTokenizer`; or a custom data collator which performs the expected padding behaviour. \r\n\r\n", "> Hi @lanlanlan3 - thanks for opening this issue.\r\n> \r\n> The reason this error is being thrown is that `\"new_feature\"` won't be padded and therefore the tensors can't be concatenated to create a batch. This can be seen if the inputs passed are lists and the return type not specified:\r\n> \r\n> ```python\r\n> >>> samples = [\r\n> ... {'input_ids': list(range(8)), 'new_feature': list(range(3))},\r\n> ... {'input_ids': list(range(11)), 'new_feature': list(range(5))},\r\n> ... ]\r\n> >>> batch = tokenizer.pad(samples, max_length=12, padding='max_length', return_tensors=None)\r\n> {'input_ids': [[0, 1, 2, 3, 4, 5, 6, 7, 0, 0, 0, 0], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0]], 'new_feature': [[0, 1, 2], [0, 1, 2, 3, 4]]}\r\n> ```\r\n> \r\n> This occurs for a few reasons:\r\n> \r\n> * All input features are expected to be padded by the same amount per sample. The amount of padding needed is calculated based on the padding strategy and the length of the `input_ids`. For example, if `padding='max_length'`, then for sample 0, the padding to be added is calculated as `max_length - sample_length = 5 - 3 = 2` for all features (`input_ids` and `new_feature`). However `new_features` isn't padded at all because of the next point.\r\n> * The padding behaviour for `\"new_feature\"` is undefined i.e. what should the sequence be padded with? 
You can see how this is controlled in the padding internals [here](https://github.com/huggingface/transformers/blob/ebdb185befaa821304d461ed6aa20a17e4dc3aa2/src/transformers/tokenization_utils_base.py#L3379).\r\n> \r\n> This behaviour from the tokenizer is expected.\r\n> \r\n> Note: `model_input_names` defines the expected inputs to the model during the forward pass. Therefore changing this will mean that the tokenizer outputs aren't in the expected format for a model in the transformers library. To modify it, it should be passed when creating the tokenizer, rather than modifying the class attribute directly:\r\n> \r\n> ```python\r\n> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_input_names=[\"input_ids\", \"new_feature\"])\r\n> ```\r\n> \r\n> If the outputs of the tokenizer are being passed to a custom model that ingests `input_ids` and `new_feature`, and the model expects them to be of different length, then I would suggest defining your own tokenizer class which subclasses `PreTrainedTokenizer` or `BertTokenizer`; or a custom data collator which performs the expected padding behaviour.\r\n\r\n![image](https://user-images.githubusercontent.com/28173281/225197675-c8bb0283-50ff-45a7-a7c8-6cfae645f468.png)\r\n" ]
1,678
1,679
1,679
NONE
null
### System Info transformers 4.26.1, mac m1, python 3.9.13 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') if 'new_feature' not in tokenizer.model_input_names: tokenizer.model_input_names.append('new_feature') samples = [ {'input_ids': torch.arange(3), 'new_feature': torch.arange(8)}, {'input_ids': torch.arange(5), 'new_feature': torch.arange(11)}, ] batch = tokenizer.pad(samples) ``` ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`new_feature` in this case) have excessive nesting (inputs type `list` where type `int` is expected). ### Expected behavior no error. batch['input_ids'].shape == (2, 5) batch['new_feature'].shape == (2, 11)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22154/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22153
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22153/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22153/comments
https://api.github.com/repos/huggingface/transformers/issues/22153/events
https://github.com/huggingface/transformers/issues/22153
1,623,179,826
I_kwDOCUB6oc5gv8Iy
22,153
Specify metric aggregation strategy when evaluating on multiple validation datasets using `Trainer` class
{ "login": "larrylawl", "id": 40198156, "node_id": "MDQ6VXNlcjQwMTk4MTU2", "avatar_url": "https://avatars.githubusercontent.com/u/40198156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/larrylawl", "html_url": "https://github.com/larrylawl", "followers_url": "https://api.github.com/users/larrylawl/followers", "following_url": "https://api.github.com/users/larrylawl/following{/other_user}", "gists_url": "https://api.github.com/users/larrylawl/gists{/gist_id}", "starred_url": "https://api.github.com/users/larrylawl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/larrylawl/subscriptions", "organizations_url": "https://api.github.com/users/larrylawl/orgs", "repos_url": "https://api.github.com/users/larrylawl/repos", "events_url": "https://api.github.com/users/larrylawl/events{/privacy}", "received_events_url": "https://api.github.com/users/larrylawl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is a bit too niche to be added in the Trainer, it might be best to write your own subclass for this." ]
1,678
1,678
1,678
NONE
null
### Feature request Specify a metric aggregation strategy when evaluating on multiple validation datasets using the `Trainer` class. This metric aggregation strategy will output a `metric: Dict[str, float]` by aggregating the metrics computed from the multiple validation sets. ### Motivation When evaluating on multiple validation datasets using `Trainer`, the metric used to select the best checkpoint is taken only from the last dataset evaluated. However, the user will likely want an aggregation of metrics from the multiple validation datasets. https://github.com/huggingface/transformers/blob/ff8870350151091d3d8b2af4c1c0fa3ebcc1052a/src/transformers/trainer.py#L2224-L2235 ### Your contribution I would be happy to make a PR. My idea is as follows: 1. Collate all metrics from the evaluation datasets 2. Aggregate the metrics based on a user-specified strategy (e.g. average a user-specified common metric across all evaluation datasets; here, I can leverage the TrainingArgument `metric_for_best_model`). This step should return a `Dict[str, float]` so as to be compatible with the `metric` type.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22153/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22152
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22152/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22152/comments
https://api.github.com/repos/huggingface/transformers/issues/22152/events
https://github.com/huggingface/transformers/pull/22152
1,623,126,659
PR_kwDOCUB6oc5L-tch
22,152
Create MaskedImageCompletionOutput and fix ViT docs
{ "login": "alaradirik", "id": 8944735, "node_id": "MDQ6VXNlcjg5NDQ3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaradirik", "html_url": "https://github.com/alaradirik", "followers_url": "https://api.github.com/users/alaradirik/followers", "following_url": "https://api.github.com/users/alaradirik/following{/other_user}", "gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions", "organizations_url": "https://api.github.com/users/alaradirik/orgs", "repos_url": "https://api.github.com/users/alaradirik/repos", "events_url": "https://api.github.com/users/alaradirik/events{/privacy}", "received_events_url": "https://api.github.com/users/alaradirik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Gently pinging @amyeroberts for the final approval", "masked-image-completion pipeline sounds awesome 🙌" ]
1,678
1,678
1,678
CONTRIBUTOR
null
# What does this PR do? Fixes the output class and docs of `ViTForMaskedImageModeling`, which returns images of shape (batch_size, num_channels, height, width), whereas `MaskedLMOutput` logits are of shape (batch_size, seq_length, vocab_size). **Notes:** - We have no checkpoints for `ViTForMaskedImageModeling` and the docstrings use the pretrained ViTModel combined with random head parameters. I will open a separate PR for the other affected model - `DeiTForMaskedImageModeling`, for which no checkpoints are available either. - Swin has its own MaskedImageOutput class but only has base model checkpoints (trained on the masked image modeling task) and no task-specific checkpoints. - I'm planning to create a masked-image-completion pipeline and add Swin and ICT (once Sheon's PR is merged). CC: @sheonhan ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22152/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/22152/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22152", "html_url": "https://github.com/huggingface/transformers/pull/22152", "diff_url": "https://github.com/huggingface/transformers/pull/22152.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22152.patch", "merged_at": 1678802119000 }
https://api.github.com/repos/huggingface/transformers/issues/22151
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22151/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22151/comments
https://api.github.com/repos/huggingface/transformers/issues/22151/events
https://github.com/huggingface/transformers/pull/22151
1,623,098,578
PR_kwDOCUB6oc5L-ncq
22,151
Translation Italian: perf_train_cpu and perf_train_cpu_many
{ "login": "nickprock", "id": 11136646, "node_id": "MDQ6VXNlcjExMTM2NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11136646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickprock", "html_url": "https://github.com/nickprock", "followers_url": "https://api.github.com/users/nickprock/followers", "following_url": "https://api.github.com/users/nickprock/following{/other_user}", "gists_url": "https://api.github.com/users/nickprock/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickprock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickprock/subscriptions", "organizations_url": "https://api.github.com/users/nickprock/orgs", "repos_url": "https://api.github.com/users/nickprock/repos", "events_url": "https://api.github.com/users/nickprock/events{/privacy}", "received_events_url": "https://api.github.com/users/nickprock/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
## What does this PR do? Italian translation of the docs related to CPU training performance of :hugs: Transformers. * updated _toctree.yml * added perf_train_cpu.mdx * added perf_train_cpu_many.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) @sgugger, @stevhliu and @MKhalusova @omarespejel
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22151/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22151", "html_url": "https://github.com/huggingface/transformers/pull/22151", "diff_url": "https://github.com/huggingface/transformers/pull/22151.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22151.patch", "merged_at": 1678792177000 }
https://api.github.com/repos/huggingface/transformers/issues/22150
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22150/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22150/comments
https://api.github.com/repos/huggingface/transformers/issues/22150/events
https://github.com/huggingface/transformers/issues/22150
1,622,997,697
I_kwDOCUB6oc5gvPrB
22,150
Returning n-best hypotheses from Wav2Vec2ProcessorWithLM decoder
{ "login": "vsokolovskii", "id": 48914918, "node_id": "MDQ6VXNlcjQ4OTE0OTE4", "avatar_url": "https://avatars.githubusercontent.com/u/48914918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vsokolovskii", "html_url": "https://github.com/vsokolovskii", "followers_url": "https://api.github.com/users/vsokolovskii/followers", "following_url": "https://api.github.com/users/vsokolovskii/following{/other_user}", "gists_url": "https://api.github.com/users/vsokolovskii/gists{/gist_id}", "starred_url": "https://api.github.com/users/vsokolovskii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vsokolovskii/subscriptions", "organizations_url": "https://api.github.com/users/vsokolovskii/orgs", "repos_url": "https://api.github.com/users/vsokolovskii/repos", "events_url": "https://api.github.com/users/vsokolovskii/events{/privacy}", "received_events_url": "https://api.github.com/users/vsokolovskii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "> \r\n\r\nplease, tell me your opinion on this feature :)" ]
1,678
1,679
1,679
CONTRIBUTOR
null
### Feature request Currently, the [Wav2Vec2ProcessorWithLM](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L67) decode function returns [only the best hypothesis](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L572). Shall we extend its functionality and make it return the n-best hypotheses, logit_scores, lm_scores and word_offsets, so that people can rescore these hypotheses with a larger LM? For example, take a look at the [NeMo article](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html#neural-rescoring) regarding the rescoring of n-best hypotheses. ### Motivation I suppose many people use n-gram models during the shallow-fusion stage; n-gram models are a good fit during the beam search because they are fast. People perform rescoring of the n-best hypotheses with a larger LM (using larger models during decoding is too slow, so it makes sense to apply them when rescoring the n-best hypotheses that come out of the ASR system). They fuse the score which comes out of the ASR with the perplexity-like score from the LM. If this external model is trained on in-domain data, it will drastically improve the WER of the resulting model. ### Your contribution If this sounds like a good feature to you that could potentially be adopted, let me know and I'll prepare the PR 😃
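As a rough sketch of the rescoring step this feature would enable (`Hypothesis` and `rescore` are made-up names, not existing transformers API): each beam-search hypothesis keeps its ASR-side score, and the score from a larger external LM is linearly fused in before re-ranking:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Hypothesis:
    text: str
    asr_score: float  # logit + n-gram LM score from the beam-search decoder
    lm_score: float   # score from a larger external LM (e.g. log-likelihood)


def rescore(hypotheses: List[Hypothesis], alpha: float = 0.5) -> List[Hypothesis]:
    """Re-rank n-best hypotheses by fusing ASR and external-LM scores."""
    return sorted(hypotheses, key=lambda h: h.asr_score + alpha * h.lm_score, reverse=True)
```

The fusion weight `alpha` would typically be tuned on a held-out set; this is only an illustration of why the decoder needs to expose the per-hypothesis scores in the first place.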
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22150/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22149
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22149/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22149/comments
https://api.github.com/repos/huggingface/transformers/issues/22149/events
https://github.com/huggingface/transformers/issues/22149
1,622,903,657
I_kwDOCUB6oc5gu4tp
22,149
Failed to dump torchscript model for GPT2
{ "login": "zhuango", "id": 5491519, "node_id": "MDQ6VXNlcjU0OTE1MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5491519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhuango", "html_url": "https://github.com/zhuango", "followers_url": "https://api.github.com/users/zhuango/followers", "following_url": "https://api.github.com/users/zhuango/following{/other_user}", "gists_url": "https://api.github.com/users/zhuango/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhuango/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhuango/subscriptions", "organizations_url": "https://api.github.com/users/zhuango/orgs", "repos_url": "https://api.github.com/users/zhuango/repos", "events_url": "https://api.github.com/users/zhuango/events{/privacy}", "received_events_url": "https://api.github.com/users/zhuango/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for reporting. This is indeed a bug, will see what we can do to fix that!", "Hi @zhuango,\r\n\r\nI think your problem is more related to how you trace the model rather than the transformers library itself. Since you're tracing a function (not the model itself), JIT trace knows nothing about model parameters, but instead sees them as unnamed tensors that take part in the forward pass calculations. As the origins of these tensors are unknown, it cannot build an autograd chain for them, but since those tensors have autograd enabled, it shows this error.\r\n\r\nSo, I see the following ways you could solve this:\r\n\r\n1. Disable autograd for all model parameters before tracing:\r\n```\r\nmodel.requires_grad_(False)\r\n```\r\n\r\n2. Transform your `dict_test `function into a model that wraps the original model and trace it (this way JIT will discover model parameters and corresponding tensors and will be able to use autograd for them):\r\n```\r\nclass DictModel(torch.nn.Module):\r\n def __init__(self, model):\r\n super().__init__()\r\n self.model = model\r\n\r\n def forward(self, inputs: Dict[str, torch.Tensor]):\r\n return self.model(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'])\r\n\r\ndict_model = DictModel(model)\r\nmodel_scripted = torch.jit.trace(dict_model, inputs)\r\n```\r\n\r\n3. Just trace the model itself sending input parameters as a tuple instead of a dict (but I guess you intentionally want to use a dict to make the resulting torchscript usage easier?):\r\n```\r\ntorch.jit.trace(model, ...)\r\n```\r\n\r\n@ArthurZucker, let me know if you think this needs any additions to the library itself or documentation?", "Hi @vvmnnnkv, thanks a lot. That works for me.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info python version: 3.7 transformers version: 4.26.1 ### Who can help? @ArthurZucker, @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` model_inputs = dict( input_ids=torch.zeros((1, 1024), dtype=torch.long).cuda(), attention_mask=torch.ones((1, 1024), dtype=torch.long).cuda()) model = GPT2LMHeadModel.from_pretrained(args.model, torchscript=True).eval().cuda() def dict_test(example_inputs: Dict[str, torch.Tensor]): return model(input_ids=example_inputs['input_ids'], attention_mask=example_inputs['attention_mask']) model_scripted = torch.jit.trace(dict_test, model_inputs) torch.jit.save(model_scripted, "traced_bert.pt") ``` I used the above code to trace a GPT-2 TorchScript model and got the following error: `RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient` when running to ``` "./transformers/models/gpt2/modeling_gpt2.py", line 830, in forward inputs_embeds = self.wte(input_ids) ``` ### Expected behavior Generate a GPT-2 TorchScript model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22149/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22148
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22148/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22148/comments
https://api.github.com/repos/huggingface/transformers/issues/22148/events
https://github.com/huggingface/transformers/pull/22148
1,622,834,153
PR_kwDOCUB6oc5L9vMm
22,148
Update 2 doctest expected values for torch 2.0.0
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? 2 doctests need to update their expected values with torch 2.0.0. (same reason as in #21975)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22148/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22148", "html_url": "https://github.com/huggingface/transformers/pull/22148", "diff_url": "https://github.com/huggingface/transformers/pull/22148.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22148.patch", "merged_at": 1678785197000 }
https://api.github.com/repos/huggingface/transformers/issues/22147
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22147/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22147/comments
https://api.github.com/repos/huggingface/transformers/issues/22147/events
https://github.com/huggingface/transformers/issues/22147
1,622,778,172
I_kwDOCUB6oc5guaE8
22,147
OneFormerProcessor、MaskFormerImageProcessor will cause errors if segmentation_maps only have elements 0 and 1
{ "login": "yuyijiong", "id": 73890704, "node_id": "MDQ6VXNlcjczODkwNzA0", "avatar_url": "https://avatars.githubusercontent.com/u/73890704?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuyijiong", "html_url": "https://github.com/yuyijiong", "followers_url": "https://api.github.com/users/yuyijiong/followers", "following_url": "https://api.github.com/users/yuyijiong/following{/other_user}", "gists_url": "https://api.github.com/users/yuyijiong/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuyijiong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuyijiong/subscriptions", "organizations_url": "https://api.github.com/users/yuyijiong/orgs", "repos_url": "https://api.github.com/users/yuyijiong/repos", "events_url": "https://api.github.com/users/yuyijiong/events{/privacy}", "received_events_url": "https://api.github.com/users/yuyijiong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts @alaradirik " ]
1,678
1,678
1,678
NONE
null
### System Info transformers 4.26.0 does not have this bug, but 4.27.0.dev0 does. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation, OneFormerImageProcessor, OneFormerConfig from transformers import Mask2FormerImageProcessor, Mask2FormerForUniversalSegmentation from PIL import Image import requests import torch import numpy as np import matplotlib processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny",num_text=134,do_reduce_labels=True,) image_np=np.random.randint(0,255,(3,512,512)) #segmentation_maps only have elements 0 and 1 segmentation_maps = torch.randint(0, 2, (image_np.shape[1], image_np.shape[2]), dtype=torch.long) inst2class={1: 4} raw_inputs=processor.image_processor([image_np], task_inputs=["panoptic"], segmentation_maps=[segmentation_maps], return_tensors="pt", instance_id_to_semantic_id=inst2class, do_reduce_labels=True, ignore_index=None) ``` #ERROR ``` E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py:419: FutureWarning: The `reduce_labels` argument is deprecated and will be removed in v4.27. Please use `do_reduce_labels` instead. 
warnings.warn( Traceback (most recent call last): File "E:\condaenv\yaogan\lib\site-packages\IPython\core\interactiveshell.py", line 3460, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-ed9733992fe8>", line 23, in <module> raw_inputs=processor.image_processor([image_np], File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 524, in __call__ return self.preprocess(images, task_inputs=task_inputs, segmentation_maps=segmentation_maps, **kwargs) File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 708, in preprocess encoded_inputs = self.encode_inputs( File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 962, in encode_inputs masks, classes = self.convert_segmentation_map_to_binary_masks( File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 516, in convert_segmentation_map_to_binary_masks return convert_segmentation_map_to_binary_masks( File "E:\condaenv\yaogan\lib\site-packages\transformers\models\oneformer\image_processing_oneformer.py", line 288, in convert_segmentation_map_to_binary_masks class_id = instance_id_to_semantic_id[label + 1 if reduce_labels else label] KeyError: 255 ``` This bug is caused by a **resize** function of OneFormerProcessor, which converts segmentation_maps to a PIL.Image and then back to an np.ndarray. After **resize**, the segmentation_maps contain the elements 0 and 255, so the bug arises. ### Expected behavior Fix this bug before releasing 4.27.0 as the stable version; transformers 4.26.0 does not have this bug.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22147/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22146
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22146/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22146/comments
https://api.github.com/repos/huggingface/transformers/issues/22146/events
https://github.com/huggingface/transformers/issues/22146
1,622,567,046
I_kwDOCUB6oc5gtmiG
22,146
Missing parameter settings in BLIP 2
{ "login": "Marcophono2", "id": 22599855, "node_id": "MDQ6VXNlcjIyNTk5ODU1", "avatar_url": "https://avatars.githubusercontent.com/u/22599855?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Marcophono2", "html_url": "https://github.com/Marcophono2", "followers_url": "https://api.github.com/users/Marcophono2/followers", "following_url": "https://api.github.com/users/Marcophono2/following{/other_user}", "gists_url": "https://api.github.com/users/Marcophono2/gists{/gist_id}", "starred_url": "https://api.github.com/users/Marcophono2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Marcophono2/subscriptions", "organizations_url": "https://api.github.com/users/Marcophono2/orgs", "repos_url": "https://api.github.com/users/Marcophono2/repos", "events_url": "https://api.github.com/users/Marcophono2/events{/privacy}", "received_events_url": "https://api.github.com/users/Marcophono2/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hey! Did you try playing with `generation_config` ? All the arguments that you are looking for can either be setup inside, or provided in the `generate` kwargs. Tempertature and penalty length are both availble 😉 not sure about nucleus sampling, but what you are looking for is probably [here](https://huggingface.co/docs/transformers/internal/generation_utils#utilities-for-generation) or [here](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/text_generation#transformers.GenerationMixin.generate). Tell me if you can't find what you were looking for! ", "@ArthurZucker , that sounds wonderful! I have no idea why I missed this at least a dozen times. :) I will try it out later today. Thank you very much!", "It's pretty hard for us to debug if there's no error message being given. :(\r\n\r\nAlso, BLIP-2 should support all arguments of the `generate` method, and there's no need to use the `with torch.device(\"cuda\")` context manager, as this might break the code. The `device_map` argument of the `from_pretrained` method will take care of placing everything on the appropriate device.\r\n\r\nRefer to the example code snippets shown at the bottom of the model cards like [this one](https://huggingface.co/Salesforce/blip2-flan-t5-xxl) on the hub regarding usage in 8bit.", "Thank you, @NielsRogge , I tried the example code as the very first one but had the same problem. To confirm (myself) I tried it again. Without setting a minimum length the inference is so fast that I wouldn't mind about the performance issue. But when adding a minimum length then this bottleneck is really annoying. I can confirm that the cpu usage, while inference, is exactly 100% for the inference job. Either 1 cpu thread (from 24) is at 100% or one is at 66% and a second one at 33%. It is caused by the 8 bit setting. Can't anyone confirm (or refute) my observations?", "Oh! I think I answered to the wrong comment or in the wrong thread. 
Or you answered in the wrong thread, @NielsRogge ! 😄 \r\nMay be you refered to my other thread https://github.com/huggingface/transformers/issues/22011 ?", "@Marcophono2 did you figure out nice settings to use? I also switched from BLIP codebase to using transformers version and the generated captions are not as good. There is a lot of repeating. I've tried with default, contrastive search, multinomial sampling, beam search, and diverse beam search and still haven't found settings that give consistent captions like the old BLIP library. ", "@pharmapsychotic Wow, Mr Clip-Interrogator! I love your tool and use it very often!\r\nUnfortunatelly I didn't find a solution for better control over BLIP2. Also I switched back from transformers to the native codebase since I realized that opt2.7b is working as good as the flan-t5-xxl (for me at least) and I am able to put it into my 4090 vram without needing a 8 bit conversion. The inference time is much shorter now, about 0.6 seconds, if using standard length. And now I have some more control over the settings, excluded length + senseful output. Meanwhile I think there is no really solution for it. The captions of the training sets are simply too small. The only thing I could imagine is to ask certain questions in a second step depending on the (short) standard output of BLIP2. Another \"workaround\" that I use meanwhile is to analyse an image additionally with CLIP related to pre-defined points of interest. Using feature extraction is a mighty tool for a lot of things here. 
For example to estimate the age of a person I use feature extraction + classification like\r\n\r\n`cls_namesA = [\"age of 1 year\",\"age of 2 years\",\"age of 3 years\",\"age of 4 years\",\"age of 5 years\",\"age of 6 years\",\"age of 7 years\",\"age of 8 years\",\"age of 9 years\",\"age of 10 years\",\"age of 11 years\",\"age of 12 years\",\"age of 13 years\",\"age of 14 years\",\"age of 15 years\",\"age of 16 years\",\"age of 17 years\",\"age of 18 years\",\"age of 19 years\",\"age of 20 years\",\"age of 21 years\",\"age of 22 years\",\"age of 23 years\",\"age of 24 years\",\"age of 25 years\",\"age of 26 years\",\"age of 27 years\",\"age of 28 years\",\"age of 29 years\",\"age of 30 years\",\"age of 31 years\",\"age of 32 years\",\"age of 33 years\",\"age of 34 years\",\"age of 35 years\",\"age of 36 years\",\"age of 37 years\",\"age of 38 years\",\"age of 39 years\",\"age of 40 years\",\"age of 41 years\",\"age of 42 years\",\"age of 43 years\",\"age of 44 years\",\"age of 45 years\",\"age of 46 years\",\"age of 47 years\",\"age of 48 years\",\"age of 49 years\",\"age of 50 years\",\"age of 51 years\",\"age of 52 years\",\"age of 53 years\",\"age of 54 years\",\"age of 55 years\",\"age of 56 years\",\"age of 57 years\",\"age of 58 years\",\"age of 59 years\",\"age of 60 years\",\"age of 61 years\",\"age of 62 years\",\"age of 63 years\",\"age of 64 years\",\"age of 65 years\",\"age of 66 years\",\"age of 67 years\",\"age of 68 years\",\"age of 69 years\",\"age of 70 years\",\"age of 71 years\",\"age of 72 years\",\"age of 73 years\",\"age of 74 years\",\"age of 75 years\",\"age of 76 years\",\"age of 77 years\",\"age of 78 years\",\"age of 79 years\",\"age of 80 years\",\"age of 81 years\",\"age of 82 years\",\"age of 83 years\",\"age of 84 years\",\"age of 85 years\",\"age of 86 years\",\"age of 87 years\",\"age of 88 years\",\"age of 89 years\",\"age of 90 years\",\"age of 91 years\",\"age of 92 years\",\"age of 93 years\",\"age of 94 years\",\"age of 95 
years\",\"age of 96 years\",\"age of 97 years\",\"age of 98 years\",\"age of 99 years\",\"age of 100 years\",\"age of 101 years\",\"age of 102 years\",\"age of 103 years\"]`\r\n\r\nwith a filtering and second classification in a second step.\r\n\r\nThat works extremly fast and well! Also for other points of interest. I found out that ViT-B-32 brings the best results.\r\n\r\n`modelC, vis_processors2, txt_processors2 = load_model_and_preprocess(\"clip_feature_extractor\", model_type=\"ViT-B-32\", is_eval=True, device=device)`\r\n\r\nBest regards\r\nMarc\r\n", "Thanks for reporting, we are looking into why this is the case. cc @gante ", "Hi, I wonder how should I do if I would like to generate multiple captions for each image?\r\n\r\nFor example, we could use \"use_nucleus_sampling\" in Lavis version of BLIP2 to accomplish that, but I haven't found a way in hugging face version of BLIP2.\r\n\r\ngenerated_text = model.generate(\r\n {\"image\": image},\r\n use_nucleus_sampling=True,\r\n num_captions=20\r\n )", "Oh yes one reason why results weren't the same was because you might have used different generation settings. Note that if you do `model.generate(**inputs)`, greedy decoding is used by default (which is the most simple form of generating text by taking the token with the highest probability at each time step). \r\n\r\nTo match the settings in the BLIP-2 repo, which uses beam search by default as seen [here](https://github.com/salesforce/LAVIS/blob/5ee63d688ba4cebff63acee04adaef2dee9af207/lavis/models/blip2_models/blip2_opt.py#L149), you can do `model.generate(**inputs, num_beams=5, max_new_tokens=30, repetition_penalty=1.0, length_penalty=1.0, temperature=1)`. To use nucleus sampling, you can do `model.generate(**inputs, do_sample=True, top_p=0.9)`", "I've had really good success with BLIP2 since it came out a couple months ago, and now am rebuilding my notebooks on transformers. 
However, being new to transformers, it would be nice having `num_captions` natively available, as it is this feature that makes captioning powerful on my end.", "Hi @rodrigo-barraza this is supported, just pass in `num_return_sequences` as argument to the `generate()` method.", "> Hi @rodrigo-barraza this is supported, just pass in `num_return_sequences` as argument to the `generate()` method.\r\n\r\nOh wow, amazing. Not sure how I missed that. Thanks a bunch! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,697
1,685
NONE
null
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-5.19.0-31-generic-x86_64-with-glibc2.36 - Python version: 3.10.6 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 2.0.0.dev20230209+cu118 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The following code is working but can not process parameters like nucleus sampling, length penalty or temperature as provided in the original prokect from Salesforce. (to test out at https://huggingface.co/spaces/Salesforce/BLIP2) ``` from transformers import Blip2Processor,AutoProcessor, Blip2ForConditionalGeneration processor3 = AutoProcessor.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map={'':torch.cuda.current_device()}) with torch.device("cuda"): model3 = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map={'':torch.cuda.current_device()}) raw_image = Image.open('UIDimgsages/x.jpg').convert('RGB') inputs = processor3(raw_image, return_tensors="pt").to(device, torch.float16) out = model3.generate(**inputs, max_length=64, min_length=12) blip2_output = processor3.decode(out[0], skip_special_tokens=True) print(blip2_output) ``` ### Expected behavior It should be possible to adjust all parameters which are given in the original BLIP 2 project. @ArthurZucker @amyeroberts Best regards Marc
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22146/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22145
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22145/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22145/comments
https://api.github.com/repos/huggingface/transformers/issues/22145/events
https://github.com/huggingface/transformers/pull/22145
1,622,462,277
PR_kwDOCUB6oc5L8fli
22,145
Update BridgeTowerForContrastiveLearning
{ "login": "abhiwand", "id": 12353176, "node_id": "MDQ6VXNlcjEyMzUzMTc2", "avatar_url": "https://avatars.githubusercontent.com/u/12353176?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhiwand", "html_url": "https://github.com/abhiwand", "followers_url": "https://api.github.com/users/abhiwand/followers", "following_url": "https://api.github.com/users/abhiwand/following{/other_user}", "gists_url": "https://api.github.com/users/abhiwand/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhiwand/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhiwand/subscriptions", "organizations_url": "https://api.github.com/users/abhiwand/orgs", "repos_url": "https://api.github.com/users/abhiwand/repos", "events_url": "https://api.github.com/users/abhiwand/events{/privacy}", "received_events_url": "https://api.github.com/users/abhiwand/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank @ydshieh and @amyeroberts for your suggestions. In the latest commit, we addressed all of your feedbacks except one that regards to tests inspired from Clip's tests.\r\n@ydshieh Regarding tests, yes, you are right, it will require more works. We plan to have another PR to improve tests (following your suggestion) soon, yet this is not the main purpose of this PR. We would love to have you help us working on this, we truly appreciate your offer to help. \r\nIt will be great if you can help to merge this PR without restructuring tests, and please feel free to contact/tag me if you would like me to cooperate on improving/restructuring tests. \r\nThanks", "Thank @amyeroberts for approving this PR.\r\n@ydshieh We have updated the 2 positions that you most recently suggested to change in the latest commit. Could you please review, approve, and merge the PR? \r\nWe are looking forward to having this PR merged soon. \r\nThanks a lot.\r\n ", "Thank you again, @tileintel and @abhiwand, for the work! Merge now 🚀 ! ", "> and please feel free to contact/tag me if you would like me to cooperate on improving/restructuring tests. Thanks\r\n\r\nHi @tileintel. If you are willing and have time to work on the tests part, it would be really great. But as I mentioned earlier, we understand this part is not always the main interest of community contributors: as long as the basic necessary tests are added, it is sufficient for a model addition PR. So I can work on the test restructuring unless you would like to do it :-). WDYT?", "> > and please feel free to contact/tag me if you would like me to cooperate on improving/restructuring tests. Thanks\r\n> \r\n> Hi @tileintel. If you are willing and have time to work on the tests part, it would be really great. 
But as I mentioned earlier, we understand this part is not always the main interest of community contributors: as long as the basic necessary tests are added, it is sufficient for a model addition PR. So I can work on the test restructuring unless you would like to do it :-). WDYT?\r\n\r\n@ydshieh thanks for pointing this out. We would really appreciate if you can work on restructuring tests. Thank you!", "> > > and please feel free to contact/tag me if you would like me to cooperate on improving/restructuring tests. Thanks\r\n> > \r\n> > \r\n> > Hi @tileintel. If you are willing and have time to work on the tests part, it would be really great. But as I mentioned earlier, we understand this part is not always the main interest of community contributors: as long as the basic necessary tests are added, it is sufficient for a model addition PR. So I can work on the test restructuring unless you would like to do it :-). WDYT?\r\n> \r\n> @ydshieh thanks for pointing this out. We would really appreciate if you can work on restructuring tests. Thank you!\r\n\r\nSure, thanks for letting me know 😊" ]
1,678
1,678
1,678
CONTRIBUTOR
null
# What does this PR do? 1. Use return_loss in BridgeTowerForContrastiveLearning 2. Update example in BridgeTowerForContrastiveLearning 3. Handles @amyeroberts suggestion from https://github.com/huggingface/transformers/pull/21964 to use smaller vocab_size for BridgeTowerTester <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? 
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @sgugger can you please help review this fix. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22145/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22145", "html_url": "https://github.com/huggingface/transformers/pull/22145", "diff_url": "https://github.com/huggingface/transformers/pull/22145.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22145.patch", "merged_at": 1678910079000 }
https://api.github.com/repos/huggingface/transformers/issues/22144
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22144/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22144/comments
https://api.github.com/repos/huggingface/transformers/issues/22144/events
https://github.com/huggingface/transformers/pull/22144
1,622,297,409
PR_kwDOCUB6oc5L76_p
22,144
[trainer] add `--optim adamw_torch_fused` for pt-2.0+
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Arg, indeed let's wait a bit to make it the default then and leave the default untouched for now.", "I made additional fixes, please see the OP for details. Of course, I'm open to suggestions to handle these multiple nuances differently.", "its give me the following error\r\n\r\n`ValueError: --optim adamw_torch_fused with --fp16 requires PyTorch>2.0`", "That's correct. the new fused version is broken for fp16/amp in pt-2.0. It's fixed in pt-nightly and will be fully available in 2.0.1 and/or 2.1.0." ]
1,678
1,679
1,678
CONTRIBUTOR
null
This PR implement the discussion of https://github.com/huggingface/transformers/issues/22141 to 1. add support for the fused version of torch's AdamW. via `--optim adamw_torch_fused` 2. due it being too new, untested and a known bug the fix of which didn't make it into pt-2.0.0 - I did not make `--optim adamw_torch_fused` the default for pt>=2.0 - but prepared a place holder to do that for pt-2.1 instead. 3. added an assert for `--fp16` as apparently it's buggy with fp16/AMP https://github.com/huggingface/transformers/issues/22141#issuecomment-1467013132 fixed in https://github.com/pytorch/pytorch/issues/95781 but didn't make it into pt-2.0 cut off - this should be fixed in pt-2.0.1. But should already work with pt-2.1.0 nightly - except I think it's broken still (reported on pytroch slack). **Bottom line: for pt-2.0 `--optim adamw_torch_fused` will become available for any use except `--fp16` which will automatically be re-enabled upon pt-2.0.1 release, which probably will happen a month later. 
And we want to give `--optim adamw_torch_fused` time before making it a default.** ## Quality and Speed Comparison Benchmarks: ``` PYTHONPATH=src CUDA_VISIBLE_DEVICES=0 python scripts/benchmark/trainer-benchmark.py --base-cmd ' \ examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \ --do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 32 \ --max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \ --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \ --source_prefix "translate English to Romanian: " --warmup_steps 50 \ --max_train_samples 20000 --dataloader_num_workers 2 --fp16 \ ' --target-metric-key train_samples_per_second --repeat-times 1 --variations '--optim adamw_torch_fused|--optim adamw_torch|--optim adamw_apex_fused' --report-metric-keys train_loss --base-variation '--optim adamw_torch' ``` and then adding `--bf16` and `--fp16` for the non-fp32 benchmarks. 
### bf16 | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 382.06 | 9 | 2.22 | | --optim adamw_torch | 350.22 | 0 | 2.22 | | --optim adamw_apex_fused | 386.81 | 10 | 2.22 | ### fp16 | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 389.41 | 0 | 2.66 | | --optim adamw_torch | 389.37 | 0 | 2.55 | | --optim adamw_apex_fused | 399.27 | 3 | 2.53 | it's easy to see fp16 is broken - bad loss ### fp32 | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 107.98 | 3 | 2.21 | | --optim adamw_torch | 105.14 | 0 | 2.21 | | --optim adamw_apex_fused | 108.20 | 3 | 2.21 | *** Setup: ``` Datetime : 2023-03-13 15:17:47 Software: transformers: 4.27.0.dev0 torch : 2.1.0.dev20230312+cu117 cuda : 11.7 python : 3.8.16 Hardware: 1 GPUs : NVIDIA A100 80GB PCIe, 79.21GB ``` Fixes: https://github.com/huggingface/transformers/issues/22141
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22144/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22144", "html_url": "https://github.com/huggingface/transformers/pull/22144", "diff_url": "https://github.com/huggingface/transformers/pull/22144.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22144.patch", "merged_at": 1678814523000 }
https://api.github.com/repos/huggingface/transformers/issues/22143
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22143/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22143/comments
https://api.github.com/repos/huggingface/transformers/issues/22143/events
https://github.com/huggingface/transformers/issues/22143
1,622,169,508
I_kwDOCUB6oc5gsFek
22,143
[trainer] bug in resume and gas>1
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Actually thought more about it this morning. The gradients accumulated before the save will be lost, so even if we save the `total_batched_samples` variable, we won't be able to resume training with the same gradients (they will be 0 instead of whatever was accumulated before the checkpoint).\r\n\r\nSo I think leaving the situation as is is okay, there is a tiny bit of training lost but it shouldn't impact convergence. And we should document somewhere that we do not guarantee checkpoints will not yield the exact same model using `save_strategy=\"epoch\"` in conjunction with gradient accumulation.", "oh, I wrongly assumed that they were saved. Yes, then it makes sense. There will be no miscalculation then, just some very minor intermediary results loss. I think it's all good.\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
CONTRIBUTOR
null
https://github.com/huggingface/transformers/pull/22098 fixed the issue with GAS>1 at the epoch boundary. The same bug still happens at the resume boundary, since `total_batched_samples` is currently reset to 0. So we need to save `total_batched_samples` and restore it from the saved value on resume.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22143/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22143/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22142
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22142/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22142/comments
https://api.github.com/repos/huggingface/transformers/issues/22142/events
https://github.com/huggingface/transformers/issues/22142
1,622,150,862
I_kwDOCUB6oc5gsA7O
22,142
fix url post pt-2.0 release
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[]
1,678
1,681
1,681
CONTRIBUTOR
null
change: https://pytorch.org/docs/2.0/generated/torch.compile.html?highlight=torch+compile#torch.compile to: https://pytorch.org/docs/stable/generated/torch.compile.html?highlight=torch+compile#torch.compile once the latter doc appears post pt-2.0 release for the trainer code here: https://github.com/huggingface/transformers/pull/22140 Actually it looks like all the good stuff is at https://pytorch.org/docs/master/dynamo/index.html - but again be wary of `/master`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22142/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22141
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22141/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22141/comments
https://api.github.com/repos/huggingface/transformers/issues/22141/events
https://github.com/huggingface/transformers/issues/22141
1,622,069,198
I_kwDOCUB6oc5grs_O
22,141
[Trainer] fused torch `AdamW` is added in 2.0
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "Good question! It might also revive the question of switching the optimizer from the HF implementation to the PyTorch one. Maybe we could add `adamw_fused_torch` as an option and then use for the default value of optim:\r\n- `adamw_hf` on PyTorch < 2.0 (as before)\r\n- `adamw_fused_torch` on PyTorch >= 2.0 so that users get the nice speed-up\r\n\r\nWhat do you think?", "You know me, I'm all for progress so I'd vote for your proposal +1.\r\n\r\nThe problem is that `adam_hf` != `adamw_*torch` algorithmically, so I will let you decide if you're OK with such a change of the default. I guess since `--adamw_hf` is still available - it's only a matter of communication to the community of the change of the default.\r\n\r\nIn a way perhaps pt-2.0 release allows us to change things as well.\r\n\r\n-----------\r\n\r\nNow practically let's implement `adamw_fused_torch`, then I can benchmark that it is faster while keeping loss not worse than `adamw_torch` and if all goes well then make it the default. Of course, it can be the same PR. or in 2 PRs\r\n\r\nI can work on it, unless you prefer to do it.\r\n", "Discussed with Lysandre and he's also fine changing the default with the PyTorch 2.0 release (we will highlight it in the release notes as a breaking change so users are aware). I think users upgrading to PyTorch 2.0 will expect some differences anyway. Also note that we have used the PyTorch AdamW in all the Accelerate example and no one raised an issue of convergence problems or differences with the Trainer examples in the past year.\r\n\r\nAs for the practicalities, go ahead with a PR if you want to do it. We just need to have it in main by tomorrow evening for the release :-) ", "I would like to add from the PyTorch side that fused AdamW is still in its nascent stage and has had recent fixes regarding grad scaling interaction on float16s which unfortunately were too recent to be included in PT 2.0 (https://github.com/pytorch/pytorch/pull/95847). 
If most hf models will not be using auto-mixed precision, this may not be an issue, but I did want to call out the risk and add a +1 for the safer \"add as an option for now so people can enroll for faster implementations, but don't make fused adamw the default yet\". ", "oh, thank you for the heads up, @janeyx99 - most models are using AMP. can the fix be pushed into 2.0.1?\r\n\r\nso should we not allow its use if AMP is used?\r\n\r\nedit: further discussion with Jane - only fp16/AMP is affected.\r\n\r\n---------------------\r\nOK, so @sgugger - we have to recall the plan on making it the default.\r\n\r\nI already implemented the default change in https://github.com/huggingface/transformers/pull/22144 so roll back and make an issue to switch to it in pt-2.0.1 or pt-2.1.0? but perhaps it's a good idea to let it steep for a few months anyway.\r\n\r\nand need to deal with f16/AMP too", "The issue I'm talking about is https://github.com/pytorch/pytorch/issues/95781, and the fix has already landed last week and will go into whatever next release we have. And the repro given in the issue was used with fp16/amp and interacts with grad scaler. I agree it may be wise to let the implementation bake in, but it would be great to get people trying it out to harden its implementation.", "Thanks for chiming in @janeyx99 . 
Let's wait a bit to change the default then!", "ok, so https://github.com/huggingface/transformers/pull/22144 adds `--optim adamw_torch_fused` - but will assert on fp16/AMP w/ pt-2.0.0 and is already programmed to allow it for pt-2.0.1.\r\n\r\nMaking it a default will happen at pt-2.1.0 release or later.", "I confirmed that the fp16 issue has been solved in today's nightly, the last fix was here https://github.com/pytorch/pytorch/pull/97415, discussed here https://github.com/pytorch/pytorch/issues/96755\r\n\r\nAs you can see the loss now matches for fp16/amp:\r\n\r\n| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |\r\n|:--------------------------|------------------------------------:|------------:|----------------:|\r\n| --optim adamw_torch_fused | 387.10 | 3 | 2.66 |\r\n| --optim adamw_torch | 377.61 | 0 | 2.66 |\r\n| --optim adamw_apex_fused | 389.49 | 3 | 2.66 |\r\n\r\nso our future proofing (allowing `--fp16` with `--adamw_torch_fused`) should work once pytorch makes a new release." ]
1,678
1,679
1,678
CONTRIBUTOR
null
### Feature request there is a faster pytorch version of AdamW, it's the fused one. it was added in Feb-23 and will be part of pt-2.0. `AdamW(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8…, fused=True).` there is also `Adam` (not `AdamW`) which also has the fused version since pt-1.13 but we don't expose this one. Now the question is this: should we add `--optim adamw_fused_torch` and allow it only for pt-2.0+ or silently switch `--optim adamw_torch` to the fused version when pt-2.0+ is used? cc: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22141/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22141/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22140
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22140/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22140/comments
https://api.github.com/repos/huggingface/transformers/issues/22140/events
https://github.com/huggingface/transformers/pull/22140
1,622,061,848
PR_kwDOCUB6oc5L7I5G
22,140
Remove backend check for torch.compile
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? Since many of the choices in the list of backends for `torch.compile` do not work, this PR removes any check on the backend selected and let PyTorch itself errors if not happy. This also cleans up a bit the integration and mark everything as experimental.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22140/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22140", "html_url": "https://github.com/huggingface/transformers/pull/22140", "diff_url": "https://github.com/huggingface/transformers/pull/22140.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22140.patch", "merged_at": 1678739641000 }
https://api.github.com/repos/huggingface/transformers/issues/22139
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22139/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22139/comments
https://api.github.com/repos/huggingface/transformers/issues/22139/events
https://github.com/huggingface/transformers/pull/22139
1,621,966,642
PR_kwDOCUB6oc5L61CL
22,139
Update configuration_align.py (projected_dim=640)
{ "login": "bishmdl76", "id": 68867214, "node_id": "MDQ6VXNlcjY4ODY3MjE0", "avatar_url": "https://avatars.githubusercontent.com/u/68867214?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bishmdl76", "html_url": "https://github.com/bishmdl76", "followers_url": "https://api.github.com/users/bishmdl76/followers", "following_url": "https://api.github.com/users/bishmdl76/following{/other_user}", "gists_url": "https://api.github.com/users/bishmdl76/gists{/gist_id}", "starred_url": "https://api.github.com/users/bishmdl76/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bishmdl76/subscriptions", "organizations_url": "https://api.github.com/users/bishmdl76/orgs", "repos_url": "https://api.github.com/users/bishmdl76/repos", "events_url": "https://api.github.com/users/bishmdl76/events{/privacy}", "received_events_url": "https://api.github.com/users/bishmdl76/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
updated projected_dim=640 in the argument section of the class AlignConfig
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22139/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22139", "html_url": "https://github.com/huggingface/transformers/pull/22139", "diff_url": "https://github.com/huggingface/transformers/pull/22139.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22139.patch", "merged_at": 1678731132000 }
https://api.github.com/repos/huggingface/transformers/issues/22138
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22138/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22138/comments
https://api.github.com/repos/huggingface/transformers/issues/22138/events
https://github.com/huggingface/transformers/pull/22138
1,621,694,281
PR_kwDOCUB6oc5L563g
22,138
Fix doc link for MGP-STR
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? This fixes the link to the doc.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22138/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22138", "html_url": "https://github.com/huggingface/transformers/pull/22138", "diff_url": "https://github.com/huggingface/transformers/pull/22138.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22138.patch", "merged_at": 1678721211000 }
https://api.github.com/repos/huggingface/transformers/issues/22137
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22137/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22137/comments
https://api.github.com/repos/huggingface/transformers/issues/22137/events
https://github.com/huggingface/transformers/issues/22137
1,621,682,507
I_kwDOCUB6oc5gqOlL
22,137
Return attention_mask in FeatureExtractionPipeline output
{ "login": "anruijian", "id": 115125339, "node_id": "U_kgDOBtysWw", "avatar_url": "https://avatars.githubusercontent.com/u/115125339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anruijian", "html_url": "https://github.com/anruijian", "followers_url": "https://api.github.com/users/anruijian/followers", "following_url": "https://api.github.com/users/anruijian/following{/other_user}", "gists_url": "https://api.github.com/users/anruijian/gists{/gist_id}", "starred_url": "https://api.github.com/users/anruijian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anruijian/subscriptions", "organizations_url": "https://api.github.com/users/anruijian/orgs", "repos_url": "https://api.github.com/users/anruijian/repos", "events_url": "https://api.github.com/users/anruijian/events{/privacy}", "received_events_url": "https://api.github.com/users/anruijian/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This doesn't seem like a use-case for the pipeline though. Since you want access to the process inputs, you should just used the tokenizer and the model directly.", "Your comment makes sense. As my goal aligns with the pipeline's main functionality, I think I will subclass `FeatureExtractionPipeline` and make small modifications to achieve my goal. Feel free to close the issue. Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
CONTRIBUTOR
null
### Feature request Return `attention_mask` as one output of the FeatureExtractionPipeline so that padding token embeddings can be ignored. ### Motivation **Who can help?** @Narsil When using the `FeatureExtractionPipeline` to generate sentence embeddings, the input to the pipeline processes a raw sentence with a tokenizer. The output of the pipeline is a tensor of shape `[1, seq_len, hidden_dim]`. If the input is padded, `seq_len` is equal to the `max_length` of the tokenizer or longest seq in the batch. However, when performing mean pooling of individual word embeddings to obtain the sentence embedding, one may want to use `attention_mask` in order to ignore the padding token embeddings (see the mean pooling example below). But, FeatureExtractionPipeline does not return `attention_mask` as part of its output. ```python #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1) sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9) return sum_embeddings / sum_mask ``` ### Your contribution I can submit a pull request to the issue if it sounds good to you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22137/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22136
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22136/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22136/comments
https://api.github.com/repos/huggingface/transformers/issues/22136/events
https://github.com/huggingface/transformers/pull/22136
1,621,596,268
PR_kwDOCUB6oc5L5lkb
22,136
Enforce same behavior as PyTorch 2.0 for older versions
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "hmm, that didn't work on DS side - perhaps they have an old pytorch - checking\r\n\r\n```\r\n2023-03-13T21:25:45.7750922Z Traceback (most recent call last):\r\n2023-03-13T21:25:45.7751317Z File \"/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/transformers/examples/pytorch/translation/run_translation.py\", line 664, in <module>\r\n2023-03-13T21:25:45.7751409Z main()\r\n2023-03-13T21:25:45.7751794Z File \"/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/transformers/examples/pytorch/translation/run_translation.py\", line 581, in main\r\n2023-03-13T21:25:45.7751970Z train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n2023-03-13T21:25:45.7752299Z File \"/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/transformers/src/transformers/trainer.py\", line 1631, in train\r\n2023-03-13T21:25:45.7752421Z return inner_training_loop(\r\n2023-03-13T21:25:45.7752790Z File \"/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/transformers/src/transformers/trainer.py\", line 1814, in _inner_training_loop\r\n2023-03-13T21:25:45.7752913Z model.zero_grad(set_to_none=True)\r\n2023-03-13T21:25:45.7753166Z TypeError: zero_grad() got an unexpected keyword argument 'set_to_none'\r\n```", "They are using torch==1.8.2 and it fails, which means that this change will break for any user with older pytorch versions - let me find out when `set_to_none` was added.\r\n\r\nUsing this as a test:\r\n```\r\npython -c 'import torch; m=torch.nn.Linear(1,1); m.zero_grad(set_to_none=True)'\r\n```" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? The default of `set_to_none` will change in PyTorch 2.0 because it is slightly better in terms of memory consumption. This PR uses it for all versions of PyTorch in the Trainer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22136/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22136", "html_url": "https://github.com/huggingface/transformers/pull/22136", "diff_url": "https://github.com/huggingface/transformers/pull/22136.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22136.patch", "merged_at": 1678737051000 }
https://api.github.com/repos/huggingface/transformers/issues/22135
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22135/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22135/comments
https://api.github.com/repos/huggingface/transformers/issues/22135/events
https://github.com/huggingface/transformers/pull/22135
1,621,592,947
PR_kwDOCUB6oc5L5k3S
22,135
Prepare daily CI for torch 2.0.0
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? ~~⚠️⚠️ Don't merge before I run it (again) ⚠️⚠️~~ This PR changes docker files / workflow files to use the upcoming torch `2.0.0`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22135/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22135", "html_url": "https://github.com/huggingface/transformers/pull/22135", "diff_url": "https://github.com/huggingface/transformers/pull/22135.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22135.patch", "merged_at": 1678742476000 }
https://api.github.com/repos/huggingface/transformers/issues/22133
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22133/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22133/comments
https://api.github.com/repos/huggingface/transformers/issues/22133/events
https://github.com/huggingface/transformers/pull/22133
1,621,499,615
PR_kwDOCUB6oc5L5Qol
22,133
[`Whiper`] add `get_input_embeddings` to `WhisperForAudioClassification`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yes I can confirm the script provided by the user:\r\n```python\r\nfrom transformers import WhisperFeatureExtractor\r\nfrom transformers import WhisperTokenizer\r\nfrom transformers import WhisperProcessor\r\nfrom transformers import WhisperForAudioClassification\r\n\r\n# Select CUDA device index\r\nimport os\r\n\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\r\nmodel_name_or_path = \"openai/whisper-small\"\r\n\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained(model_name_or_path)\r\ntokenizer = WhisperTokenizer.from_pretrained(model_name_or_path)\r\nprocessor = WhisperProcessor.from_pretrained(model_name_or_path)\r\n\r\nmodel = WhisperForAudioClassification.from_pretrained(model_name_or_path, load_in_8bit=True, device_map=\"auto\")\r\n\r\nfrom peft import prepare_model_for_int8_training\r\nmodel = prepare_model_for_int8_training(model)\r\n```\r\nwork perfect now with this fix!", "I am a bit surprised by the fix, as it is not an embedding layer, this kind of break the usages we have for `get_input_embeddings`, which are for example `_resize_token_embeddings` and `tie_weights` which are both incompatible with the Whisper encoder as it does not have an embedding layer. \r\nSo not really sure this is the way to go.\r\nAlso, the `test_model_common_attributes` has to be updated\r\n```python \r\n # WhisperEncoder has no inputs_embeds and thus the `get_input_embeddings` fn is not implemented\r\n def test_model_common_attributes(self):\r\n pass\r\n```\r\nas well as :\r\n```python \r\n # input embeds is meaningless for an encoder-only acoustic model\r\n def test_inputs_embeds(self):\r\n pass\r\n```\r\nin whisper tests\r\n", "Even if it's not text, the layer converts an input to a hidden size, so it's kind of like an embedding. 
Not sure if there is a better name for this function but it doesn't shock me to use this for Whisper, esp since peft needs a common API across models.\r\n\r\nAs for the potential problems in `_resize_token_embeddings` and `tie_weights` I would wait for a user to actually raise an issue on this before giving them more thought. We can always override and add specific error messages.\r\n\r\n", "@ArthurZucker Added the common tests and there was an issue when running whisper without `decoder_input_ids` and `decoder_embeds` instead that I fixed. Can you please double check ? Thanks" ]
1,678
1,678
1,678
CONTRIBUTOR
null
# What does this PR do? Fixes: https://github.com/huggingface/peft/issues/173 Fixes: https://github.com/huggingface/transformers/issues/22131 This PR adds `get_input_embeddings` method to `WhisperForAudioClassification` and `WhisperForConditionalGeneration` to avoid some issues such as [here](https://github.com/huggingface/peft/issues/173) In my understanding `get_input_embeddings` method should return the first module that converts the input to the first hidden states and not necessarily an `nn.Embedding` layer cc @ArthurZucker @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22133/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22133", "html_url": "https://github.com/huggingface/transformers/pull/22133", "diff_url": "https://github.com/huggingface/transformers/pull/22133.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22133.patch", "merged_at": 1678733161000 }
https://api.github.com/repos/huggingface/transformers/issues/22132
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22132/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22132/comments
https://api.github.com/repos/huggingface/transformers/issues/22132/events
https://github.com/huggingface/transformers/pull/22132
1,621,484,418
PR_kwDOCUB6oc5L5Naw
22,132
Zero-shot image classification task guide
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Related PR with images https://huggingface.co/datasets/huggingface/documentation-images/discussions/57", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,699
1,678
CONTRIBUTOR
null
This PR adds the inference task guide for zero-shot image classification. It adds examples of inference with a pipeline and manual inference.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22132/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22132", "html_url": "https://github.com/huggingface/transformers/pull/22132", "diff_url": "https://github.com/huggingface/transformers/pull/22132.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22132.patch", "merged_at": 1678719437000 }
https://api.github.com/repos/huggingface/transformers/issues/22131
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22131/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22131/comments
https://api.github.com/repos/huggingface/transformers/issues/22131/events
https://github.com/huggingface/transformers/issues/22131
1,621,478,801
I_kwDOCUB6oc5gpc2R
22,131
WhisperForAudioClassification can't be prepared for int8 training
{ "login": "Nikhil-Paleti", "id": 68870951, "node_id": "MDQ6VXNlcjY4ODcwOTUx", "avatar_url": "https://avatars.githubusercontent.com/u/68870951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nikhil-Paleti", "html_url": "https://github.com/Nikhil-Paleti", "followers_url": "https://api.github.com/users/Nikhil-Paleti/followers", "following_url": "https://api.github.com/users/Nikhil-Paleti/following{/other_user}", "gists_url": "https://api.github.com/users/Nikhil-Paleti/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nikhil-Paleti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nikhil-Paleti/subscriptions", "organizations_url": "https://api.github.com/users/Nikhil-Paleti/orgs", "repos_url": "https://api.github.com/users/Nikhil-Paleti/repos", "events_url": "https://api.github.com/users/Nikhil-Paleti/events{/privacy}", "received_events_url": "https://api.github.com/users/Nikhil-Paleti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada @pacman100 ", "https://github.com/huggingface/transformers/pull/22133 should fix the issue" ]
1,678
1,678
1,678
NONE
null
### System Info System: Google Colab Pro latest version of transformers installed through: !pip install -q git+https://github.com/huggingface/transformers.git@main ### Who can help? The code that I used is: ``` from transformers import WhisperFeatureExtractor from transformers import WhisperTokenizer from transformers import WhisperProcessor from transformers import WhisperForAudioClassification # Select CUDA device index import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" model_name_or_path = "openai/whisper-small" feature_extractor = WhisperFeatureExtractor.from_pretrained(model_name_or_path) tokenizer = WhisperTokenizer.from_pretrained(model_name_or_path) processor = WhisperProcessor.from_pretrained(model_name_or_path) model = WhisperForAudioClassification.from_pretrained(model_name_or_path, load_in_8bit=True, device_map="auto" , num_labels=2, label2id=label2id, id2label=id2label) from peft import prepare_model_for_int8_training model = prepare_model_for_int8_training(model) ``` and the error I got is : ![image](https://user-images.githubusercontent.com/68870951/224712923-41d626a7-e326-4db4-aca4-9fda26ee0992.png) @sanchit-gandhi ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Open colab 2. Install latest version of transformers and peft ``` !pip install -q git+https://github.com/huggingface/transformers.git@main !pip install git+https://github.com/huggingface/peft.git@main ``` 3. 
use the code ``` from transformers import WhisperFeatureExtractor from transformers import WhisperTokenizer from transformers import WhisperProcessor from transformers import WhisperForAudioClassification # Select CUDA device index import os os.environ["CUDA_VISIBLE_DEVICES"] = "0" model_name_or_path = "openai/whisper-small" feature_extractor = WhisperFeatureExtractor.from_pretrained(model_name_or_path) tokenizer = WhisperTokenizer.from_pretrained(model_name_or_path) processor = WhisperProcessor.from_pretrained(model_name_or_path) model = WhisperForAudioClassification.from_pretrained(model_name_or_path, load_in_8bit=True, device_map="auto" , num_labels=2, label2id=label2id, id2label=id2label) from peft import prepare_model_for_int8_training model = prepare_model_for_int8_training(model) ``` ### Expected behavior Expected to get a model which is prepared for int 8 training
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22131/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22130
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22130/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22130/comments
https://api.github.com/repos/huggingface/transformers/issues/22130/events
https://github.com/huggingface/transformers/pull/22130
1,621,478,433
PR_kwDOCUB6oc5L5MIo
22,130
Fix gradient checkpointing bug in LongT5
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No problem! Yeah I added it because I thought it was incorrectly omitted from LongT5Stack. Should it be removed?", "_The documentation is not available anymore as the PR was closed or merged._", "I think your changes are fine and it was something we forgot to add on our side at the first place ! " ]
1,678
1,678
1,678
CONTRIBUTOR
null
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22130/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22130", "html_url": "https://github.com/huggingface/transformers/pull/22130", "diff_url": "https://github.com/huggingface/transformers/pull/22130.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22130.patch", "merged_at": 1678716379000 }
https://api.github.com/repos/huggingface/transformers/issues/22129
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22129/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22129/comments
https://api.github.com/repos/huggingface/transformers/issues/22129/events
https://github.com/huggingface/transformers/pull/22129
1,621,454,635
PR_kwDOCUB6oc5L5G_p
22,129
Fix gradient checkpointing bug in xmod
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22129/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22129", "html_url": "https://github.com/huggingface/transformers/pull/22129", "diff_url": "https://github.com/huggingface/transformers/pull/22129.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22129.patch", "merged_at": 1678716312000 }
https://api.github.com/repos/huggingface/transformers/issues/22128
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22128/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22128/comments
https://api.github.com/repos/huggingface/transformers/issues/22128/events
https://github.com/huggingface/transformers/pull/22128
1,621,450,459
PR_kwDOCUB6oc5L5GEh
22,128
Fix gradient checkpointing bug in xlm_roberta_xl
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22128/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22128", "html_url": "https://github.com/huggingface/transformers/pull/22128", "diff_url": "https://github.com/huggingface/transformers/pull/22128.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22128.patch", "merged_at": 1678715555000 }
https://api.github.com/repos/huggingface/transformers/issues/22127
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22127/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22127/comments
https://api.github.com/repos/huggingface/transformers/issues/22127/events
https://github.com/huggingface/transformers/pull/22127
1,621,430,140
PR_kwDOCUB6oc5L5Bph
22,127
Fix gradient checkpointing bug in xglm
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22127/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22127", "html_url": "https://github.com/huggingface/transformers/pull/22127", "diff_url": "https://github.com/huggingface/transformers/pull/22127.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22127.patch", "merged_at": 1678715363000 }
https://api.github.com/repos/huggingface/transformers/issues/22126
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22126/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22126/comments
https://api.github.com/repos/huggingface/transformers/issues/22126/events
https://github.com/huggingface/transformers/pull/22126
1,621,425,515
PR_kwDOCUB6oc5L5An4
22,126
Fix gradient checkpointing bug in trocr
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22126/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22126", "html_url": "https://github.com/huggingface/transformers/pull/22126", "diff_url": "https://github.com/huggingface/transformers/pull/22126.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22126.patch", "merged_at": 1678718748000 }
https://api.github.com/repos/huggingface/transformers/issues/22125
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22125/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22125/comments
https://api.github.com/repos/huggingface/transformers/issues/22125/events
https://github.com/huggingface/transformers/pull/22125
1,621,422,142
PR_kwDOCUB6oc5L4_5-
22,125
Fix gradient checkpointing bug in Trajectory Transformer
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing. Fixes Issue https://github.com/huggingface/transformers/issues/21737 cc @younesbelkada or @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22125/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22125", "html_url": "https://github.com/huggingface/transformers/pull/22125", "diff_url": "https://github.com/huggingface/transformers/pull/22125.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22125.patch", "merged_at": 1678715403000 }
https://api.github.com/repos/huggingface/transformers/issues/22124
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22124/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22124/comments
https://api.github.com/repos/huggingface/transformers/issues/22124/events
https://github.com/huggingface/transformers/pull/22124
1,621,350,328
PR_kwDOCUB6oc5L4wYH
22,124
[`Blip2`] skip accelerate test
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I am trained by @sgugger 's words and my prediction of the response to this PR is\r\n\r\nYes\r\n```\r\nthe test doesn't make sense for tiny models and triggers some undefined behaviors.\r\nProbably better to just skip it at this stage.\r\n```\r\nand\r\n```\r\nWe should have slow integration tests instead (on a regular-size model)\r\n```\r\n(but I am not sure if it's required to do this in the same PR, or just something we should change for all such tests in another PR)" ]
1,678
1,678
1,678
CONTRIBUTOR
null
# What does this PR do? This PR skips a test that is currently failing: https://github.com/huggingface/transformers/actions/runs/4395315343/jobs/7697061404 The tiny BLIP2 models uses T5 as a text decoder, that is itself having a parameter named `shared` that is tied with 3 other parameters of the model: `encoder.embed_tokens`, `decoder.embed_tokens`, `lm_head`. In some very specific usecases (small model, + small `max_memory`), the test hits some corner cases as `accelerate` does not support handling multiple tied weights yet: https://github.com/huggingface/accelerate/blob/37831808444e089a182f66713935d27c39a0cf2c/src/accelerate/utils/modeling.py#L232 & https://github.com/huggingface/accelerate/blob/37831808444e089a182f66713935d27c39a0cf2c/src/accelerate/utils/modeling.py#L566 Not that a similar test is also currently being skipped for T5: https://github.com/huggingface/transformers/blob/102b5ff4a813eea848bb82ff2f451e0f6b17b30c/tests/models/t5/test_modeling_t5.py#L689 As this usecase is very corner case and less likely to happen (most of BLIP2 models are large) and fixing this would require a lot of work on `accelerate`, let's skip this test as we did it for T5 cc @sgugger @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22124/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22124", "html_url": "https://github.com/huggingface/transformers/pull/22124", "diff_url": "https://github.com/huggingface/transformers/pull/22124.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22124.patch", "merged_at": 1678716202000 }
https://api.github.com/repos/huggingface/transformers/issues/22123
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22123/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22123/comments
https://api.github.com/repos/huggingface/transformers/issues/22123/events
https://github.com/huggingface/transformers/issues/22123
1,621,164,374
I_kwDOCUB6oc5goQFW
22,123
CUDA OOM when loading optimizer state dict
{ "login": "willxie", "id": 4821081, "node_id": "MDQ6VXNlcjQ4MjEwODE=", "avatar_url": "https://avatars.githubusercontent.com/u/4821081?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willxie", "html_url": "https://github.com/willxie", "followers_url": "https://api.github.com/users/willxie/followers", "following_url": "https://api.github.com/users/willxie/following{/other_user}", "gists_url": "https://api.github.com/users/willxie/gists{/gist_id}", "starred_url": "https://api.github.com/users/willxie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willxie/subscriptions", "organizations_url": "https://api.github.com/users/willxie/orgs", "repos_url": "https://api.github.com/users/willxie/repos", "events_url": "https://api.github.com/users/willxie/events{/privacy}", "received_events_url": "https://api.github.com/users/willxie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "I tried to find the reference to the issue mentioning a CPU OOM but couldn't find it. Do you have a link handY?\r\nWe could always load the weights on the CPU then move them which would be a tiny bit slower, but this just happens once. With models getting bigger and bigger it's probably makes sens to have this as a default behavior though.", "Sorry I lost track of the issue with the CPU OOM but it was related to GPU memory in aggregate (deepspeed iirc) is greater than CPU. \r\n\r\nhttps://github.com/huggingface/transformers/issues/3730#issuecomment-629563466 is also relevant. My understanding is that `self.args.device` is where the model lives which is usually `gpu` or `tpu`. This puts optimizer state state_dict loading also to device with limited memory. Is there any reason to not always put this into CPU? My understanding is that if the model is trained using GPU they will be implicitly copied over during optimization but avoids the setup OOM. " ]
1,678
1,678
1,678
NONE
null
### System Info I am finetuning GPT NEO 1.3B with 1 GPU and 24GB VRAM. From scratch, model weights load and train fine with <1GB of memory to spare. When loading from checkpoint, I run into issues loading optimizer states. https://github.com/huggingface/transformers/blob/04bfac83b793b757e7b33188f88eebe21ac65ef7/src/transformers/trainer.py#L2434-L2436 If I hard-code `map_location ='cpu'`, the OOM goes away and I am able to resume training. After some reading, this was caused by the optimizer state dict being associated with 'gpu:0' at the time of checkpoint saving. I also found another ticket adding device logic to `map_location=self.args.device` in the first place due to CPU OOM for the PR author. I was wondering if the interface needs to be change in order to accommodate both use cases. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction see above. Otherwise from using run_clm.py from the repo without deepspeed. ### Expected behavior Resume from checkpoint should fit in the same vram footprint as finetune from scratch.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22123/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22122
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22122/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22122/comments
https://api.github.com/repos/huggingface/transformers/issues/22122/events
https://github.com/huggingface/transformers/issues/22122
1,620,937,575
I_kwDOCUB6oc5gnYtn
22,122
device_map='auto' doesn't use MPS backend on Apple M2
{ "login": "srogatch", "id": 5251612, "node_id": "MDQ6VXNlcjUyNTE2MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5251612?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srogatch", "html_url": "https://github.com/srogatch", "followers_url": "https://api.github.com/users/srogatch/followers", "following_url": "https://api.github.com/users/srogatch/following{/other_user}", "gists_url": "https://api.github.com/users/srogatch/gists{/gist_id}", "starred_url": "https://api.github.com/users/srogatch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srogatch/subscriptions", "organizations_url": "https://api.github.com/users/srogatch/orgs", "repos_url": "https://api.github.com/users/srogatch/repos", "events_url": "https://api.github.com/users/srogatch/events{/privacy}", "received_events_url": "https://api.github.com/users/srogatch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "MPS devices are indeed not supported with `device_map=\"auto\"` yet. As a workaround you should just move your model to that device manually.", "> MPS devices are indeed not supported with `device_map=\"auto\"` yet. As a workaround you should just move your model to that device manually.\r\n\r\nHow to move the model to that device manually? Will I lose CPU and disk offload in that case?", "Yes, CPU and disk offload are not supported with the MPS device either for now. To move your model to the MPS device, you just do `model = model.to(\"mps\")`", "Manually moving a model to MPS does not seem to work. Below is a minimal example:\r\n\r\n```\r\nPython 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:26:08) [Clang 14.0.6 ]\r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 8.11.0 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: from transformers import T5ForConditionalGeneration, AutoTokenizer\r\n\r\nIn [2]: tokenizer = AutoTokenizer.from_pretrained('t5-small', model_max_length=512)\r\n ...: model = T5ForConditionalGeneration.from_pretrained('t5-small', device_map='auto')\r\n\r\nIn [3]: model.device\r\nOut[3]: device(type='cpu')\r\n\r\nIn [4]: input_string = 'translate English to German: The house is wonderful.\"'\r\n ...: inputs = tokenizer(input_string, return_tensors='pt').input_ids\r\n ...: outputs = model.generate(inputs, max_length=200)\r\n ...: print(tokenizer.decode(outputs[0]))\r\n<pad> Das Haus ist wunderbar.\"</s>\r\n\r\nIn [5]: model = model.to('mps')\r\n\r\nIn [6]: model.device\r\nOut[6]: device(type='mps', index=0)\r\n\r\nIn [7]: inputs = inputs.to('mps')\r\n ...: outputs = model.generate(inputs, max_length=200)\r\n ...: print(tokenizer.decode(outputs[0]))\r\n\r\nRuntimeError: Placeholder storage has not been allocated on MPS device!\r\n```\r\n\r\nTransformers version: 4.27.1\r\nAccelerate version: 0.17.1\r\nTorch version: 2.0.0\r\nMacOS 13.2.1 (22D68)", "Yes you need to load it without the `device_map=\"auto\"`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi,\r\nI am on M2 MAX CHIP MACOS that has 12 CPU, 38 GPU. I am having issue with ever modification of this code snippet. Would you please tell me how I can correct it?\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nimport transformers\r\nimport torch\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"tiiuae/falcon-40b-instruct\", trust_remote_code=True)\r\nmodel = model.to('mps')\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model)\r\npipeline = transformers.pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n torch_dtype=torch.bfloat16,\r\n trust_remote_code=True,\r\n # device = torch.device('mps'),\r\n # device_map=\"auto\",\r\n)", "> Hi, I am on M2 MAX CHIP MACOS that has 12 CPU, 38 GPU. I am having issue with ever modification of this code snippet. Would you please tell me how I can correct it?\r\n> \r\n> from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(\"tiiuae/falcon-40b-instruct\", trust_remote_code=True) model = model.to('mps')\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( \"text-generation\", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, # device = torch.device('mps'), # device_map=\"auto\", )\r\n\r\nI also meet the problem.", "Any solution yet?", "Should the issue at least stay open as a feature request? This would be very nice to have. ", "THis is solved in the latest version of Accelerate (cc @SunMarc )", "> THis is solved in the latest version of Accelerate (cc @SunMarc )\r\n\r\n@sgugger Is this fix included in the latest https://github.com/huggingface/transformers/releases/tag/v4.30.2 release?", "It's in Accelerate, not Transformers. It will be in the version of Accelerate released today.", "Any solution for this issue? How can we ask the model to use MPS instead of CPU?", "Hi @moradisina, since the version [v0.20.0:](https://github.com/huggingface/accelerate/releases/tag/v0.20.0) of accelerate, `mps` device is supported with `device_map=\"auto\"`. It should automatically map your model to `mps` device if you are using a M2 chip .\r\n```py\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM(\"facebook/opt-350m\",device_map=\"auto\")\r\n# should return {\"\":\"mps\"}\r\nprint(model.hf_device_map)\r\n```\r\n\r\nYou can also do it manually by setting `device_map={\"\":\"mps\"}`:\r\n```py\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM(\"facebook/opt-350m\",device_map={\"\":\"mps\"})\r\n# should return {\"\":\"mps\"}\r\nprint(model.hf_device_map)\r\n```" ]
1,678
1,695
1,682
NONE
null
With the following program: ``` import os import time import readline import textwrap os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" os.environ["HF_ENDPOINT"] = "https://huggingface.co" os.environ["ACCELERATE_USE_MPS_DEVICE"] = "True" import torch from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig from accelerate import init_empty_weights, load_checkpoint_and_dispatch, Accelerator def main(): print('Pytorch version', torch.__version__) if torch.backends.mps.is_available(): active_device = torch.device('mps') elif torch.cuda.is_available(): active_device = torch.device('cuda', 0) else: active_device = torch.device('cpu') accelerator = Accelerator() print('Accelerator device: ', accelerator.device) checkpoint = "bigscience/bloom" tm_start = time.time() tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained( checkpoint, device_map="auto", offload_folder="offload", offload_state_dict=True, ) tm_end = time.time() print(f'Loaded in {tm_end - tm_start} seconds.') while True: prompt = input('Request to LLM: ') tm_start = time.time() inputs = tokenizer.encode(prompt, return_tensors="pt").to(active_device) tm_end = time.time() print(f'Encoded in {tm_end - tm_start} seconds.') tm_start = time.time() outputs = model.generate( inputs, max_new_tokens=2048, pad_token_id=tokenizer.eos_token_id, repetition_penalty=1.2) tm_end = time.time() print(f'Generated in {tm_end - tm_start} seconds.') tm_start = time.time() response = tokenizer.decode(outputs[0]) tm_end = time.time() print(f'Decoded in {tm_end - tm_start} seconds.') print("\n".join(textwrap.wrap(response, width=120))) if __name__ == '__main__': main() ``` the cpu backend is used by transformers/accelerate, even though it prints `Accelerator device: mps`. I know this because it's slow (below NVMe bandwidth) and the following is printed: ``` /Users/serge/PycharmProjects/macLLM/venv/lib/python3.9/site-packages/transformers/generation/utils.py:1359: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on mps, whereas the model is on cpu. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cpu') before running `.generate()`. warnings.warn( ``` Environment: transformers v4.26.1 accelerate v0.17.0 PyTorch v1.13.1 MacOS 13.2.1 (22D68) Python 3.9.6
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22122/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22121
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22121/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22121/comments
https://api.github.com/repos/huggingface/transformers/issues/22121/events
https://github.com/huggingface/transformers/issues/22121
1,620,907,145
I_kwDOCUB6oc5gnRSJ
22,121
Can not init BertTokenizerFast
{ "login": "wqh17101", "id": 26429138, "node_id": "MDQ6VXNlcjI2NDI5MTM4", "avatar_url": "https://avatars.githubusercontent.com/u/26429138?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wqh17101", "html_url": "https://github.com/wqh17101", "followers_url": "https://api.github.com/users/wqh17101/followers", "following_url": "https://api.github.com/users/wqh17101/following{/other_user}", "gists_url": "https://api.github.com/users/wqh17101/gists{/gist_id}", "starred_url": "https://api.github.com/users/wqh17101/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wqh17101/subscriptions", "organizations_url": "https://api.github.com/users/wqh17101/orgs", "repos_url": "https://api.github.com/users/wqh17101/repos", "events_url": "https://api.github.com/users/wqh17101/events{/privacy}", "received_events_url": "https://api.github.com/users/wqh17101/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure I understand, to use `TokenizerFast` you need the tokenizers llibrary. \r\nEither use:\r\n```python\r\n>>> from transformers import BertTokenizer\r\n>>> BertTokenizer \r\n```\r\nOr run `pip install tokenizers`. \r\nThis is not an issue but expected.", "What is the tokenizers llibrary? and how to install them? @ArthurZucker \r\nI have run `pip install tokenizers` to install tokenizers", "The `tokenizers` library is available [here](https://github.com/huggingface/tokenizers), it implements the backend of fast tokenizers in rust. If it is installed you should be able to import without any issues! Make sure it was installed in the environment you are using. ", "@ArthurZucker \r\n```\r\n[root@localhost home]# pip list |grep tokenizers\r\ntokenizers 0.13.2\r\n```\r\n```\r\n[root@localhost home]# python -c \"import tokenizers\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'tokenizers'\r\n```\r\nWhy @ArthurZucker ", "Sorry , it looks like the problem of my conda env" ]
1,678
1,678
1,678
NONE
null
### System Info Linux python3.7 tokenizers 0.12.1 transformers 4.26.1 @ArthurZucker ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` >>> from transformers import BertTokenizerFast >>> BertTokenizerFast() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/anaconda3//python3.7/site-packages/transformers/utils/dummy_tokenizers_objects.py", line 31, in __init__ requires_backends(self, ["tokenizers"]) File "/home/anaconda3/python3.7/site-packages/transformers/utils/import_utils.py", line 935, in requires_backends raise ImportError("".join(failed)) ImportError: BertTokenizerFast requires the 🤗 Tokenizers library but it was not found in your environment. You can install it with: pip install tokenizers In a notebook or a colab, you can install it by executing a cell with !pip install tokenizers ``` ### Expected behavior work normally.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22121/timeline
completed
null
null
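The conda-environment mismatch discussed in the comments of the record above (pip lists `tokenizers`, yet the interpreter cannot import it) can be checked with a short sketch. `diagnose_missing_backend` is a hypothetical helper written for illustration, not part of transformers; the key point is that `pip` and the running interpreter may belong to different environments, so the fix is to install with `sys.executable -m pip`.

```python
import importlib.util
import sys

def diagnose_missing_backend(package: str) -> str:
    """Explain why `package` may fail to import even though pip lists it.

    `pip list` can show a package installed into a *different*
    interpreter than the one currently running (e.g. base install vs.
    an activated conda env). Checking importability from this exact
    interpreter removes the ambiguity.
    """
    spec = importlib.util.find_spec(package)
    if spec is None:
        return (
            f"'{package}' is not importable from {sys.executable}; "
            f"install it into this interpreter with: "
            f"{sys.executable} -m pip install {package}"
        )
    return f"'{package}' found at {spec.origin}"

# 'json' ships with the standard library, so it is always importable:
print(diagnose_missing_backend("json"))
```

Running `python -m pip install tokenizers` (with the same `python` that raises the `ImportError`) is the environment-proof variant of the `pip install tokenizers` suggested in the thread.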
https://api.github.com/repos/huggingface/transformers/issues/22120
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22120/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22120/comments
https://api.github.com/repos/huggingface/transformers/issues/22120/events
https://github.com/huggingface/transformers/issues/22120
1,620,757,390
I_kwDOCUB6oc5gmsuO
22,120
`No such file or directory` when setting `cache_dir`
{ "login": "rcalland", "id": 10794485, "node_id": "MDQ6VXNlcjEwNzk0NDg1", "avatar_url": "https://avatars.githubusercontent.com/u/10794485?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcalland", "html_url": "https://github.com/rcalland", "followers_url": "https://api.github.com/users/rcalland/followers", "following_url": "https://api.github.com/users/rcalland/following{/other_user}", "gists_url": "https://api.github.com/users/rcalland/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcalland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcalland/subscriptions", "organizations_url": "https://api.github.com/users/rcalland/orgs", "repos_url": "https://api.github.com/users/rcalland/repos", "events_url": "https://api.github.com/users/rcalland/events{/privacy}", "received_events_url": "https://api.github.com/users/rcalland/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just tried your code sample and it runs without issue on both main and 4.26.1. Are you sure `\"cache_model\"` is in a folder you have write access to?", "hi @sgugger, you are correct, I just made a fresh virtualenv and my snippet indeed does work. However, it doesn't work in my dev environment, nor in my projects CI, so I suspect theres an issue with a dependency somewhere?\r\n\r\nI dug into what is happening inside `cache_model` in both environments, and found a clue:\r\n\r\nIn my new virtualenv:\r\n```\r\nlrwxr-xr-x 1 richard staff 136B Mar 14 14:38 preprocessor_config.json -> /Users/richard/Projects/huggingface_bug/cache_model/models--openai--clip-vit-base-patch32/blobs/5a12a1eb250987a4eee0e3e7d7338c4b22724be1\r\n```\r\n\r\nin my dev environment:\r\n```\r\nlrwxr-xr-x 1 richard staff 96B Mar 14 15:05 preprocessor_config.json -> cache_model/models--openai--clip-vit-base-patch32/blobs/5a12a1eb250987a4eee0e3e7d7338c4b22724be1\r\n```\r\n\r\nyou can see that the former creates a symlink to an absolute path, whereas the latter uses a relative path, which may explain why it cannot find the actual file.\r\n\r\nDo you have any ideas what could be causing this behaviour?", "Might be a different version of `huggingface_hub`?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.10.9 - Huggingface_hub version: 0.13.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction **Note that `cache_model` directory exists**: ``` from transformers import CLIPProcessor, CLIPModel processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32", cache_dir="cache_model") inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=None, return_tensors="np", padding=True) print(inputs) ``` ### Expected behavior in recent versions of `transformers`, the code snippet above produces the following error: ``` FileNotFoundError: [Errno 2] No such file or directory: 'cache_model/models--openai--clip-vit-base-patch32/snapshots/e6a30b603a447e251fdaca1c3056b2a16cdfebeb/preprocessor_config.json' ``` whereas with `transformers==4.20.1`, the snippet successfully runs
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22120/timeline
completed
null
null
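The absolute-vs-relative symlink difference observed in the `cache_dir` record above can be reproduced in isolation. The directory names below are a simplified stand-in for the hub cache layout, not its real paths; the point is that a relative symlink target resolves relative to the directory *containing the link*, so a target like `cache_model/...` only resolves if it happens to exist next to the snapshot directory.

```python
import os
import tempfile

# Build a tiny mock cache: blobs/ holds the file, snapshots/ holds links.
with tempfile.TemporaryDirectory() as root:
    blob = os.path.join(root, "cache_model", "blobs")
    snap = os.path.join(root, "cache_model", "snapshots")
    os.makedirs(blob)
    os.makedirs(snap)
    target = os.path.join(blob, "config.json")
    with open(target, "w") as f:
        f.write("{}")

    abs_link = os.path.join(snap, "abs.json")
    rel_link = os.path.join(snap, "rel.json")
    # Absolute target: resolvable from anywhere.
    os.symlink(target, abs_link)
    # Relative target: resolved relative to snap/, where it does not exist.
    os.symlink("cache_model/blobs/config.json", rel_link)

    abs_ok = os.path.exists(abs_link)   # True
    rel_ok = os.path.exists(rel_link)   # False -> FileNotFoundError on open
    print(abs_ok, rel_ok)
```

This matches the `ls -l` output in the thread: the working environment created an absolute link target, the failing one a relative target that dangles, which is consistent with the suggestion that a different `huggingface_hub` version produced the links.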
https://api.github.com/repos/huggingface/transformers/issues/22119
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22119/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22119/comments
https://api.github.com/repos/huggingface/transformers/issues/22119/events
https://github.com/huggingface/transformers/issues/22119
1,620,576,694
I_kwDOCUB6oc5gmAm2
22,119
Trainer removes columns before transform is called
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ah, I've re-read through the parameter lists for everything and found `remove_unused_columns=False` in `TrainingArguments`. Setting this resolves the issue, so I guess this won't be considered a bug. I think there's room for improvement in the UX though, perhaps a warning \"After removing unused columns, there were no columns left, this is probably not what you meant to do, right?\"\r\n\r\nLike `if set(dataset.column_names) == set(ignored_columns)`...\r\n", "We could add such a warning yes. Do you want to take a stab at a PR?", "Sorry, I've got a full plate at the moment.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "\r\n> We could add such a warning yes. Do you want to take a stab at a PR?\r\n\r\nJust ran into this issue, would like to create a PR for creating a warning about no columns being left during the `_remove_unused_columns()` call " ]
1,678
1,700
1,682
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have a text dataset and am attempting to apply a transform to tokenize the contents. I'm using: [with_transform()](https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.with_transform) for this and it works fine: the transform removes the `text` column and adds the `input_ids` and `attention_mask` columns. The problem is when combining this with the `Trainer`, it runs `_remove_unused_columns()` _before_ calling the transform, which has the effect of removing the whole dataset, and I get an error as it tries to read the first batch: ``` IndexError: Invalid key: 664 is out of bounds for size 0 ``` ### Expected behavior I should be able to combine `Dataset.with_transform()` and `Trainer`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22119/timeline
completed
null
null
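The warning proposed in the comments of the record above (`if set(dataset.column_names) == set(ignored_columns)`) can be sketched without the actual `Trainer`. `check_remaining_columns` is a hypothetical helper, not the real `_remove_unused_columns()` implementation; it only shows the condition under which dropping columns not accepted by the model's `forward()` would empty the dataset.

```python
def check_remaining_columns(column_names, model_forward_params):
    """Return a warning string if removing unused columns would leave
    the dataset empty, which typically happens when a with_transform()
    callback is expected to create the model inputs on the fly.
    Hypothetical sketch, not the actual Trainer code."""
    ignored = [c for c in column_names if c not in model_forward_params]
    if set(column_names) == set(ignored):
        return (
            "All dataset columns would be removed; if a transform is "
            "supposed to produce the model inputs, pass "
            "remove_unused_columns=False in TrainingArguments."
        )
    return None

# A raw text dataset before an on-the-fly tokenizing transform:
msg = check_remaining_columns(["text"], {"input_ids", "attention_mask", "labels"})
print(msg)
```

With such a check in place, the opaque `IndexError: Invalid key: 664 is out of bounds for size 0` from the issue would be preceded by an actionable hint.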
https://api.github.com/repos/huggingface/transformers/issues/22118
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22118/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22118/comments
https://api.github.com/repos/huggingface/transformers/issues/22118/events
https://github.com/huggingface/transformers/issues/22118
1,620,502,739
I_kwDOCUB6oc5glujT
22,118
ImportError: cannot import name 'AlignModel' from 'transformers'
{ "login": "bishmdl76", "id": 68867214, "node_id": "MDQ6VXNlcjY4ODY3MjE0", "avatar_url": "https://avatars.githubusercontent.com/u/68867214?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bishmdl76", "html_url": "https://github.com/bishmdl76", "followers_url": "https://api.github.com/users/bishmdl76/followers", "following_url": "https://api.github.com/users/bishmdl76/following{/other_user}", "gists_url": "https://api.github.com/users/bishmdl76/gists{/gist_id}", "starred_url": "https://api.github.com/users/bishmdl76/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bishmdl76/subscriptions", "organizations_url": "https://api.github.com/users/bishmdl76/orgs", "repos_url": "https://api.github.com/users/bishmdl76/repos", "events_url": "https://api.github.com/users/bishmdl76/events{/privacy}", "received_events_url": "https://api.github.com/users/bishmdl76/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The model is not in Transformers 4.26.1, you need to [install from source](https://huggingface.co/docs/transformers/installation#install-from-source) to get access to it.", "Thanks!" ]
1,678
1,678
1,678
CONTRIBUTOR
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AlignModel, AlignProcessor ### Expected behavior The model should be imported without an ImportError
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22118/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22117
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22117/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22117/comments
https://api.github.com/repos/huggingface/transformers/issues/22117/events
https://github.com/huggingface/transformers/issues/22117
1,620,427,570
I_kwDOCUB6oc5glcMy
22,117
wav2vec2 with lm on persian doesn't seem to work
{ "login": "Bagherihaali", "id": 20620634, "node_id": "MDQ6VXNlcjIwNjIwNjM0", "avatar_url": "https://avatars.githubusercontent.com/u/20620634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bagherihaali", "html_url": "https://github.com/Bagherihaali", "followers_url": "https://api.github.com/users/Bagherihaali/followers", "following_url": "https://api.github.com/users/Bagherihaali/following{/other_user}", "gists_url": "https://api.github.com/users/Bagherihaali/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bagherihaali/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bagherihaali/subscriptions", "organizations_url": "https://api.github.com/users/Bagherihaali/orgs", "repos_url": "https://api.github.com/users/Bagherihaali/repos", "events_url": "https://api.github.com/users/Bagherihaali/events{/privacy}", "received_events_url": "https://api.github.com/users/Bagherihaali/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "any help?! @sanchit-gandhi ", "Hey @tntchack,\r\n\r\nNote the adding a LM does not always improve model performance it really depends on what data you train your LM-ngram on. \r\nWhile the LM-ngram will not make it impossible for the model to generate spelling errors / unigram errors it should greatly improve spelling errors. If it does not it is a sign that your LM has not been trained very well. \r\n\r\nA couple of tips from my side:\r\n- 1.) Could you try to also contact the Arabic forum to see if someone can help there? https://discuss.huggingface.co/t/arabic-nlp-introductions/3715\r\n- 2.) We have some strong Wav2Vec2-LMs in Arabic on the Hub, e.g. here: https://huggingface.co/kingabzpro/wav2vec2-large-xlsr-300-arabic . Could you try evaluating your model with the LM from this repo instead and see if it improves your perf? You can just load the processor from this repo and the model from your repo", "Hi @patrickvonplaten thanks for your response.\r\n\r\nI will have a look on those links and will try Arabic LM with my model", "Hey @patrickvonplaten,\r\nquick questions regarding N-gram:\r\n\r\n1. Does N-gram language model for wav2vec2 works upto order of N or sticks to fixed order of N?\r\n2. After the language model is created using the `kenlm` package there's a `unigrams.txt` file inside the directory. Does that files is used by the N-gram model to find appropriate weights while doing the beam search using ctc-decoder?", "$n$-grams typically stay fixed to order $n$ (see https://web.stanford.edu/~jurafsky/slp3/3.pdf). The $n$-gram language model implicitly uses the [pyctcdecode package](https://github.com/kensho-technologies/pyctcdecode) under the hood - the `unigrams.txt` file is used to hold unigram info (see [pyctcdecode/language_model.py#L362-L369](https://github.com/kensho-technologies/pyctcdecode/blob/afecb67622c1395b85b6a55d2902a780c0c927d4/pyctcdecode/language_model.py#L362-L369))." ]
1,678
1,683
1,683
NONE
null
### System Info transformers version : 4.26.1 Platform: colab Python version: 3.9.16 PyTorch version (GPU?): 1.13.1+cu116 (False) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: No (NA) Using distributed or parallel set-up in script?: No ### Who can help? @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction i have build a wav2vec2 with lm model followig this blog post [https://huggingface.co/blog/wav2vec2-with-ngram](https://huggingface.co/blog/wav2vec2-with-ngram) after building i have tested it on my own data and compared it with wav2vec2 without lm but i can't see any improvement in transcription. result with lm : ``` 'سالاموروزوخیر خدمتو دوستان عزیز امروز می خوایم با موادی که به صورت ' 'رایعج در درمانهای دندانپزشکی استفاده میشه آشنا بشیم خب اولین موادها ' 'اسید اچ ها هستن استیج ها در ترم های با کامپوزیت استفاده میشن و به ' 'صورت روتین معمولا اسید فسفری که سی و هفت درصد رو استفاده می کنند' ``` result without lm: ``` 'سالام موروز وخیر خدمتو دوستان ازیز امروز می خوایم با موادی که به صورت رایعج ' 'در درمان های دندانپزشکی استفاده میشه آشنا بشیم خب اولین مواد ها قسید اج ها ' 'استن اسیلش ها در ترمی هایی با کامپوزیت استفاده میشن و به صورت روتین معمولا ' 'اسید فسفری که سی و هفت درصد رو استفاده می کنن' ``` i also tried to apply fixes mentioned in this [issue](https://github.com/huggingface/transformers/issues/15392) but that also doesn't fix the problem notebook to reproduce model : [https://colab.research.google.com/drive/1Y_ESjlLd3cbhpmSvpLuR-rWiQTXdd3kq?usp=share_link](https://colab.research.google.com/drive/1Y_ESjlLd3cbhpmSvpLuR-rWiQTXdd3kq?usp=share_link) i also wanted to push processor to hub but i got some errors so i uploaded files into google drive: [https://drive.google.com/drive/folders/1SyEIsd1ZQPBPdDrD97XhdDJs3xhMFNrJ?usp=share_link](https://drive.google.com/drive/folders/1SyEIsd1ZQPBPdDrD97XhdDJs3xhMFNrJ?usp=share_link) ### Expected behavior at least i expect that lm would fix unigram errors
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22117/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22116
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22116/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22116/comments
https://api.github.com/repos/huggingface/transformers/issues/22116/events
https://github.com/huggingface/transformers/pull/22116
1,620,421,937
PR_kwDOCUB6oc5L1sdK
22,116
Add pr_checks.mdx Italian translation (#17459)
{ "login": "alexcalabrese", "id": 58480609, "node_id": "MDQ6VXNlcjU4NDgwNjA5", "avatar_url": "https://avatars.githubusercontent.com/u/58480609?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexcalabrese", "html_url": "https://github.com/alexcalabrese", "followers_url": "https://api.github.com/users/alexcalabrese/followers", "following_url": "https://api.github.com/users/alexcalabrese/following{/other_user}", "gists_url": "https://api.github.com/users/alexcalabrese/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexcalabrese/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexcalabrese/subscriptions", "organizations_url": "https://api.github.com/users/alexcalabrese/orgs", "repos_url": "https://api.github.com/users/alexcalabrese/repos", "events_url": "https://api.github.com/users/alexcalabrese/events{/privacy}", "received_events_url": "https://api.github.com/users/alexcalabrese/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@omarespejel @sgugger for me is ok.\nThanks @alexcalabrese " ]
1,678
1,678
1,678
CONTRIBUTOR
null
## What does this PR do? Italian translation of doc related to the checks on a PR of :hugs: Transformers. * updated _toctree.yml * added pr_checks.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) @omarespejel @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22116/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22116", "html_url": "https://github.com/huggingface/transformers/pull/22116", "diff_url": "https://github.com/huggingface/transformers/pull/22116.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22116.patch", "merged_at": 1678713875000 }
https://api.github.com/repos/huggingface/transformers/issues/22115
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22115/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22115/comments
https://api.github.com/repos/huggingface/transformers/issues/22115/events
https://github.com/huggingface/transformers/pull/22115
1,620,409,371
PR_kwDOCUB6oc5L1qBJ
22,115
Added big_models.mdx italian translation #17600
{ "login": "nickprock", "id": 11136646, "node_id": "MDQ6VXNlcjExMTM2NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11136646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickprock", "html_url": "https://github.com/nickprock", "followers_url": "https://api.github.com/users/nickprock/followers", "following_url": "https://api.github.com/users/nickprock/following{/other_user}", "gists_url": "https://api.github.com/users/nickprock/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickprock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickprock/subscriptions", "organizations_url": "https://api.github.com/users/nickprock/orgs", "repos_url": "https://api.github.com/users/nickprock/repos", "events_url": "https://api.github.com/users/nickprock/events{/privacy}", "received_events_url": "https://api.github.com/users/nickprock/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
## What does this PR do? Italian translation of doc related to the preprocessing of :hugs: Transformers. * updated _toctree.yml * added big_models.mdx ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). See issue: [#17459](https://github.com/huggingface/transformers/issues/17459) @omarespejel @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22115/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22115", "html_url": "https://github.com/huggingface/transformers/pull/22115", "diff_url": "https://github.com/huggingface/transformers/pull/22115.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22115.patch", "merged_at": 1678716124000 }
https://api.github.com/repos/huggingface/transformers/issues/22114
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22114/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22114/comments
https://api.github.com/repos/huggingface/transformers/issues/22114/events
https://github.com/huggingface/transformers/issues/22114
1,620,400,990
I_kwDOCUB6oc5glVte
22,114
FastTokenizer for LLaMa
{ "login": "theblackcat102", "id": 13172147, "node_id": "MDQ6VXNlcjEzMTcyMTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/13172147?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theblackcat102", "html_url": "https://github.com/theblackcat102", "followers_url": "https://api.github.com/users/theblackcat102/followers", "following_url": "https://api.github.com/users/theblackcat102/following{/other_user}", "gists_url": "https://api.github.com/users/theblackcat102/gists{/gist_id}", "starred_url": "https://api.github.com/users/theblackcat102/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theblackcat102/subscriptions", "organizations_url": "https://api.github.com/users/theblackcat102/orgs", "repos_url": "https://api.github.com/users/theblackcat102/repos", "events_url": "https://api.github.com/users/theblackcat102/events{/privacy}", "received_events_url": "https://api.github.com/users/theblackcat102/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Let's maybe wait for the LLaMa PR to be merged first?", "it is fix on tokenizers\n\nhttps://github.com/huggingface/tokenizers/pull/1183", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,683
1,683
CONTRIBUTOR
null
### Feature request FastTokenizer support for the LLaMa sentencepiece tokenizer. ### Motivation The offset_mapping is only available in FastTokenizer, so it would be useful if there were support for this. ### Your contribution I have tried using an existing sentencepiece-based model as a replacement. However, the HF conversion code means we are missing byte fallback support ``` The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers ``` Which means out-of-vocabulary tokens are simply mapped to <unk> instead of using the byte mapping inside the vocab.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22114/timeline
completed
null
null
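The byte fallback behavior missing from the converted fast tokenizer above can be illustrated with a toy sketch. This is a simplified per-character model, not the real SentencePiece algorithm (which falls back at the piece level): the idea is that a piece outside the vocabulary is emitted as one `<0xNN>` token per UTF-8 byte instead of collapsing to `<unk>`, so no information is lost.

```python
def byte_fallback_tokens(text: str, vocab: set) -> list:
    """Toy sketch of SentencePiece-style byte fallback: any character
    not in the vocab is replaced by one token per UTF-8 byte,
    written as <0xNN>, rather than a single <unk>."""
    out = []
    for ch in text:
        if ch in vocab:
            out.append(ch)
        else:
            out.extend(f"<0x{b:02X}>" for b in ch.encode("utf-8"))
    return out

# The euro sign is out of vocab, so it becomes its three UTF-8 bytes:
print(byte_fallback_tokens("a€", {"a"}))
```

A fast tokenizer without this fallback would map `€` to `<unk>`, which is exactly the information loss described in the issue; support was later added on the tokenizers side (see the linked tokenizers PR #1183 in the comments).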
https://api.github.com/repos/huggingface/transformers/issues/22113
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22113/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22113/comments
https://api.github.com/repos/huggingface/transformers/issues/22113/events
https://github.com/huggingface/transformers/issues/22113
1,620,362,527
I_kwDOCUB6oc5glMUf
22,113
Make GPT2ForSequenceClassification computationally efficient
{ "login": "Kyeongpil", "id": 6302455, "node_id": "MDQ6VXNlcjYzMDI0NTU=", "avatar_url": "https://avatars.githubusercontent.com/u/6302455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kyeongpil", "html_url": "https://github.com/Kyeongpil", "followers_url": "https://api.github.com/users/Kyeongpil/followers", "following_url": "https://api.github.com/users/Kyeongpil/following{/other_user}", "gists_url": "https://api.github.com/users/Kyeongpil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kyeongpil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kyeongpil/subscriptions", "organizations_url": "https://api.github.com/users/Kyeongpil/orgs", "repos_url": "https://api.github.com/users/Kyeongpil/repos", "events_url": "https://api.github.com/users/Kyeongpil/events{/privacy}", "received_events_url": "https://api.github.com/users/Kyeongpil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,681
1,681
CONTRIBUTOR
null
### Feature request Compute the logits using only the hidden state for the last position of the input sequence. ### Motivation The current GPT2ForSequenceClassification module computes logits using all hidden states, whose computational cost is proportional to the length of the input sequence. https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/models/gpt2/modeling_gpt2.py#L1384 If we compute the logits using only the hidden state for the last position of the input sequence, the cost is not proportional to the length. This applies not only to GPT2ForSequenceClassification, but also to other XXXForSequenceClassification modules. ### Your contribution The following are the code changes that I suggest: https://github.com/Kyeongpil/transformers/commit/f97f6e38f444522a55f236b37ca70b4e35096e12
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22113/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22112
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22112/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22112/comments
https://api.github.com/repos/huggingface/transformers/issues/22112/events
https://github.com/huggingface/transformers/issues/22112
1,620,238,456
I_kwDOCUB6oc5gkuB4
22,112
Time spent on engine.step() increased strangely
{ "login": "KaiLv69", "id": 39761308, "node_id": "MDQ6VXNlcjM5NzYxMzA4", "avatar_url": "https://avatars.githubusercontent.com/u/39761308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KaiLv69", "html_url": "https://github.com/KaiLv69", "followers_url": "https://api.github.com/users/KaiLv69/followers", "following_url": "https://api.github.com/users/KaiLv69/following{/other_user}", "gists_url": "https://api.github.com/users/KaiLv69/gists{/gist_id}", "starred_url": "https://api.github.com/users/KaiLv69/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KaiLv69/subscriptions", "organizations_url": "https://api.github.com/users/KaiLv69/orgs", "repos_url": "https://api.github.com/users/KaiLv69/repos", "events_url": "https://api.github.com/users/KaiLv69/events{/privacy}", "received_events_url": "https://api.github.com/users/KaiLv69/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @stas00 ", "- The first few steps lead to an OVERFLOW so optimizer didn't run and thus was fast. it then adjusted the scaling factor each step until it reached one that didn't lead to an overflow and thus it did the first optimizer step.\r\n- then you can see from the warning that your setup is misconfigured - you're trying to load too much into your GPU memory and all the optimizations are disabled since there is no gpu memory and it has to do a lot more work to be optimal. As you're already at bs=1 and `gradient_checkpointing=true`, the next thing to do is to either add more gpus or use gpus with more memory (I have no idea which gpus you're using) or enable `offload_param` (but not sure if you have enough cpu memory remain for offloading params):\r\n\r\nYou can follow the guidelines here:\r\nhttps://huggingface.co/docs/transformers/main/main_classes/deepspeed#how-to-choose-which-zero-stage-and-offloads-to-use-for-best-performance\r\n\r\nbut most likely the model you picked is too large for the hardware setup you have chosen.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,685
1,685
NONE
null
### System Info I'm using Deepspeed's zero3 with optimizer offload. Time spent on step() increased from ~100ms to 10,000+ ms after a few steps. The CPU memory in occupied ~350G (500G in total). - `transformers` version: 4.26.1 - Platform: Linux-4.15.0-189-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: True ### Who can help? @sgugger @stas ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. My code ```python from transformers.deepspeed import HfDeepSpeedConfig from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer from transformers.models.codegen.modeling_codegen import CodeGenMLP import argparse import torch import time, datetime import deepspeed from deepspeed.accelerator import get_accelerator from torch.utils.data import Dataset from transformers.activations import ClippedGELUActivation, LinearActivation from lion_pytorch import Lion from datasets import load_dataset import os, sys from transformers import Trainer, TrainingArguments, HfArgumentParser from transformers.integrations import WandbCallback class MyDataset(Dataset): def __init__(self, data, tknz): super().__init__() self.data = data self.tknz = tknz def __len__(self): return len(self.data) def __getitem__(self, idx): tknz_text = self.tknz( self.data[idx]['text'], max_length=args.seq_len, padding='max_length', truncation=True, ) return { 'input_ids': tknz_text['input_ids'], 'attention_mask': tknz_text['attention_mask'], 'labels': tknz_text['input_ids'] } def collate_fn(batch, 
tknz): tknz_batch = tknz.pad( batch, padding=True, max_length=args.seq_len, pad_to_multiple_of=8, return_tensors='pt' ) return { 'input_ids': tknz_batch['input_ids'], 'attention_mask': tknz_batch['attention_mask'], 'labels': tknz_batch['input_ids'] } def train(): print(f"[{datetime.datetime.today()}] Loading model.") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-mono", use_cache=False) tknz = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono") tknz.pad_token = tknz.eos_token print(f"[{datetime.datetime.today()}] Loading dataset.") dataset = load_dataset("NeelNanda/pile-10k")['train'].select(range(args.data_size)) dataset = MyDataset(dataset, tknz) print(f"[{datetime.datetime.today()}] Initializing DeepSpeed Engine.") trainer = Trainer( model=model, args=training_args[0], data_collator=lambda batch: collate_fn(batch, tknz), train_dataset=dataset, tokenizer=tknz, callbacks=[WandbCallback()], ) print(f"[{datetime.datetime.today()}] Entering training loop.") trainer.train() if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int, default=-1) parser.add_argument('--project', type=str, default="my_project") parser.add_argument('--name', type=str, default="my_exps") parser.add_argument('--data_size', type=int, default=100) parser.add_argument('--seq_len', type=int, default=300) parser.add_argument("--training_args_file", type=str, default="config/training_args.yml") args = parser.parse_args() training_args = HfArgumentParser(TrainingArguments).parse_yaml_file(args.training_args_file) train() ``` 2. My script to run the Python file ```bash port=$(shuf -i25000-30000 -n1) WANDB_MODE=disabled \ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ deepspeed --master_port "$port" train_ds_zero3.py \ --seq_len 100 ``` 3. 
My config files - training_args.yml ```yaml output_dir: ./output do_train: true per_device_train_batch_size: 1 gradient_accumulation_steps: 1 num_train_epochs: 3 log_level: info fp16: true gradient_checkpointing: true remove_unused_columns: false #deepspeed: ./config/ds_zero3.json report_to: wandb run_name: ds_zero3_opt_offload_0311 deepspeed: config/ds_zero3_opt_offload.json ``` - ds_zero3_opt_offload.json ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": true } ``` 4. Time spent on step <img width="1440" alt="image" src="https://user-images.githubusercontent.com/39761308/224525584-f91586c5-4e04-4601-bdd6-569d35405aa0.png"> ### Expected behavior The CPU memory is occupied ~350G and I have 500G in total, so the occupation is not that high. I'm confused why the step() get so slow after that certain step. I hope the step() will be as quick as the first few steps (<100ms). Thank you for your kindly help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22112/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22111
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22111/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22111/comments
https://api.github.com/repos/huggingface/transformers/issues/22111/events
https://github.com/huggingface/transformers/issues/22111
1,620,236,723
I_kwDOCUB6oc5gktmz
22,111
error when using from indobenchmark
{ "login": "fendiirfan", "id": 56022249, "node_id": "MDQ6VXNlcjU2MDIyMjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/56022249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fendiirfan", "html_url": "https://github.com/fendiirfan", "followers_url": "https://api.github.com/users/fendiirfan/followers", "following_url": "https://api.github.com/users/fendiirfan/following{/other_user}", "gists_url": "https://api.github.com/users/fendiirfan/gists{/gist_id}", "starred_url": "https://api.github.com/users/fendiirfan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fendiirfan/subscriptions", "organizations_url": "https://api.github.com/users/fendiirfan/orgs", "repos_url": "https://api.github.com/users/fendiirfan/repos", "events_url": "https://api.github.com/users/fendiirfan/events{/privacy}", "received_events_url": "https://api.github.com/users/fendiirfan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! The `tokenizer_class` that was set in the configuration.json is wrong as the `IndoNLGTokenizer` does not exist in transformers. You should try to ask the other of the model on the community tab how to use it, or try to use: \r\n```python \r\nfrom transformers import MBartTokenizer\r\ntokenizer = MBartTokenizer.from_pretrained(\"indobenchmark/indobart-v2\")\r\n```\r\nas it appears that the model is an MBartModel.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobart-v2") ``` the error `ValueError: Tokenizer class IndoNLGTokenizer does not exist or is not currently imported.` any help guys? ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobart-v2") ``` ### Expected behavior Can used
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22111/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22110
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22110/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22110/comments
https://api.github.com/repos/huggingface/transformers/issues/22110/events
https://github.com/huggingface/transformers/issues/22110
1,620,093,281
I_kwDOCUB6oc5gkKlh
22,110
Overestimated number of training epochs in Trainer
{ "login": "fenchri", "id": 15857706, "node_id": "MDQ6VXNlcjE1ODU3NzA2", "avatar_url": "https://avatars.githubusercontent.com/u/15857706?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fenchri", "html_url": "https://github.com/fenchri", "followers_url": "https://api.github.com/users/fenchri/followers", "following_url": "https://api.github.com/users/fenchri/following{/other_user}", "gists_url": "https://api.github.com/users/fenchri/gists{/gist_id}", "starred_url": "https://api.github.com/users/fenchri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fenchri/subscriptions", "organizations_url": "https://api.github.com/users/fenchri/orgs", "repos_url": "https://api.github.com/users/fenchri/repos", "events_url": "https://api.github.com/users/fenchri/events{/privacy}", "received_events_url": "https://api.github.com/users/fenchri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the report! Would you like to suggest a PR with your fix?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
### System Info - `transformers` version: 4.26.0 - Platform: Linux-5.4.0-136-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Under certain circumstances, given `max_steps` and dataloader size non-divisible by `gradient_accumulation_steps`, the number of epochs printed during model training can be overestimated, even if `dataloader_drop_last` is set to False. On an example run with the following inputs, Trainer calculated 100 training epochs instead of 87. ```bash python run_clm.py \ --model_name_or_path gpt2 \ --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 1 \ --do_train \ --output_dir /tmp/test-clm \ --max_train_samples=148 \ --gradient_accumulation_steps=32 \ --overwrite_output_dir \ --max_steps=200 \ --logging_steps=10 \ --dataloader_drop_last=False ``` ``` [INFO|trainer.py:1650] 2023-03-11 15:21:27,133 >> ***** Running training ***** [INFO|trainer.py:1651] 2023-03-11 15:21:27,133 >> Num examples = 148 [INFO|trainer.py:1652] 2023-03-11 15:21:27,133 >> Num Epochs = 100 [INFO|trainer.py:1653] 2023-03-11 15:21:27,133 >> Instantaneous batch size per device = 2 [INFO|trainer.py:1654] 2023-03-11 15:21:27,133 >> Total train batch size (w. 
parallel, distributed & accumulation) = 64 [INFO|trainer.py:1655] 2023-03-11 15:21:27,133 >> Gradient Accumulation steps = 32 [INFO|trainer.py:1656] 2023-03-11 15:21:27,133 >> Total optimization steps = 200 [INFO|trainer.py:1657] 2023-03-11 15:21:27,133 >> Number of trainable parameters = 124439808 ``` I believe this happens due to the computation [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1654) and consequently [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1659). ### Expected behavior Expected the estimated number of epochs to be closer to the actual number of epochs. Perhaps in that case `num_train_epochs` can be computed as: ``` update_steps_per_epoch = len_dataloader / args.gradient_accumulation_steps num_train_epochs = math.ceil(args.max_steps / update_steps_per_epoch) ``` Thank you in advance!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22110/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22109
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22109/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22109/comments
https://api.github.com/repos/huggingface/transformers/issues/22109/events
https://github.com/huggingface/transformers/pull/22109
1,620,076,570
PR_kwDOCUB6oc5L0sUI
22,109
Add tensor flow whisper model for audio classification
{ "login": "adit299", "id": 43497982, "node_id": "MDQ6VXNlcjQzNDk3OTgy", "avatar_url": "https://avatars.githubusercontent.com/u/43497982?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adit299", "html_url": "https://github.com/adit299", "followers_url": "https://api.github.com/users/adit299/followers", "following_url": "https://api.github.com/users/adit299/following{/other_user}", "gists_url": "https://api.github.com/users/adit299/gists{/gist_id}", "starred_url": "https://api.github.com/users/adit299/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adit299/subscriptions", "organizations_url": "https://api.github.com/users/adit299/orgs", "repos_url": "https://api.github.com/users/adit299/repos", "events_url": "https://api.github.com/users/adit299/events{/privacy}", "received_events_url": "https://api.github.com/users/adit299/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22109). All of your documentation changes will be reflected on that endpoint.", "I just had a few questions on how to proceed with adding the TensorFlow Whisper model, just to make sure I'm on the right track. \r\n\r\n(1) Just so that I am clear on what the task is asking for, I need to recreate what is being done in PR #21754, except in TensorFlow. So, more specifically recreate the WhisperForAudioClassification class in TensorFlow, within the modeling_tf_whisper.py file.\r\n\r\n(2) I see that there are a lot of additional lines of code within PR #21754 in various files that seem to be \"registering\" that the Whisper model now supports audio classification. Would I have to add any lines of code similar to this within my PR? Is there any documentation I can take a look at to learn more about this? (or anything that would help me understand more about this task in general)\r\n\r\n@sanchit-gandhi\r\n", "Hi @adit299 Thanks for opening this PR - excited to have this implemented in TF! \r\n\r\nRegarding your questions:\r\n1) Yes, exactly.\r\n2) Yes, the other (equivalent TF) additions will also need to be added. Some of the additions in #21754 are automatically generated e.g. those in `dummy_pt_objects.py`. There's an in-depth guide to adding TensorFlow models [here](https://huggingface.co/docs/transformers/v4.27.1/en/add_tensorflow_model) which should cover the process. Let us know if there's anything missing or unclear. ", "Super cool @adit299! Feel free to ping us if you have any more questions / queries! More than happy to help with the integration here!", "\r\nHello,\r\n\r\nJust wanted to check in and provide an update. I have finished adding the TFWhisperForAudioClassification class within the modeling_tf_whisper.py file. 
One question regarding this:\r\n\r\n(1) Within the modeling_tf_auto.py file I don't see any OrderedDict named TF_MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES (or any OrderedDict that is equivalent to the MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES present within the modeling_auto.py file). I was wondering where the TFWhisperForAudioClassification class should go within the modeling_tf_auto.py file. \r\n\r\nI will continue work on developing the model tester, and will post any issues I run into here. \r\n\r\n@sanchit-gandhi ", "@adit299 - that's great news on the update! \r\n\r\nFor the auto mapping, if the tensorflow equivalent `TF_MODEL_FOR_XXX` doesn't exist, then it can be added to `modeling_tf_auto.py`. This means this is the first audio classification model to be added for TensorFlow 🔥🔥🔥", "Recently, we merged TensorFlow Wav2Vec2 For Sequence Classification: https://github.com/huggingface/transformers/pull/22073\r\n\r\nYou could propagate the modelling code changes form this PR onto Whisper as a quick way of getting this working @adit299 (as we do for the PyTorch code)", "By propagate, do you mean just looking at that PR and using the code written for that task as help for this current task? If so, I have already been doing that. If you are referring to some other procedure please do let me know about this as I am not aware. That would certainly help! \r\n\r\nQuestions I had:\r\n\r\n(1) I noticed that within the Pytorch implementation of the whisper tests, it refers to a class `GenerationTesterMixin` which does not seem to have a similarly named Tensorflow equivalent. Would I have to add this class? I am also confused about what these classes are doing (ex. what is TFModelTesterMixin doing, etc.), so any clarification you can provide is appreciated! 
\r\n\r\nhttps://github.com/huggingface/transformers/blob/d204aea7314217fa8b47e7418ead0d9973f50ccd/tests/models/whisper/test_modeling_tf_whisper.py#L926\r\n\r\n(2) I was having trouble with translating the test_encoder_outputs method in TensorFlow. Mainly these lines:\r\n\r\nhttps://github.com/huggingface/transformers/blob/d204aea7314217fa8b47e7418ead0d9973f50ccd/tests/models/whisper/test_modeling_tf_whisper.py#L963-L966\r\n\r\nAgain, a bit confused about what `model.to(torch_device)` is doing. I will look into this a bit more, but again any clarifications about what this method is doing would help.\r\n\r\nThanks again for the speedy responses!\r\n@sanchit-gandhi @amyeroberts ", "@adit299 By propagate, we mean apply the equivalent changes from the Wav2Vec2 PR to this PR - it won't be a direct copy-paste, but there will be large proportions in common. It's sounds like this is what you're doing, which is great :) \r\n\r\nWith respect to your questions:\r\n\r\n1) GenerationTesterMixin\r\n\r\nYes, I don't think this class exists yet and you wouldn't have to add this class as part of this PR. Is there anything that should be added for the TF model tests @gante ?\r\n\r\nIn terms of what these classes are doing, the mixin classes group together related functionality e.g. common tests that should be added to all models. For example, [TFModelTesterMixin](https://github.com/huggingface/transformers/blob/57ffd8ab4c833e26b2288769f6031f94870a102c/tests/test_modeling_tf_common.py#L164) contains tests for the TensorFlow models. This way we can create other classes using a composition of mixins. \r\n\r\n2) .to and .eval methods\r\n`model.to(...)` is a pytorch specific method. See docs here: https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=#torch.nn.Module.to. It's moving the model onto the specified torch device. 
`model.eval()` is also a PyTorch method: https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=#torch.nn.Module.to.", "@amyeroberts there is no generation-specific test mixin for TF. `TFModelTesterMixin` has some basic generate checks :)", "Looks cool already @adit299! Let us know if you need a hand with the integration or when you'd like a PR review 🤗", "Thanks for the follow-up @sanchit-gandhi. Currently, I am debugging some of the test failures that I am getting. I also see that 7 more tests within TFModelTesterMixin are failing, but I thought I would resolve the tests failing within the TFWhisperEncoderModelTest class first before moving on to that.\r\n\r\nThis is the error occuring when test_encoder_outputs is run:\r\n\r\n```\r\nself = <tests.models.whisper.test_modeling_tf_whisper.TFWhisperEncoderModelTest testMethod=test_encoder_outputs>\r\n\r\n def test_encoder_outputs(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n\r\n for model_class in self.all_model_classes:\r\n model = model_class(config)\r\n\r\n inputs = copy.deepcopy(self._prepare_for_class(inputs_dict, model_class))\r\n\r\n> with tf.stop_gradient:\r\nE AttributeError: __enter__\r\n\r\ntests/models/whisper/test_modeling_tf_whisper.py:975: AttributeError\r\n```\r\n\r\nI believe this error is occuring since TensorFlow's stop_gradient implementation has no __enter__ method defined (https://stackoverflow.com/questions/51427729/python-error-attributeerror-enter). I figured this is the closest equivalent to torch.no_grad, used in the PyTorch implementation which is why I used it. If you could let me know a little bit more about what this method is testing and how it works, I think I will be able to solve the error.\r\n\r\nOn I sidenote, I also see the methods freeze_encoder, get_input_embeddings, and set_input_embeddings within the Pytorch implementation. Would I have to implement these as well? 
What are these methods doing?\r\n@amyeroberts ", "@adit299 Yes, these methods should also be implemented for the TF model. You can look at similar TF implementations to see how this was done e.g. [here for freezing a module](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#LL1463C4-L1463C4). ", "I would say probably we don't need freezing since this is only relevant for fine-tuning, and we don't have a seq2seq ASR fine-tuning script in TF (related https://github.com/huggingface/transformers/pull/22109#discussion_r1194040076)", "Hey @adit299 - feel free to comment here when this PR is ready for review and we can take a look! Seems to be close to completion", "Hey @sanchit-gandhi, apologies for the delay! Yes, this PR is ready for review. I haven't had much luck in getting some tests to pass however. I appreciate any help you guys can provide by taking a look. ", "@adit299 Unfortunately, diving into people's PRs to debug isn't something we can do as it's just not a scalable solution with a repo of this size. If you need help from us, then please share a detailed description of the issue, what you've tried already and ideally highlighting any relevant pieces of code. ", "Understandable, @amyeroberts . There are 5 tests failing right now. 
Here is all the information requested (to the best of my knowledge):\r\n\r\n**FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_compile_tf_model**\r\n\r\nError - \r\n```\r\nE TypeError: Exception encountered when calling layer 'tf_whisper_for_audio_classification_4' (type TFWhisperForAudioClassification).\r\nE\r\nE call() got an unexpected keyword argument 'decoder_input_ids'\r\nE\r\nE Call arguments received by layer 'tf_whisper_for_audio_classification_4' (type TFWhisperForAudioClassification):\r\nE • input_features={'input_features': 'tf.Tensor(shape=(2, 80, 59), dtype=float32)', 'decoder_input_ids': 'tf.Tensor(shape=(1, 2), dtype=int32)'}\r\nE • head_mask=None\r\nE • encoder_outputs=None\r\nE • labels=None\r\nE • output_attentions=None\r\nE • output_hidden_states=None\r\nE • return_dict=None\r\n\r\n../../../src/transformers/modeling_tf_utils.py:434: TypeError\r\n```\r\n\r\nWhat I tried - \r\n\r\nI suspected it had something to do with:\r\n\r\nhttps://github.com/adit299/transformers/blob/3d3c7d4213e08d69254edb9c04ac28b3dfbd40f4/tests/test_modeling_tf_common.py#L739C4-L819\r\n\r\n But that doesn't seem to be the case. Maybe the Whisper decoder is being mistakenly invoked? I am just not sure.\r\n\r\n**FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_hidden_states_output - AssertionError: Lists differ: [30, 16] != [60, 16]**\r\n\r\nError - \r\n\r\n```\r\n../../test_modeling_tf_common.py:1002: in check_hidden_states_output\r\n self.assertListEqual(\r\nE AssertionError: Lists differ: [30, 16] != [60, 16]\r\nE\r\nE First differing element 0:\r\nE 30\r\nE 60\r\nE\r\nE - [30, 16]\r\nE ? ^\r\nE\r\nE + [60, 16]\r\nE ? 
^\r\n```\r\nThe assertion failing is:\r\n```\r\n self.assertListEqual(\r\n list(hidden_states[0].shape[-2:]),\r\n [self.model_tester.seq_length, self.model_tester.hidden_size],\r\n )\r\n```\r\n\r\nWhat I tried - Not sure about this one.\r\n\r\n**FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_pt_tf_model_equivalence - AttributeError: tf_whisper_encoder_17.conv1.weight not found in PyTorch model**\r\n\r\nError - \r\n\r\n```\r\nE AttributeError: tf_whisper_encoder_17.conv1.weight not found in PyTorch model\r\n\r\n../../../src/transformers/modeling_tf_pytorch_utils.py:322: AttributeError\r\n```\r\n\r\nWhat I tried - Not sure about this one as well\r\n\r\n**FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_resize_token_embeddings - NotImplementedError**\r\n\r\nError - \r\n`../../../src/transformers/modeling_tf_utils.py:1343: NotImplementedError`\r\n\r\nWhat I tried - I think this one is out of my control\r\n\r\n**FAILED test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_save_load - TypeError: Exception encountered when calling layer 'tf_whisper_for_audio_classification_20' (type TFWhisperForAudioClassification**\r\n\r\nWhat I tried - connected to the first error, solving that should solve this. \r\n\r\n\r\n\r\nPlease do let me know if any other clarification is needed! Apologies for the long post!\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @adit299, thanks for giving more details about debugging the tests and apologies for the delay in my response. 
\r\n\r\nI suggest looking through the [artefacts](https://app.circleci.com/pipelines/github/huggingface/transformers/66686/workflows/4a5167ab-3f48-4107-8037-046b9e22c37f/jobs/830952/artifacts) from the CI run, specifically [failure_long.txt](https://circleci-tasks-prod.s3.us-east-1.amazonaws.com/forks/storage/artifacts/d5b57382-7f67-4274-9623-7f238ef4fb6f/584530217/bbf68a84-eca3-4a58-b435-ccf6fe76ee5e/0/~/transformers/reports/tests_tf/failures_long.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQVFQINEOCHM3RUWE%2F20230714%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230714T175646Z&X-Amz-Expires=60&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEJL%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMSJHMEUCIGcglqOGo%2Fe%2F5mwCVxVr%2F8nn4rzUcn4laRIRVkurIB6aAiEA44PuRr%2FWN1FN9Gf0apo5P4mNPaaCINszHTJTGYQS4BIqqwIIGxADGgwwNDU0NjY4MDY1NTYiDP76zxcgkVLBzjO82CqIAvyVau8QZFaLVnQdP%2FeJu9IBQaqvdkg4zBQZJOv0ZMDaUJUrnMWxpJ20n%2BEBFrs7BrKmSZgJ6imY6gcr%2FSJhEtOpVRvgqbz%2FwoRyQXlD9Ob2D0hPuOilJdab9t7un3XGVwqX8UXs2ui4IM0SNWGsVNhVq9%2B5zCLZQqKiJKeeMfBZPvzzL4W2Z5d8Gf%2FEZIQYCTF9GPD9W7YYJt1sL08GUOITLBUdk7Noplt8kMyqbKY%2F5JdsSG6%2B2xdYuLt%2BLyfCcKfOSGwE0FvB4s2opeg7FRcX%2FLzniNM8zWqVxCwDwnxfeXKwT0mzLAu8WVSaddbp%2FwWbCvEeSuWAGY7csHgkaeC3uD7gA1mY0TCwlsalBjqdAcM0csGm7aIw7slQZJqakKelu4UgawGQbET3HBBG%2FBCInREsvUkcEgRMgwrX%2FQYHPumFhgkeDaGhD7qs4IRZcKppDgPlB8AJxi7LxyriYwKdA0XgVkWgpo3ZMwspHQ%2Fqx%2F159F7cdMOmj3xxJY9JhRV0RdcyMBb%2Bj57eq9fYRL3OkCNoFtWTTnOSs2xB2%2BENfZMIVO%2BEQf1xXe6%2BaXM%3D&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=11d02f07765c93cc347a5aa2ee23a984701b3f8ad5dd446f7a825450947f2c78) as they will give you a more detailed error message an trackback to help figure out the issues. \r\n\r\n**test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_compile_tf_model**\r\nI think your suspicions are correct. You'll need to add a new branch in the if/else logic to create the correct inputs for this model. 
\r\n\r\n**test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_hidden_states_output**\r\nIn this case it seems the sequence length of the hidden size doesn't match what's expected. I would create a model using the test config and check its architecture and the hidden states outputs when passed a dummy input. \r\n\r\n**test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_pt_tf_model_equivalence** \r\nIt looks like a weight is in the TF model and not in the PT model. I'd check the params in each model - looking at `tf_model.trainable_parameters()` and `pt_model.state_dict()` to see if you can identify if this is a case of a weight not being loaded, or a name not properly matched. \r\n\r\nIf you create the TF whisper model with pytorch weights, do you get any warnings about weights being randomly initialized? \r\n\r\n**test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_resize_token_embeddings - NotImplementedError**\r\n\r\nThis is raised because the model doesn't have a `get_input_embeddings` method implemented\r\n\r\n**test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_save_load**\r\n\r\nFrom the CI artefacts, it looks like this is failing because of `decoder_input_ids` being in the input\r\n\r\n\r\n", "Hello,\r\n\r\nApologies for the delay. I am attempting to instantiate an instance of the TFWhisperForAudioClassification model to debug some of the issues I'm having. 
So, I try to run this:\r\n\r\n`>>> from transformers import TFWhisperForAudioClassification `\r\n\r\nI end up getting this error:\r\n\r\n`RecursionError: maximum recursion depth exceeded while calling a Python object`\r\n\r\nWhich stems from these lines of code:\r\n\r\nhttps://github.com/huggingface/transformers/blob/080a97119c0dabfd0fb5c3e26a872ad2958e4f77/src/transformers/models/auto/auto_factory.py#L701-L707\r\n\r\nWhen I run a debugger, the problematic statement is:\r\n\r\nhttps://github.com/huggingface/transformers/blob/080a97119c0dabfd0fb5c3e26a872ad2958e4f77/src/transformers/models/auto/auto_factory.py#L705\r\n\r\nJust executing `self._model_mapping.keys()` on its own results in the RecursionError. \r\n\r\nI have been trying to see what is causing this, but I'm at a loss. Is this why you suggest creating the model using a test config? Could you show how to do that if it is relevant to avoiding this error? I contemplated increasing the Recursion Depth on my machine (it's currently at 1000), but I'm hesitant to think that would solve it. 
\r\n\r\nThanks again for your patience, I realize I'm quite the n00b :sweat_smile:\r\n\r\n@amyeroberts @sanchit-gandhi ", "Hello,\r\n\r\nI am currently attempting to resolve the error:\r\n\r\nError - \r\n```\r\nE TypeError: Exception encountered when calling layer 'tf_whisper_for_audio_classification_4' (type TFWhisperForAudioClassification).\r\nE\r\nE call() got an unexpected keyword argument 'decoder_input_ids'\r\nE\r\nE Call arguments received by layer 'tf_whisper_for_audio_classification_4' (type TFWhisperForAudioClassification):\r\nE • input_features={'input_features': 'tf.Tensor(shape=(2, 80, 59), dtype=float32)', 'decoder_input_ids': 'tf.Tensor(shape=(1, 2), dtype=int32)'}\r\nE • head_mask=None\r\nE • encoder_outputs=None\r\nE • labels=None\r\nE • output_attentions=None\r\nE • output_hidden_states=None\r\nE • return_dict=None\r\n\r\n../../../src/transformers/modeling_tf_utils.py:434: TypeError\r\n```\r\n\r\nSince this error is the root cause of several of the tests failing. I think the issue is that `TFWhisperForAudioClassification` inherits from the class `TFWhisperPreTrainedModel`, which has the following methods:\r\n\r\nhttps://github.com/huggingface/transformers/blob/50573c648ae953dcc1b94d663651f07fb02268f4/src/transformers/models/whisper/modeling_tf_whisper.py#L464-L498\r\n\r\nI believe the `dummy_inputs` method is introducing `decoder_input_ids` into the input. 
By commenting out a couple of lines: \r\n\r\n```\r\n@property\r\n def dummy_inputs(self) -> Dict[str, tf.Tensor]:\r\n \"\"\"\r\n Dummy inputs to build the network.\r\n\r\n Returns:\r\n `Dict[str, tf.Tensor]`: The dummy inputs.\r\n \"\"\"\r\n return {\r\n self.main_input_name: tf.random.uniform(\r\n [1, self.config.num_mel_bins, self.config.max_source_positions * 2 - 1], dtype=tf.float32\r\n ),\r\n # \"decoder_input_ids\": tf.constant([[1, 3]], dtype=tf.int32),\r\n }\r\n\r\n @property\r\n def input_signature(self):\r\n return {\r\n \"input_features\": tf.TensorSpec((None, self.config.num_mel_bins, None), tf.float32, name=\"input_features\"),\r\n # \"decoder_input_ids\": tf.TensorSpec((None, None), tf.int32, name=\"decoder_input_ids\"),\r\n \"decoder_attention_mask\": tf.TensorSpec((None, None), tf.int32, name=\"decoder_attention_mask\"),\r\n }\r\n```\r\n\r\nThe number of tests failing reduces to 4. Although, obviously, this introduces new errors (I have attached the new errors at the bottom for reference). The pytorch equivalent to this method does not contain the `dummy_inputs` and `input_signature` method :\r\n\r\nhttps://github.com/huggingface/transformers/blob/50573c648ae953dcc1b94d663651f07fb02268f4/src/transformers/models/whisper/modeling_whisper.py#L654-L682 \r\n\r\nMy questions are:\r\n\r\n(1) Should I attempt to change the TensorFlow `PreTrainedMethod` to be similar to the Pytorch implementation?\r\n\r\nor\r\n\r\n(2) Is there some better way to proceed? \r\n\r\nOnce this is resolved, I am very close to finishing with this pull request. Thanks again for your patience! 
\r\n@amyeroberts @sanchit-gandhi \r\n__________________________________________________________________________________________________________________________\r\nNew Errors:\r\n```\r\nFAILED tests/models/whisper/test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_resize_token_embeddings - ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.\r\nFAILED tests/models/whisper/test_modeling_tf_whisper.py::TFWhisperEncoderModelTest::test_save_load - AssertionError: 5.524128 not less than or equal to 1e-05\r\n```", "@adit299 `dummy_inputs` and `input_signature` are methods unique to the tensorflow models and aren't needed in the pytorch implementation. \r\n\r\n`TFWhisperForAudioClassification` should implement its own `dummy_inputs` and `input_signature` which override the methods it inherits from `TFWhisperPreTrainedModel`. \r\n\r\nI'm going to be away mid-September to mid-October. If you have any other tensorflow specific questions, or questions about the differences between the TF and PT models, please ping @Rocketknight1 in my absence. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,698
1,698
CONTRIBUTOR
null
# What does this PR do? Adding support for audio classification within TensorFlow whisper model <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #21777 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
@sanchit-gandhi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22109/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22109/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22109", "html_url": "https://github.com/huggingface/transformers/pull/22109", "diff_url": "https://github.com/huggingface/transformers/pull/22109.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22109.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22108
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22108/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22108/comments
https://api.github.com/repos/huggingface/transformers/issues/22108/events
https://github.com/huggingface/transformers/pull/22108
1,620,071,528
PR_kwDOCUB6oc5L0rXx
22,108
Trainer: let generate pick its inputs
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
MEMBER
null
# What does this PR do? Trainer's `predict_with_generate` seems to have been designed for an older version of `.generate()`, where manual selection of the inputs was needed. The current version of `.generate()` can do it on its own. This is particularly relevant for multimodal models, which can take more than one modality as input. As such, this PR removes the `.generate()` input selection logic from Trainer. This PR is a requirement for Amazon's [MM-CoT](https://github.com/amazon-science/mm-cot).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22108/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22108", "html_url": "https://github.com/huggingface/transformers/pull/22108", "diff_url": "https://github.com/huggingface/transformers/pull/22108.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22108.patch", "merged_at": 1678734026000 }
https://api.github.com/repos/huggingface/transformers/issues/22107
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22107/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22107/comments
https://api.github.com/repos/huggingface/transformers/issues/22107/events
https://github.com/huggingface/transformers/issues/22107
1,620,053,629
I_kwDOCUB6oc5gkA59
22,107
CLIPTokenizer problems in from pretrained on version 4.25.1
{ "login": "TheImunityGamer", "id": 24777043, "node_id": "MDQ6VXNlcjI0Nzc3MDQz", "avatar_url": "https://avatars.githubusercontent.com/u/24777043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheImunityGamer", "html_url": "https://github.com/TheImunityGamer", "followers_url": "https://api.github.com/users/TheImunityGamer/followers", "following_url": "https://api.github.com/users/TheImunityGamer/following{/other_user}", "gists_url": "https://api.github.com/users/TheImunityGamer/gists{/gist_id}", "starred_url": "https://api.github.com/users/TheImunityGamer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheImunityGamer/subscriptions", "organizations_url": "https://api.github.com/users/TheImunityGamer/orgs", "repos_url": "https://api.github.com/users/TheImunityGamer/repos", "events_url": "https://api.github.com/users/TheImunityGamer/events{/privacy}", "received_events_url": "https://api.github.com/users/TheImunityGamer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey!\r\nAs you mention, you are creating an empty directory with the same name. The priority is given to local folders, which explains why you are having this issue. Either delete the empty folder or save a tokenizer inside it 😉 ", "I have tried to delete the empty folder, but it just keeps coming back, even when just running the code I mentioned before that I placed into the console. I also don't know how to put a tokenizer into this folder that can't be deleted. Could you please help with that?", "To put the tokenizer in the folder run: \r\n```python \r\ntokenizer.save_pretrained(\"path_to_folder\")\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I am so sorry that I haven't replied in so long. Work IRL took over my life. Also, transformers 4.19.2 worked well enough because I could just use colab for the khoya scripts.\r\n\r\nThe issue is that the tokenizer won't even be created in the first place. I again tried putting this in the python shell in the webui's virtual env:\r\n```pycon\r\nPython 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] on win32\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n>>> from transformers import CLIPTokenizer\r\n>>> tokenizer = CLIPTokenizer.from_pretrained(\"openai/clip-vit-large-patch14\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"D:\\Stable Diffusion\\SD 3\\stable-diffusion-webui\\venv\\lib\\site-packages\\transformers\\tokenization_utils_base.py\", line 1785, in from_pretrained\r\n raise EnvironmentError(\r\nOSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. 
If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.tokenizer.save_pretrained(\"path_to_folder\")\r\n```\r\ntokenizer doesn't even get created, so I can't use ```tokenizer.save_pretrained(\"path_to_folder\")``` on it. I also tried loading 'openai/clip-vit-base-patch16' and it got the same error. Is there another way to save the transformer snapshot into that folder?", "That very strange, I can't reproduce your error at all. If you open any colab, the script that you share will work. \r\nQuick fixes are probably: `pip install --upgrade transformers`. ", "I set the transformers version to 4.28.1 and still got the issue. I also tried version 4.26.0. The latest version, 4.28.1, did not work. Something that I thought was weird was where the files were downloaded to. With version 4.19.2, the files download fine. There are a lot of weirdly named files in ```.cache\\huggingface\\transformers``` one file with a seemingly random extension and random name, followed by a file with the same name, including the extension, plus ```.json```. They all have really small file sizes. ```.cache\\clip``` has the ```ViT-L-14.pt``` file in it. It has the file size I would expect. In versions 4.28.1, 4.26.0, and 4.25.1, the file tries to save in ```.cache\\huggingface\\hub\\models--openai--clip-vit-large-patch14```. The docs say that it should be saved in the transformers folder in the huggingface directory. This is what I found weird. The docs didn't match up with what I was seeing. I also couldn't change the ```.cache``` directory using the environment variable ```HF_HOME```. Maybe I'm using the wrong hf version?\r\n\r\nEdit: I actually think I didn't press apply, so maybe that's why the cache location didn't change." ]
1,678
1,682
1,682
NONE
null
### System Info WARNING:tensorflow:From D:\Stable Diffusion\SD New\stable-diffusion-webui\venv\lib\site-packages\transformers\commands\env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2023-03-11 10:21:20.969146: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.25.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.7 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> Yes - Using distributed or parallel set-up in script?: <fill in> I don't know ### Who can help? @amyeroberts @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This is what I placed in the Python shell. ```pycon Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import transformers >>> from transformers import CLIPTokenizer >>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "D:\Stable Diffusion\SD New\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1785, in from_pretrained raise EnvironmentError( OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer. ``` Other than that, I'm also getting the same issue in the same place just with a longer stack trace in the AUTOMATIC1111 webui and the kohya scripts. Also, in case this helps, "models--openai--clip-vit-large-patch14" does get created during the running of the scripts, but it is empty. Edit: With the webui, when I set the version to 4.19.2 it works, except for the dreambooth extension. Didn't test with the kohya script and version 4.19.2. ### Expected behavior I would expect that the Tokenizer would load properly and that the scripts I was running would work.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22107/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22106
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22106/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22106/comments
https://api.github.com/repos/huggingface/transformers/issues/22106/events
https://github.com/huggingface/transformers/issues/22106
1,620,015,241
I_kwDOCUB6oc5gj3iJ
22,106
[i18n-<languageCode>] Translating docs to <languageName>
{ "login": "dhckdduf", "id": 127166317, "node_id": "U_kgDOB5RnbQ", "avatar_url": "https://avatars.githubusercontent.com/u/127166317?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhckdduf", "html_url": "https://github.com/dhckdduf", "followers_url": "https://api.github.com/users/dhckdduf/followers", "following_url": "https://api.github.com/users/dhckdduf/following{/other_user}", "gists_url": "https://api.github.com/users/dhckdduf/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhckdduf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhckdduf/subscriptions", "organizations_url": "https://api.github.com/users/dhckdduf/orgs", "repos_url": "https://api.github.com/users/dhckdduf/repos", "events_url": "https://api.github.com/users/dhckdduf/events{/privacy}", "received_events_url": "https://api.github.com/users/dhckdduf/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[]
1,678
1,678
1,678
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through) - [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx). 
## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22106/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22105
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22105/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22105/comments
https://api.github.com/repos/huggingface/transformers/issues/22105/events
https://github.com/huggingface/transformers/pull/22105
1,619,963,791
PR_kwDOCUB6oc5L0Wfr
22,105
[WIP] Refactor Deberta/Deberta-v2
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22105). All of your documentation changes will be reflected on that endpoint.", "Hey @ArthurZucker any updates on this? ETA for when it will be merged into main?", "Hey! Just got back from holidays, this should be my main focus in the coming days! ", "Sorry! Seem like I had to postpone this! If anyone want to take over feel free to do it, otherwise will be my priority once https://github.com/huggingface/transformers/pull/23909 is merge!", "Regarding the `z_steps` in `DebertaV2Model`: I think this code is relevant for the [enhanced mask decoder of the generator model](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/masked_language_model.py#L51)\r\n\r\n```python\r\nif attention_mask.dim() <= 2:\r\n extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)\r\n att_mask = extended_attention_mask.byte()\r\n attention_mask = att_mask * att_mask.squeeze(-2).unsqueeze(-1)\r\nelif attention_mask.dim() == 3:\r\n attention_mask = attention_mask.unsqueeze(1)\r\ntarget_mask = target_ids > 0\r\nhidden_states = encoder_layers[-2]\r\nif not self.position_biased_input:\r\n layers = [encoder.layer[-1] for _ in range(2)]\r\n z_states += hidden_states\r\n query_states = z_states\r\n query_mask = attention_mask\r\n outputs = []\r\n rel_embeddings = encoder.get_rel_embedding()\r\n\r\n for layer in layers:\r\n # TODO: pass relative pos ids\r\n output = layer(hidden_states, query_mask, return_att=False, query_states=query_states,\r\n relative_pos=relative_pos, rel_embeddings=rel_embeddings)\r\n query_states = output\r\n outputs.append(query_states)\r\nelse:\r\n outputs = [encoder_layers[-1]]\r\n```\r\n\r\nAs far as I can tell, they hardcoded z_steps to 2 here. Although it should still be left as 0 for the discriminator. Adding the z_steps to the config seems like a good idea. 
\r\n\r\n`z_states` represents [the position embeddings](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/masked_language_model.py#L111), which are non-zero if `position_biased_input` is set to `True`. They are passed from the [embedding layer](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/bert.py#L269). So in order to properly implement this, I think we need to return the position embeddings here:\r\n\r\n```python\r\nclass DebertaV2Embeddings(nn.Module):\r\n def forward(self, input_ids=None, token_type_ids=None, position_ids=None, mask=None, inputs_embeds=None):\r\n ...\r\n\r\n return embeddings, position_embeddings\r\n```\r\n\r\nand implement the `z_steps` like this:\r\n\r\n```python\r\nclass DebertaV2Model(DebertaV2PreTrainedModel):\r\n def forward(\r\n self,\r\n input_ids: Optional[torch.Tensor] = None,\r\n attention_mask: Optional[torch.Tensor] = None,\r\n token_type_ids: Optional[torch.Tensor] = None,\r\n position_ids: Optional[torch.Tensor] = None,\r\n inputs_embeds: Optional[torch.Tensor] = None,\r\n output_attentions: Optional[bool] = None,\r\n output_hidden_states: Optional[bool] = None,\r\n return_dict: Optional[bool] = None,\r\n ) -> Union[Tuple, BaseModelOutput]:\r\n ...\r\n\r\n embedding_output, position_embedding_output = self.embeddings(\r\n input_ids=input_ids,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n mask=attention_mask,\r\n inputs_embeds=inputs_embeds,\r\n )\r\n ...\r\n\r\n if self.z_steps > 0:\r\n hidden_states = encoded_layers[-2]\r\n layers = [self.encoder.layer[-1] for _ in range(self.z_steps)]\r\n position_embedding_output += hidden_states\r\n query_states = position_embedding_output\r\n query_mask = self.encoder.get_attention_mask(attention_mask)\r\n rel_embeddings = self.encoder.get_rel_embedding()\r\n rel_pos = self.encoder.get_rel_pos(embedding_output)\r\n for layer in layers:\r\n query_states = layer(\r\n hidden_states,\r\n query_mask,\r\n output_attentions=False,\r\n 
query_states=query_states,\r\n relative_pos=rel_pos,\r\n rel_embeddings=rel_embeddings,\r\n )\r\n encoded_layers = encoded_layers + (query_states,)\r\n```", "What is the status?\r\nThe logs of the checks are expired.", "#27734 should help with some of the issues in the mean time" ]
1,678
1,707
null
COLLABORATOR
null
# What does this PR do? Refactor both Deberta and DebertaV2 to make them more compatible with the overall transformers library Should fix a bunch of issues related to torch-scripting with Deberta: - #15216 - #15673 - #16456 - #18659 - #21300 - #20815 - #12436 - #18674 - help supporting the Prefix_Tuning PEFT approach
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22105/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22105", "html_url": "https://github.com/huggingface/transformers/pull/22105", "diff_url": "https://github.com/huggingface/transformers/pull/22105.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22105.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22104
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22104/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22104/comments
https://api.github.com/repos/huggingface/transformers/issues/22104/events
https://github.com/huggingface/transformers/pull/22104
1,619,931,715
PR_kwDOCUB6oc5L0Qou
22,104
[Time-Series] time series patching, PatchTST
{ "login": "elisim", "id": 17675462, "node_id": "MDQ6VXNlcjE3Njc1NDYy", "avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elisim", "html_url": "https://github.com/elisim", "followers_url": "https://api.github.com/users/elisim/followers", "following_url": "https://api.github.com/users/elisim/following{/other_user}", "gists_url": "https://api.github.com/users/elisim/gists{/gist_id}", "starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elisim/subscriptions", "organizations_url": "https://api.github.com/users/elisim/orgs", "repos_url": "https://api.github.com/users/elisim/repos", "events_url": "https://api.github.com/users/elisim/events{/privacy}", "received_events_url": "https://api.github.com/users/elisim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "comment", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "comment", "PatchTST is in Gluon, thanks to @kashif . Closing here :) https://github.com/awslabs/gluonts/pull/2748" ]
1,678
1,687
1,686
CONTRIBUTOR
null
This PR added a time series patching - PatchTST Fixes https://github.com/huggingface/transformers/issues/22075 @kashif Kashif impl in gluonTS: https://github.com/awslabs/gluonts/pull/2748
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22104/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22104", "html_url": "https://github.com/huggingface/transformers/pull/22104", "diff_url": "https://github.com/huggingface/transformers/pull/22104.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22104.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22103
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22103/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22103/comments
https://api.github.com/repos/huggingface/transformers/issues/22103/events
https://github.com/huggingface/transformers/issues/22103
1,619,921,233
I_kwDOCUB6oc5gjglR
22,103
FLAVA not doing a forward pass
{ "login": "amariucaitheodor", "id": 32778667, "node_id": "MDQ6VXNlcjMyNzc4NjY3", "avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amariucaitheodor", "html_url": "https://github.com/amariucaitheodor", "followers_url": "https://api.github.com/users/amariucaitheodor/followers", "following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}", "gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}", "starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions", "organizations_url": "https://api.github.com/users/amariucaitheodor/orgs", "repos_url": "https://api.github.com/users/amariucaitheodor/repos", "events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}", "received_events_url": "https://api.github.com/users/amariucaitheodor/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Hi @amariucaitheodor . Thank you for reporting the issue!\r\n\r\nCould you also copy-paste the error (traceback) you got to your above PR description? Thanks.", "I tried the colab and found the issue. Specifically, the code which is used for calculating input_ids and input_ids_masked is incorrect as the `torch_mask_tokens` [function](https://github.com/huggingface/transformers/blob/a096eaca6554173ecd4c016eb2b10b8e0b2cb245/src/transformers/data/data_collator.py#L748) returns modified input_ids with masking and the corresponding labels. Since the loss is only calculated on the masked tokens, other tokens are set to -100 in the labels. This causes an \"index out of range\" error down the line in the embeddings' forward.", "Thank you for the reply! I had noticed the same problem. \r\nWhat is then the *correct* way of calculating `input_ids_masked`? The code doesn't work with `DataCollatorForLanguageModeling` for the reasons mentioned above, and there is no other example for doing this.", "Thank you @amariucaitheodor for providing the error log, and thanks @apsdehal for sharing your finding. I will take a look on this issue. But @apsdehal , don't hesitate to share if you have any idea regarding the correct solution ❤️ \r\n", "Hello! After looking into the issue with the notebook, here is my finding:\r\n\r\n- `data_collator.torch_mask_tokens(inputs=inputs['input_ids'], ....)` return two items\r\n - the first item is the input ids being masked\r\n - the second item indicates:\r\n - if a place has value `-100`: it means that places is not masked\r\n - otherwise, it gives the original value of that place in `inputs`\r\n- The `FlavaForPreTraining` model expect `input_ids_masked` to be the masked inputs, which is the first item prepared above. 
See https://github.com/huggingface/transformers/blob/f7329751fe5c43365751951502c00df5a4654359/src/transformers/models/flava/modeling_flava.py#L803-L805\r\n- However, in the notebook, you do\r\n ```python\r\n inputs['input_ids'], inputs['input_ids_masked'] = data_collator.torch_mask_tokens(...)\r\n ```\r\n which cause `inputs['input_ids_masked']` to be the 2nd item return ed by `torch_mask_tokens` which is incorrect. In particularly, it contains `-100`, which causes the error. Furthermore, `inputs['input_ids']` is also the wrong value, but it doesn't cause the program to crash.\r\n\r\n**The solution is just to prepare the correct inputs for the model**:\r\n\r\n```python\r\ninputs['input_ids_masked'], _ = data_collator.torch_mask_tokens(\r\n inputs=inputs['input_ids'],\r\n special_tokens_mask=inputs['special_tokens_mask']\r\n)\r\n```\r\n\r\nWith this change, I get `loss: 7.162976264953613`.\r\n\r\nLet me know if you have further question 🤗 \r\n", "@ydshieh I don't think this is also correct as `torch_mask_tokens` masks the `input_ids` in place so you will have to clone the `input_ids` before passing them to it.", "@apsdehal Thanks a lot, nice catch! You are 100% correct.\r\n@amariucaitheodor Please see this comment too!", "As it turns out that this is not an issue in modeling code in `transformers`, but the wrong preparation of model inputs, I move forward to close the issue.\r\n\r\n@amariucaitheodor If you still have issues, you can post on [Hugging Face Forums](https://discuss.huggingface.co/).\r\n\r\nHowever, if you find other issue(s) you believe that is/are in modeling code, feel free to continue to leave comments here." ]
1,678
1,678
1,678
NONE
null
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> **N.B. I do have PyTorch installed, I'm not sure why the tool can't find it:** ``` python -c "import torch; print(torch.__version__)" 2.1.0.dev20230310 ``` ### Who can help? @apsdehal ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior ([also a Colab notebook doing this](https://colab.research.google.com/drive/12f_1jtgJXvk-LT49pWCyUToF6ew70Cel?usp=sharing)): 1. Get a datapoint for a forward pass (`fetch_images` is in the notebook above): ``` pmd = datasets.load_dataset("facebook/pmd", "wit", use_auth_token=True, streaming=True) pmd_train_head = pmd['train'].take(2) pmd_train_head_with_images = pmd_train_head.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": 20}) datapoint = next(iter(pmd_train_head_with_images)) ``` 2. Process the input: ``` from transformers import FlavaProcessor, FlavaForPreTraining processor = FlavaProcessor.from_pretrained("facebook/flava-full") inputs = processor( text=[datapoint['text']], images=[datapoint['image']], return_tensors="pt", padding="max_length", max_length=77, return_codebook_pixels=True, return_image_mask=True, return_attention_mask=True, return_token_type_ids=True, return_special_tokens_mask=True, ) inputs.bool_masked_pos ``` 3. 
Mask the text input for MLM: ``` from transformers import DataCollatorForLanguageModeling, AutoTokenizer data_collator = DataCollatorForLanguageModeling(processor.tokenizer, mlm=True, mlm_probability=0.4, return_tensors="pt") inputs['input_ids'], inputs['input_ids_masked'] = data_collator.torch_mask_tokens(inputs=inputs['input_ids'], special_tokens_mask=inputs['special_tokens_mask']) del inputs['special_tokens_mask'] ``` 4. Do a forward pass: ``` model = FlavaForPreTraining.from_pretrained("facebook/flava-full") outputs = model(**inputs) loss = outputs.loss print(f"loss: {loss}") ``` ### Expected behavior I would expect the forward pass to not throw errors. ### Actual behavior ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-14-b821d73f49e9> in <module> 1 model = FlavaForPreTraining.from_pretrained("facebook/flava-full") 2 ----> 3 outputs = model(**inputs) --------------------------------------------------------------------------- /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/transformers/models/flava/modeling_flava.py in forward(self, input_ids, input_ids_masked, pixel_values, codebook_pixel_values, attention_mask, token_type_ids, bool_masked_pos, position_ids, image_attention_mask, skip_unmasked_multimodal_encoder, mlm_labels, mim_labels, itm_labels, output_attentions, output_hidden_states, return_dict, return_loss) 1857 ) 1858 -> 1859 flava_masked_output = self.flava( 1860 input_ids=input_ids_masked, 1861 pixel_values=pixel_values, 
/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/transformers/models/flava/modeling_flava.py in forward(self, input_ids, pixel_values, attention_mask, token_type_ids, bool_masked_pos, position_ids, image_attention_mask, skip_multimodal_encoder, output_attentions, output_hidden_states, return_dict) 1403 text_output = None 1404 if input_ids is not None: -> 1405 text_output = self.text_model( 1406 input_ids=input_ids, 1407 attention_mask=attention_mask, /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/transformers/models/flava/modeling_flava.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, output_attentions, output_hidden_states, return_dict) 1061 ) 1062 -> 1063 embedding_output = self.embeddings( 1064 input_ids=input_ids, 1065 token_type_ids=token_type_ids, /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is 
used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/transformers/models/flava/modeling_flava.py in forward(self, input_ids, token_type_ids, position_ids) 417 token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) 418 --> 419 inputs_embeds = self.word_embeddings(input_ids) 420 token_type_embeddings = self.token_type_embeddings(token_type_ids) 421 /usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1193 or _global_forward_hooks or _global_forward_pre_hooks): -> 1194 return forward_call(*input, **kwargs) 1195 # Do not call functions when jit is used 1196 full_backward_hooks, non_full_backward_hooks = [], [] /usr/local/lib/python3.9/dist-packages/torch/nn/modules/sparse.py in forward(self, input) 158 159 def forward(self, input: Tensor) -> Tensor: --> 160 return F.embedding( 161 input, self.weight, self.padding_idx, self.max_norm, 162 self.norm_type, self.scale_grad_by_freq, self.sparse) /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2208 # remove once script supports set_grad_enabled 2209 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2210 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2211 2212 IndexError: index out of range in self ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22103/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22102
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22102/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22102/comments
https://api.github.com/repos/huggingface/transformers/issues/22102/events
https://github.com/huggingface/transformers/pull/22102
1,619,918,093
PR_kwDOCUB6oc5L0OGb
22,102
[neptune] fix checkpoint bug with relative out_dir
{ "login": "kshitij12345", "id": 19503980, "node_id": "MDQ6VXNlcjE5NTAzOTgw", "avatar_url": "https://avatars.githubusercontent.com/u/19503980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kshitij12345", "html_url": "https://github.com/kshitij12345", "followers_url": "https://api.github.com/users/kshitij12345/followers", "following_url": "https://api.github.com/users/kshitij12345/following{/other_user}", "gists_url": "https://api.github.com/users/kshitij12345/gists{/gist_id}", "starred_url": "https://api.github.com/users/kshitij12345/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kshitij12345/subscriptions", "organizations_url": "https://api.github.com/users/kshitij12345/orgs", "repos_url": "https://api.github.com/users/kshitij12345/repos", "events_url": "https://api.github.com/users/kshitij12345/events{/privacy}", "received_events_url": "https://api.github.com/users/kshitij12345/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger Thanks for taking a look. Will update the PR description in sometime (probably today or early tomorrow) to have more information. Will ping once that is done.", "@sgugger have updated the PR description. Let me know if that gives enough context. Thank you :)", "@sgugger Thanks for the review. @AleksanderWWW is out for this week so we will update the PR once he is back (as he has more context on update for latest neptune version support).\r\n\r\nThank you :)!", "Thanks! I believe now all the failing checks will be solved once you rebase your PR on the main branch.", "> Thanks! I believe now all the failing checks will be solved once you rebase your PR on the main branch.\r\n\r\n@sgugger Thank you for your support" ]
1,678
1,679
1,679
CONTRIBUTOR
null
# What does this PR do? It takes care of the following: * Updates to NeptuneCallback aligned with Neptune 1.0 release : https://docs.neptune.ai/setup/neptune-client_1-0_release_changes/ * It also fixes the case where we silently don't log the model_checkpoints when `output_dir` argument has a relative path of the form `../models`. Ref to the relevant lines of code: https://github.com/huggingface/transformers/blob/3be0e6e4a367dadb453ac31dad46fb665dc28b42/src/transformers/integrations.py#L1352-L1354 https://github.com/huggingface/transformers/blob/3be0e6e4a367dadb453ac31dad46fb665dc28b42/src/transformers/integrations.py#L1296-L1305 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. 
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22102/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22102", "html_url": "https://github.com/huggingface/transformers/pull/22102", "diff_url": "https://github.com/huggingface/transformers/pull/22102.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22102.patch", "merged_at": 1679943617000 }
https://api.github.com/repos/huggingface/transformers/issues/22101
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22101/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22101/comments
https://api.github.com/repos/huggingface/transformers/issues/22101/events
https://github.com/huggingface/transformers/issues/22101
1,619,885,644
I_kwDOCUB6oc5gjX5M
22,101
[Benchmark] HF Trainer optimizers (Mar-2023)
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2604155188, "node_id": "MDU6TGFiZWwyNjA0MTU1MTg4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Benchmarks", "name": "Benchmarks", "color": "2DF372", "default": false, "description": "Issues related to Memory regressions in tests and scripts" }, { "id": 2690307185, "node_id": "MDU6TGFiZWwyNjkwMzA3MTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Performance", "name": "Performance", "color": "207F32", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Could you add some Lion benchmarks?", "It's not in the HF Trainer's arsenal of optimizers, if you'd like to make a PR to integrate it then it can be done." ]
1,678
1,683
1,682
CONTRIBUTOR
null
This is a rerun of Adam torch vs. apex vs HF vs adafactor [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1005219385), [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005220263) but added BNB's 8bit Adam optimizer and probably the software has improved/changed since 14 months as well. note: [8-bit Optimizer](https://github.com/TimDettmers/bitsandbytes) Actually this time it was run on a desktop PCIe 80GB A100 - so not the same hardware as the previous benchmark which was an SXM 40GB A100. I'm using the specially written [HF Trainer benchmarking tool](https://github.com/huggingface/transformers/blob/main/scripts/benchmark/trainer-benchmark.py) that I developed specifically to make such benchmarks trivial to run and automatically get report tables. So I'm running: ``` CUDA_VISIBLE_DEVICES=0 python scripts/benchmark/trainer-benchmark.py --base-cmd ' \ examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \ --do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 32 \ --max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \ --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \ --source_prefix "translate English to Romanian: " --warmup_steps 50 \ --max_train_samples 20000 --dataloader_num_workers 2 \ ' --target-metric-key train_samples_per_second --repeat-times 1 --variations '--optim adamw_torch|--optim adamw_bnb_8bit|--optim adamw_hf|--optim adafactor|--optim adamw_apex_fused' --report-metric-keys train_loss --base-variation '--optim adamw_torch' ``` You can see that I'm telling the tool to compare 5 optimizers: `adamw_torch`, `adamw_bnb_8bit`, `adamw_hf`, `adafactor`, `adamw_apex_fused`. 
**Memory usage wise we have per parameter:** - 2 bytes: `adamw_bnb_8bit` - 4 bytes: `adafactor` - 8 bytes: `adamw_torch`, `adamw_hf`, `adamw_apex_fused` *** Setup When publishing benchmarks it's crucial to log the versions that were used while running those, so here we go: ``` Datetime : 2023-03-10 20:55:38 Software: transformers: 4.27.0.dev0 torch : 1.13.1 cuda : 11.7 python : 3.8.15 Hardware: 1 GPUs : NVIDIA A100 80GB PCIe, 79.21GB ``` *** Results Last year's benchmark showed that the speed ups percentage was about the same between fp16/bf16/fp32. Let's see what this year brings plus a new optimizer. ### FP32 | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:-------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch | 102.77 | 0 | 2.21 | | --optim adamw_bnb_8bit | 104.99 | 2 | 2.15 | | --optim adamw_hf | 103.64 | 1 | 2.21 | | --optim adafactor | 97.22 | -5 | 2.21 | | --optim adamw_apex_fused | 106.12 | 3 | 2.21 | Observations: - The results are very different from the previous year's benchmark. While Adafactor is still the slowest, the rest are pretty close. - Very surprisingly the quantized 8-bit BNB Adam optimizer is faster than pytorch's 8-byte Adam optimizer! While it uses 1/4th of the memory of the latter! And its loss is even better! ### BF16 (added `--bf16` to the base command line) | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:-------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch | 323.18 | 0 | 2.22 | | --optim adamw_bnb_8bit | 348.29 | 8 | 2.16 | | --optim adamw_hf | 333.07 | 3 | 2.22 | | --optim adafactor | 274.36 | -15 | 2.22 | | --optim adamw_apex_fused | 359.46 | 11 | 2.22 | Observations: - Again BNB beats every other optimizer at loss, while being only second to apex in speed. 
### FP16 (added `--fp16` to the base command line) | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:-------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch | 370.09 | 0 | 2.55 | | --optim adamw_bnb_8bit | 383.21 | 4 | 2.45 | | --optim adamw_hf | 373.66 | 1 | 2.55 | | --optim adafactor | 356.84 | -4 | 2.53 | | --optim adamw_apex_fused | 380.50 | 3 | 2.55 | Observations: - Here BNB even managed to beat apex. But since I run each only once it's possible that re-running multiple times might show a slightly different outcome. - Somehow BF16 appears to be slower than fp16 but it gives a much better loss (same loss as fp32). I wonder why?! ### new addition! `--adamw_torch_fused` edit: we added `--adamw_torch_fused` to HF Trainer, which runs almost as fast as `--adamw_apex_fused` - this option requires `torch>=2.0` for fp32 and bf16, and `torch>2.0` for fp16 as some bugs were fixed in `torch==2.0` e.g. here is fp16 comparison: | Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | |:--------------------------|------------------------------------:|------------:|----------------:| | --optim adamw_torch_fused | 387.10 | 3 | 2.66 | | --optim adamw_torch | 377.61 | 0 | 2.66 | | --optim adamw_apex_fused | 389.49 | 3 | 2.66 |
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22101/reactions", "total_count": 19, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 10, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22101/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22100
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22100/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22100/comments
https://api.github.com/repos/huggingface/transformers/issues/22100/events
https://github.com/huggingface/transformers/issues/22100
1,619,879,273
I_kwDOCUB6oc5gjWVp
22,100
Transformers cannot recognise `config.json` even though it is in model directory
{ "login": "constantinethegr8", "id": 117062959, "node_id": "U_kgDOBvo9Lw", "avatar_url": "https://avatars.githubusercontent.com/u/117062959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/constantinethegr8", "html_url": "https://github.com/constantinethegr8", "followers_url": "https://api.github.com/users/constantinethegr8/followers", "following_url": "https://api.github.com/users/constantinethegr8/following{/other_user}", "gists_url": "https://api.github.com/users/constantinethegr8/gists{/gist_id}", "starred_url": "https://api.github.com/users/constantinethegr8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/constantinethegr8/subscriptions", "organizations_url": "https://api.github.com/users/constantinethegr8/orgs", "repos_url": "https://api.github.com/users/constantinethegr8/repos", "events_url": "https://api.github.com/users/constantinethegr8/events{/privacy}", "received_events_url": "https://api.github.com/users/constantinethegr8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`model = AutoModelForCausalLM.from_pretrained(\"models--gptjchatbot_model\")` cannot work, you need to specify the whole path to the folder on your local setup.", "Modified line 8\r\n`model = AutoModelForCausalLM.from_pretrained(\"G:\\.cache\\models--gptjchatbot_model\\\")`\r\n\r\ngot this error\r\n\r\n```\r\n File \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\gptjchatbot.py\", line 8\r\n model = AutoModelForCausalLM.from_pretrained(\"G:\\.cache\\models--gptjchatbot_model\\\")\r\n ^\r\nSyntaxError: unterminated string literal (detected at line 8)\r\n```", "@constantinethegr8 This is a Python syntax error resulting from the `\\` at the end of the path. The following should work: \r\n`model = AutoModelForCausalLM.from_pretrained(\"G:\\.cache\\models--gptjchatbot_model\")`", "I used this line:\r\n```\r\nmodel = AutoModelForCausalLM.from_pretrained(\"G:\\.cache\\models--gpt4chan_model\")\r\n```\r\nlike you said but got this new error about dependencies in the conda environment\r\n```\r\nTraceback (most recent call last):\r\n File \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\gptjchatbot.py.py\", line 8, in <module>\r\n model = AutoModelForCausalLM.from_pretrained(\"G:\\.cache\\models--gptjchatbot_model\")\r\n File \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 434, in from_pretrained\r\n config, kwargs = AutoConfig.from_pretrained(\r\n File \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\lib\\site-packages\\transformers\\models\\auto\\configuration_auto.py\", line 874, in from_pretrained\r\n return config_class.from_dict(config_dict, **unused_kwargs)\r\n File \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\lib\\site-packages\\transformers\\configuration_utils.py\", line 688, in from_dict\r\n config = cls(**config_dict)\r\n File \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\lib\\site-packages\\transformers\\models\\gptj\\configuration_gptj.py\", line 139, in __init__\r\n 
super().__init__(\r\n File \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\lib\\site-packages\\transformers\\configuration_utils.py\", line 332, in __init__\r\n import torch\r\n File \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\lib\\site-packages\\torch\\__init__.py\", line 128, in <module>\r\n raise err\r\nOSError: [WinError 182] The operating system cannot run %1. Error loading \"G:\\pytorch_model.bin_float32_(cpu)\\conda-llm\\lib\\site-packages\\torch\\lib\\shm.dll\" or one of its dependencies.\r\n```", "@constantinethegr8 The new error being raised is showing that `torch` cannot be imported, which isn't a transformers issue. I suggest trying to reinstall torch in your environment. You can check whether torch is importable and its version by running `python -c \"import torch; print(torch.__version__)\"` in the terminal. ", "so i should not `import torch as pytorch`?", "and I got torch version 1.13.1", "Hi @constantinethegr8, \r\n\r\n> so i should not `import torch as pytorch`?\r\n\r\n`import x as y` is just a renaming of the module `x` to `y` in the scope. Although it is generally advised against importing with non-canonical names, there's nothing stopping you and I doubt it is the cause of the issue here. \r\n\r\n> and I got torch version 1.13.1\r\n\r\nOK, this means torch is installed and in your path. As you mentioned, there's likely some misconfiguration in the conda environment between the dependencies. This isn't a transformers issue. I would recommend creating a new conda environment or searching online to see if there are other people who have encountered the same error message and the solutions they have found. ", "Thank you. Should I avoid using a conda environment then as I have used regular python with transformers?", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,682
1,682
NONE
null
I ran this code: ``` import os os.environ['TRANSFORMERS_CACHE'] = 'G:\.cache' from transformers import AutoModelForCausalLM, AutoTokenizer, PretrainedConfig #config = PretrainedConfig('G:\.cache\models--gptjchatbot_model\config.json') tried this but doesn't work well model = AutoModelForCausalLM.from_pretrained("models--gptjchatbot_model") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") prompt = ( "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " "previously unexplored valley, in the Andes Mountains. Even more surprising to the " "researchers was the fact that the unicorns spoke perfect English." ) input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.8, top_p=0.9, max_length=100, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] ``` for one of my finetuned custom models and I got this debug prompt (and I'm running a virtual python environment) ``` Traceback (most recent call last): File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\configuration_utils.py", line 620, in _get_config_dict resolved_config_file = cached_file( File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file resolved_file = hf_hub_download( File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\huggingface_hub\utils\_validators.py", line 112, in _inner_fn validate_repo_id(arg_value) File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\huggingface_hub\utils\_validators.py", line 173, in validate_repo_id raise HFValidationError(f"Cannot have -- or .. in repo_id: '{repo_id}'.") huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'models--gptjchatbot_model'. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\gptjchatbot.py", line 8, in <module> model = AutoModelForCausalLM.from_pretrained("models--gptjchatbot_model") File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\models\auto\auto_factory.py", line 434, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\models\auto\configuration_auto.py", line 852, in from_pretrained config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\configuration_utils.py", line 565, in get_config_dict config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs) File "G:\pytorch_model.bin_float32_(cpu)\conda-llm\lib\site-packages\transformers\configuration_utils.py", line 641, in _get_config_dict raise EnvironmentError( OSError: Can't load the configuration of 'models--gptjchatbot_model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'models--gptjchatbot_model' is the correct path to a directory containing a config.json file ``` I ran an Anaconda environment on a separate drive and went through the work of changing the cache directory because I have no space on `C:`. Is there a way for config.json to be recognized. It is in the actual folder and I even tried making a subdirectory called `config`. Please help. Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22100/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22099
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22099/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22099/comments
https://api.github.com/repos/huggingface/transformers/issues/22099/events
https://github.com/huggingface/transformers/pull/22099
1,619,877,067
PR_kwDOCUB6oc5L0GOM
22,099
[deepspeed docs] Activation Checkpointing
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
Add a section on Activation Checkpointing. Even though we don't support the Deepspeed Activation Checkpointing API, we nevertheless document it and clarify what's what to help the user achieve clarity and make the right choices (and not file Issues ;)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22099/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22099", "html_url": "https://github.com/huggingface/transformers/pull/22099", "diff_url": "https://github.com/huggingface/transformers/pull/22099.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22099.patch", "merged_at": 1678737162000 }
https://api.github.com/repos/huggingface/transformers/issues/22098
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22098/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22098/comments
https://api.github.com/repos/huggingface/transformers/issues/22098/events
https://github.com/huggingface/transformers/pull/22098
1,619,796,207
PR_kwDOCUB6oc5Lz1je
22,098
[trainer] fix bug in grad accum with multiple epochs
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
CONTRIBUTOR
null
Please see https://github.com/huggingface/transformers/issues/22082 for the analysis printout of the problem. But basically we have a bug in grad accum machinery when `steps_in_epoch % gradient_accumulation_steps != 0` we always check for `step+1 % gradient_accumulation_steps != 0` and when we hit the epoch boundary we end up running more than `gradient_accumulation_steps` in that iteration. I proposed a fix using a total step counter - please feel free to suggest a different fix. I left the debug prints if you'd like to validate the situation yourself. will remove when happy. Fixes: https://github.com/huggingface/transformers/issues/22082
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22098/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22098", "html_url": "https://github.com/huggingface/transformers/pull/22098", "diff_url": "https://github.com/huggingface/transformers/pull/22098.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22098.patch", "merged_at": 1678737101000 }
https://api.github.com/repos/huggingface/transformers/issues/22097
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22097/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22097/comments
https://api.github.com/repos/huggingface/transformers/issues/22097/events
https://github.com/huggingface/transformers/pull/22097
1,619,707,817
PR_kwDOCUB6oc5Lzi9r
22,097
t5 remove data dependency
{ "login": "prathikr", "id": 31260940, "node_id": "MDQ6VXNlcjMxMjYwOTQw", "avatar_url": "https://avatars.githubusercontent.com/u/31260940?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prathikr", "html_url": "https://github.com/prathikr", "followers_url": "https://api.github.com/users/prathikr/followers", "following_url": "https://api.github.com/users/prathikr/following{/other_user}", "gists_url": "https://api.github.com/users/prathikr/gists{/gist_id}", "starred_url": "https://api.github.com/users/prathikr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prathikr/subscriptions", "organizations_url": "https://api.github.com/users/prathikr/orgs", "repos_url": "https://api.github.com/users/prathikr/repos", "events_url": "https://api.github.com/users/prathikr/events{/privacy}", "received_events_url": "https://api.github.com/users/prathikr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Cool! Thanks for this contribution! Pretty sure that this can also be applied to `SwitchTransformers` (as it implements the similar procedure) and ~`MT5`~ LongT5", "Let's maybe address this in a follow up PR no? Btw this PR includes the changes for `mt5` (EDIT: you meant `LongT5`)", "@prathikr Would you mind changing the two other models in this PR or would you prefer we followup in a separate PR?", "@sgugger I think a separate PR would be best. Thank you" ]
1,678
1,678
1,678
CONTRIBUTOR
null
# What does this PR do? Prevents clamping logic from being skipped by torch.onnx tracer by moving data-dependent inf check into fp16-specific training code. Unclamped inf results in NaN being returned during loss calculation and eventually will crash with the following error: ```bash Traceback (most recent call last): File "/home/prathikrao/optimum/examples/onnxruntime/training/translation/run_translation.py", line 680, in <module> main() File "/home/prathikrao/optimum/examples/onnxruntime/training/translation/run_translation.py", line 588, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 373, in train return inner_training_loop( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 658, in _inner_training_loop self.deepspeed.step() File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 2169, in step self._take_model_step(lr_kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 2071, in _take_model_step self.optimizer.step() File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1759, in step self._update_scale(self.overflow) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2016, in _update_scale self.loss_scaler.update_scale(has_overflow) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 156, in update_scale raise Exception( Exception: Current loss scale already at minimum - cannot decrease scale anymore. Exiting run. ``` ## Who can review? Models: - text models: @ArthurZucker and @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22097/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22097", "html_url": "https://github.com/huggingface/transformers/pull/22097", "diff_url": "https://github.com/huggingface/transformers/pull/22097.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22097.patch", "merged_at": 1678911075000 }
https://api.github.com/repos/huggingface/transformers/issues/22096
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22096/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22096/comments
https://api.github.com/repos/huggingface/transformers/issues/22096/events
https://github.com/huggingface/transformers/issues/22096
1,619,694,010
I_kwDOCUB6oc5gipG6
22,096
Use torch.TensorDicts: The output of tokenizers.batch_encode_plus/__call__ could be made to inherit from torch TensorDicts
{ "login": "JulesGM", "id": 3231217, "node_id": "MDQ6VXNlcjMyMzEyMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JulesGM", "html_url": "https://github.com/JulesGM", "followers_url": "https://api.github.com/users/JulesGM/followers", "following_url": "https://api.github.com/users/JulesGM/following{/other_user}", "gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}", "starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions", "organizations_url": "https://api.github.com/users/JulesGM/orgs", "repos_url": "https://api.github.com/users/JulesGM/repos", "events_url": "https://api.github.com/users/JulesGM/events{/privacy}", "received_events_url": "https://api.github.com/users/JulesGM/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The result of the tokenizer calls can already interact with the `to` method (note that batch_encode_plus will be deprecated sometime soon) but I agree it could be interesting to look at this! The main challenge I see is that it's not packaged in PyTorch main so would require an extra dep...", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,678
1,681
1,681
NONE
null
### Feature request Tensor dicts have recently come out as a way to manipulate dicts of tensors in a way that is analogous to pandas, e.g. to make it easy to work on columns of tensors that share some property and a batch dimension or set of dimensions https://pytorch.org/rl/tensordict/ An obvious application of this is the output of `tokenizer.batch_encode_plus`. ### Motivation Being able to do a bunch of things on all the subtensors at once would be cool, like `.to` and `.cat`, etc. Having a common interface with tensordict could be fun. ### Your contribution I don't have the bandwidth to handle this myself.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22096/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22095
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22095/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22095/comments
https://api.github.com/repos/huggingface/transformers/issues/22095/events
https://github.com/huggingface/transformers/pull/22095
1,619,666,068
PR_kwDOCUB6oc5LzaIA
22,095
Fix big model inference for T5 models in float16
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678
1,678
1,678
COLLABORATOR
null
# What does this PR do? This PR fixes big model inference for large T5 models. The problem is that T5 models have some weights kept in float32, which interferes with the computation of `infer_auto_device_map`. Accelerate adds the functionality to deal with this in [this PR](https://github.com/huggingface/accelerate/pull/1179), and a patch release will be out soon with the fix in a release. When this is done, this PR can be merged so the fix can be used. With this I can do ```py from transformers import T5ForConditionalGeneration, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained('google/flan-ul2') model = T5ForConditionalGeneration.from_pretrained('google/flan-ul2', device_map = 'auto', torch_dtype=torch.float16) input_string = 'Answer the following question by reasoning step by step. I start with 10 bananas. A monkey eats three of them, and then gives me an avocado. How many bananas do I have left?' inputs = tokenizer(input_string, return_tensors = 'pt').to('cuda:0') outputs = model.generate(inputs['input_ids'], max_length = 200) print(tokenizer.decode(outputs[0])) ``` whereas before this went OOM.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22095/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22095", "html_url": "https://github.com/huggingface/transformers/pull/22095", "diff_url": "https://github.com/huggingface/transformers/pull/22095.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22095.patch", "merged_at": 1678800017000 }
https://api.github.com/repos/huggingface/transformers/issues/22094
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22094/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22094/comments
https://api.github.com/repos/huggingface/transformers/issues/22094/events
https://github.com/huggingface/transformers/issues/22094
1,619,640,331
I_kwDOCUB6oc5gicAL
22,094
SearchSummarizationPipeline - 'object has no attribute' error
{ "login": "arun-ar", "id": 15058867, "node_id": "MDQ6VXNlcjE1MDU4ODY3", "avatar_url": "https://avatars.githubusercontent.com/u/15058867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arun-ar", "html_url": "https://github.com/arun-ar", "followers_url": "https://api.github.com/users/arun-ar/followers", "following_url": "https://api.github.com/users/arun-ar/following{/other_user}", "gists_url": "https://api.github.com/users/arun-ar/gists{/gist_id}", "starred_url": "https://api.github.com/users/arun-ar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arun-ar/subscriptions", "organizations_url": "https://api.github.com/users/arun-ar/orgs", "repos_url": "https://api.github.com/users/arun-ar/repos", "events_url": "https://api.github.com/users/arun-ar/events{/privacy}", "received_events_url": "https://api.github.com/users/arun-ar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,678
1,678
1,678
NONE
null
I am facing the below error while initializing SearchSummarizationPipeline. I am using latest haystack version and python 3.10. ``` pipe3 = SearchSummarizationPipeline(summarizer=summarizer,retriever=retriever,generate_single_summary=False,return_in_answer_format=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\wing\AppData\Local\Programs\Python\Python310\lib\site-packages\haystack\pipelines\standard_pipelines.py", line 424, in __init__ self.pipeline.add_node(component=summarizer, name="Summarizer", inputs=["Retriever"]) File "C:\Users\wing\AppData\Local\Programs\Python\Python310\lib\site-packages\haystack\pipelines\base.py", line 424, in add_node component_definitions[name] = component._component_config AttributeError: 'SummarizationPipeline' object has no attribute '_component_config' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22094/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22093
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22093/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22093/comments
https://api.github.com/repos/huggingface/transformers/issues/22093/events
https://github.com/huggingface/transformers/pull/22093
1,619,613,684
PR_kwDOCUB6oc5LzPPq
22,093
Revert "[GPT2] Propose fix for #21080"
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Tested locally - all good now. Will merge once CI is green too (already discussed offline)" ]
1,678
1,678
1,678
COLLABORATOR
null
Reverts huggingface/transformers#21853 A few PT/TF and PT/Flax cross tests started to fail after #21853. Revert that PR for now. We need to look what went wrong (and why it is not reported in the PR CI) ``` FAILED tests/models/gpt2/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_pt_tf_model_equivalence - AssertionError: 3.225274 not less than or equal to 1e-05 : outputs.last_hidden_state: Difference between torch and tf is 3.225274085998535 (>= 1e-05). FAILED tests/models/encoder_decoder/test_modeling_tf_encoder_decoder.py::TFGPT2EncoderDecoderModelTest::test_pt_tf_model_equivalence - AssertionError: 0.3753911 not less than or equal to 1e-05 : outputs.logits: Difference between torch and tf is 0.3753910958766937 (>= 1e-05). FAILED tests/models/vision_encoder_decoder/test_modeling_tf_vision_encoder_decoder.py::TFViT2GPT2EncoderDecoderModelTest::test_pt_tf_model_equivalence - AssertionError: 0.42845678 not less than or equal to 1e-05 : outputs.logits: Difference between torch and tf is 0.42845678329467773 (>= 1e-05). ``` A job run with failed tests is [here](https://app.circleci.com/pipelines/github/huggingface/transformers/59606/workflows/fd5cb028-f3df-4ac6-83ae-b0cc68396af8/jobs/728991)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22093/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22093", "html_url": "https://github.com/huggingface/transformers/pull/22093", "diff_url": "https://github.com/huggingface/transformers/pull/22093.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22093.patch", "merged_at": 1678482502000 }
https://api.github.com/repos/huggingface/transformers/issues/22092
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22092/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22092/comments
https://api.github.com/repos/huggingface/transformers/issues/22092/events
https://github.com/huggingface/transformers/issues/22092
1,619,603,433
I_kwDOCUB6oc5giS_p
22,092
New metrics and different Loss in TF version of Segformer
{ "login": "ebgoldstein", "id": 5330599, "node_id": "MDQ6VXNlcjUzMzA1OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5330599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ebgoldstein", "html_url": "https://github.com/ebgoldstein", "followers_url": "https://api.github.com/users/ebgoldstein/followers", "following_url": "https://api.github.com/users/ebgoldstein/following{/other_user}", "gists_url": "https://api.github.com/users/ebgoldstein/gists{/gist_id}", "starred_url": "https://api.github.com/users/ebgoldstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ebgoldstein/subscriptions", "organizations_url": "https://api.github.com/users/ebgoldstein/orgs", "repos_url": "https://api.github.com/users/ebgoldstein/repos", "events_url": "https://api.github.com/users/ebgoldstein/events{/privacy}", "received_events_url": "https://api.github.com/users/ebgoldstein/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Interesting! Can you document exactly what error you got with the compile step and what code you ran to cause them?", "Hi @Rocketknight1 \r\nI misremembered- the error is not after `model.compile()` - compiling a model with a different loss function, added metrics, a custom loss, or custom metrics all compile w/ no error. The errors appear with `model.fit()` .\r\n\r\nSo far I have tried to fit a model with a range of things :\r\n- a custom loss (my own version of Dice Loss)\r\n- added metrics (`tf.keras.metrics.MeanIoU()` and/or a (custom) Dice metric)\r\n- using KLDivergence loss (`tf.keras.losses.KLDivergence()`)\r\n\r\nAll produce errors during `model.fit()`, and all produce their own sets of errors.. All of them seem to me to be some type of tensorshape issue, but the Tracebacks are all different. To make sure its not just me, my colleague @dbuscombe-usgs has also tried, and also reported similar issues (with different datasets, different number of classes, different TF versions, different machines, etc.). I can provide a reference dataset and the scripts I am working with, if needed...", "Yes please! Ideally if you could give us some minimal code that reproduces the issue, that would make it much easier for us to track it down.\r\n\r\nAlso, sorry for the delay in replying here - I was away on St. Patrick's Day so I'm only getting to my GitHub backlog now!", "I have done it myself.  Lol to much green beer!\n\n\nSent from Yahoo Mail for iPhone\n\n\nOn Monday, March 20, 2023, 2:40 PM, Matt ***@***.***> wrote:\n\n\n\n\nYes please! Ideally if you could give us some minimal code that reproduces the issue, that would make it much easier for us to track it down.\n\nAlso, sorry for the delay in replying here - I was away on St. 
Patrick's Day so I'm only getting to my GitHub backlog now!\n\n—\nReply to this email directly, view it on GitHub, or unsubscribe.\nYou are receiving this because you are subscribed to this thread.Message ID: ***@***.***>\n\n\n\n", "Hi @Rocketknight1 , sorry for the delay.\r\n\r\nAttached below is some code and example image & label pairs all zipped up. Let me know if you prefer another format/delivery mechanism\r\n\r\n```\r\n|- TFSegFormerExample.py\r\nL ExampleData\r\n | - images\r\n L labels\r\n```\r\non L165 of the code is the compile step, and different versions of the model can be commented/uncommented to see the various error codes:\r\n\r\nL168 is the base case, where no loss function is defined - this works\r\nL171 defines SparseCatLoss - this does not work\r\nL174 defines KLD loss - this does not work\r\nL171 defines no loss but uses meanIoU as a metric - this does not work\r\n\r\n(These look like tensorshape issues to me, and typically i would debug it by looking at the last layer shape of `model.summary()`.. but the output of `model.summary()` for this model is not super expressive for this model, I'm not quite sure why - but maybe that is a whole different question) \r\n\r\n\r\n[TFSegformerExample.zip](https://github.com/huggingface/transformers/files/11056233/TFSegformerExample.zip)\r\n\r\n", "Ah, I see! The issue here is caused by some specific behaviour of the SegFormer models when using inputs of this resolution. The model outputs are actually at a lower resolution than the inputs - you can check this by manually passing in a batch. The output logits come out at 128x128, whereas the input is 512x512. 
This results in the loss computation failing because the logit and label tensors can't be aligned with each other.\r\n\r\nIf you use the model's [internal loss computation](https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/segformer/modeling_tf_segformer.py#L793-L811) by not passing any loss argument to `compile()`, then logits are upscaled before applying the cross-entropy loss and training works correctly. If you want to use your own custom loss function you'll have to do something similar.\r\n\r\nI'm not sure exactly why the output resolution for SegFormer is different from the input resolution, but it's not a bug in the Hugging Face TensorFlow implementation because the original model and our PyTorch implementation do this as well. @sayakpaul do you know why the model does that?", "thx for that code highlight @Rocketknight1 , super helpful and i understand it now -- i would need a similar upsampling routine.\r\n\r\nrelated Q to finding the output resolution - is there a reason that `summary()` does not provide info on all the layers/internal architecture of the model? ", "> @sayakpaul do you know why the model does that?\r\n\r\nIt's very likely because of how the model is designed and how it accumulates the multiple-resolution features and decodes them into a segmentation map. @NielsRogge might have better inputs. \r\n\r\n> i would need a similar upsampling routine.\r\n\r\nYou can check out [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb) that has this. 
\r\n\r\n```py\r\ndef compute_metrics(eval_pred):\r\n logits, labels = eval_pred\r\n # logits are of shape (batch_size, num_labels, height, width), so\r\n # we first transpose them to (batch_size, height, width, num_labels)\r\n logits = tf.transpose(logits, perm=[0, 2, 3, 1])\r\n # scale the logits to the size of the label\r\n logits_resized = tf.image.resize(\r\n logits,\r\n size=tf.shape(labels)[1:],\r\n method=\"bilinear\",\r\n )\r\n # compute the prediction labels and compute the metric\r\n pred_labels = tf.argmax(logits_resized, axis=-1)\r\n metrics = metric.compute(\r\n predictions=pred_labels,\r\n references=labels,\r\n num_labels=num_labels,\r\n ignore_index=-1,\r\n reduce_labels=image_processor.do_reduce_labels,\r\n )\r\n # add per category metrics as individual key-value pairs\r\n per_category_accuracy = metrics.pop(\"per_category_accuracy\").tolist()\r\n per_category_iou = metrics.pop(\"per_category_iou\").tolist()\r\n\r\n metrics.update(\r\n {f\"accuracy_{id2label[i]}\": v for i, v in enumerate(per_category_accuracy)}\r\n )\r\n metrics.update({f\"iou_{id2label[i]}\": v for i, v in enumerate(per_category_iou)})\r\n return {\"val_\" + k: v for k, v in metrics.items()}\r\n```\r\n\r\n> related Q to finding the output resolution - is there a reason that summary() does not provide info on all the layers/internal architecture of the model?\r\n\r\nThat is because we wrap everything as layers, and that has a limitation like this one. We do this to support cross-loading from PyTorch (because of variable naming). @Rocketknight1 might have more to add to this. \r\n\r\n", "Yeah, refactoring our TF models to make `summary()` more usable is absolutely on the list! Unfortunately it's quite a big list, but it's definitely there.", "Awesome, thanks so much for all the helpful info @Rocketknight1 & @sayakpaul . 
I can close this issue now as i understand the landscape much better and it seems the requested feature is already on your list!\r\nthanks again - i really appreciate it!" ]
1,678
1,680
1,680
NONE
null
### Feature request I am playing with the awesome [Segformer finetuning example on the Keras website](https://keras.io/examples/vision/segformer/) made by @sayakpaul that relies on HF Transformers. In this example no loss function nor any metrics are specified in `model.compile()` . I would like to be able to add metrics (e.g., IoU, Dice, etc), and potentially change the loss for the Segformer model. When I tried to make these additions, the compile step failed. (From reading the Segformer paper and original code it seems like all metrics and losses need to have some form of masking?). Any advice or info on how to implement these changes would be awesome (and i apologize in advance if I have missed the relevant docs (i did look!). (based on comms with @sayakpaul , i am also cc:ing @Rocketknight1 ) ### Motivation Track various metrics during the fine-tuning of the Segformer model. ### Your contribution I think once i understand the solution steps i would be able to determine if i could contribute
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22092/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22092/timeline
completed
null
null