| column | dtype | lengths / values |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M to 2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1 to 29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k to 1.71k |
| updated_at | int64 | 1.54k to 1.71k |
| closed_at | int64 | 1.54k to 1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0 to 234k |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
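The records below follow this schema, one field per line per record; `pull_request` holds a dict for pull requests and null for plain issues. As a rough illustration of working with such a dump (the data file name here is hypothetical), the `datasets` library can load and split it:

```python
from datasets import load_dataset

# Hypothetical file name for this dump; substitute the real path or Hub id.
ds = load_dataset("json", data_files="transformers_issues.jsonl", split="train")

# Pull requests carry a dict in `pull_request`; plain issues carry null/None.
issues = ds.filter(lambda row: row["pull_request"] is None)
prs = ds.filter(lambda row: row["pull_request"] is not None)
print(ds.column_names)
```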
https://api.github.com/repos/huggingface/transformers/issues/22694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22694/comments
https://api.github.com/repos/huggingface/transformers/issues/22694/events
https://github.com/huggingface/transformers/issues/22694
1,661,122,696
I_kwDOCUB6oc5jAriI
22,694
Training Evaluation Display on VSCode
{ "login": "sciencecw", "id": 10662708, "node_id": "MDQ6VXNlcjEwNjYyNzA4", "avatar_url": "https://avatars.githubusercontent.com/u/10662708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sciencecw", "html_url": "https://github.com/sciencecw", "followers_url": "https://api.github.com/users/sciencecw/followers", "following_url": "https://api.github.com/users/sciencecw/following{/other_user}", "gists_url": "https://api.github.com/users/sciencecw/gists{/gist_id}", "starred_url": "https://api.github.com/users/sciencecw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sciencecw/subscriptions", "organizations_url": "https://api.github.com/users/sciencecw/orgs", "repos_url": "https://api.github.com/users/sciencecw/repos", "events_url": "https://api.github.com/users/sciencecw/events{/privacy}", "received_events_url": "https://api.github.com/users/sciencecw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We had specifically excluded VSCode in the past as the widgets were not properly working there. Could you try to install from source and see if commenting out those [two lines](https://github.com/huggingface/transformers/blob/151425ddb29d4ad1a121e8cce62000a2ac52d3ba/src/transformers/utils/import_utils.py#L619) result in a nice training?", "What do you mean by install from source?\r\n", "I installed the package from source. I can see the table formatted correctly now, but it stops updating after the first evaluation\r\n![Screenshot 2023-04-11 at 8 40 34 PM](https://user-images.githubusercontent.com/10662708/231318144-2e548e30-dc23-4455-a528-5cddbb5d2607.png)\r\n\r\nI guess that is the widget problem you're referring to. Is there a workaround for people on VSCode so it doesn't print out a thousand lines of evaluation? Like hiding the printout and retrieving evaluation stats after training is done?\r\n", "You can filter the log level of printed informations with `transformers.utils.set_verbosity_warning()` (to avoid all infos like the logs of the evaluation results).", "I have also encountered this problem, and for procedural reasons, I cannot install from source.\r\nIt would be very helpful if this issue could be addressed, please :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "<img width=\"1652\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/24773652/33a317eb-a5d9-400f-8df2-f7b67bb9492f\">\r\nmy trainer output looks very bad\r\n\r\n```python\r\nargs = TrainingArguments(\r\n \"pokemon-habitat\",\r\n evaluation_strategy=\"epoch\",\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n num_train_epochs=num_epochs,\r\n use_mps_device=True,\r\n)\r\n\r\n# Trainer\r\ntrainer = Trainer(\r\n model,\r\n args,\r\n train_dataset=dataset[\"train\"],\r\n eval_dataset=dataset[\"test\"],\r\n compute_metrics=compute_metrics,\r\n)\r\ntrainer.train()\r\n```\r\n\r\ntransfomers: 4.30.2", "I am having the exact same issues as @lainisourgod " ]
1,681
1,707
1,685
NONE
null
### System Info 1. OSX Ventura 13.2 2. VSCode 1.77.1 - Chromium 102.0.5005.196 - Jupyter extension v2023.3.1000892223 3. Transformers 4.26.1 ### Who can help? Not sure. Please let me know if it is a VSCode issue ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb Run the notebook (I commented out the parts pushing to hub) ### Expected behavior The table of metrics during the evaluation phase of training fails to show up as an HTML object in VSCode. There seems to be no similar issue on Colab or AWS. Currently, the output looks like this (repeated by the number of times evaluation is run during training) ``` 0.3564084804084804 {'eval_loss': 1.6524937152862549, 'eval_f1': 0.3564084804084804, 'eval_accuracy': 0.36, 'eval_runtime': 4.6151, 'eval_samples_per_second': 10.834, 'eval_steps_per_second': 1.517, 'epoch': 0.26} ***** Running Evaluation ***** Num examples = 50 Batch size = 8 {'loss': 1.6389, 'learning_rate': 3.611111111111111e-05, 'epoch': 0.28} ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22694/timeline
completed
null
null
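On the workaround settled on in this thread: lowering the library's log verbosity hides the plain-text evaluation printouts on frontends (like VSCode) where the HTML widget does not render, and the metrics remain retrievable afterwards from the trainer state. A minimal sketch:

```python
import transformers

# Hide INFO-level Trainer output ("***** Running Evaluation *****" and the
# per-step metric dicts); warnings and errors still print.
transformers.utils.logging.set_verbosity_warning()

# After trainer.train() finishes, logged metrics can be read back from the
# trainer state instead of the console, e.g.:
#   eval_entries = [e for e in trainer.state.log_history if "eval_loss" in e]
```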
https://api.github.com/repos/huggingface/transformers/issues/22693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22693/comments
https://api.github.com/repos/huggingface/transformers/issues/22693/events
https://github.com/huggingface/transformers/pull/22693
1,661,121,911
PR_kwDOCUB6oc5N88gS
22,693
Replace -100s in predictions by the pad token
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
COLLABORATOR
null
# What does this PR do? This PR fixes the seq2seq examples with the Trainer on datasets with small samples. The problem is that the results on those samples get padded with -100 by the Trainer, and this in turn triggers an index error in the tokenizer decode method. Fixes part of #22634
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22693/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22693", "html_url": "https://github.com/huggingface/transformers/pull/22693", "diff_url": "https://github.com/huggingface/transformers/pull/22693.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22693.patch", "merged_at": 1681219940000 }
https://api.github.com/repos/huggingface/transformers/issues/22692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22692/comments
https://api.github.com/repos/huggingface/transformers/issues/22692/events
https://github.com/huggingface/transformers/issues/22692
1,660,982,409
I_kwDOCUB6oc5jAJSJ
22,692
Offline mode not working for remote code?
{ "login": "1049451037", "id": 15194939, "node_id": "MDQ6VXNlcjE1MTk0OTM5", "avatar_url": "https://avatars.githubusercontent.com/u/15194939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/1049451037", "html_url": "https://github.com/1049451037", "followers_url": "https://api.github.com/users/1049451037/followers", "following_url": "https://api.github.com/users/1049451037/following{/other_user}", "gists_url": "https://api.github.com/users/1049451037/gists{/gist_id}", "starred_url": "https://api.github.com/users/1049451037/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/1049451037/subscriptions", "organizations_url": "https://api.github.com/users/1049451037/orgs", "repos_url": "https://api.github.com/users/1049451037/repos", "events_url": "https://api.github.com/users/1049451037/events{/privacy}", "received_events_url": "https://api.github.com/users/1049451037/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This has just been fixed on the main branch (by #22661) so make sure to use the latest!", "It works for normal cases. However, when I set my customized cache dir, it still report error:\r\n\r\n```\r\n 445 f\" cached files and it looks like {path_or_repo_id} is not the path to a directory containing \r\na file named\" \r\n 446 f\" {full_filename}.\\nCheckout your internet connection or see how to run the library in offlin\r\ne mode at\" \r\n 447 \" 'https://huggingface.co/docs/transformers/installation#offline-mode'.\" \r\n 448 ) \r\n 449 except EntryNotFoundError: \r\n 450 if not _raise_exceptions_for_missing_entries: \r\n \r\nOSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached fil\r\nes and it looks like THUDM/chatglm-6b is not the path to a directory containing a file named config.json. \r\nCheckout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'. \r\n```\r\n\r\nI just set `export TRANSFORMERS_CACHE=/my/cache/dir`.", "Please give us a reproducible code example as well as the full traceback.", "First, run the following code to download cache:\r\n\r\n```python\r\nfrom transformers import AutoConfig\r\nconfig = AutoConfig.from_pretrained('THUDM/chatglm-6b', trust_remote_code=True, revision=\"aa51e62ddc9c9f334858b0af44cf59b05c70148a\")\r\n```\r\n\r\nThen, run the same code with `TRANSFORMERS_OFFLINE=1 TRANSFORMERS_CACHE=~/.cache/huggingface` environment variables. Things will go wrong.", "That's probably because you are not using the right folder. The default cache folder is in `~/.cache/huggingface/hub` so executing the lines above with `TRANSFORMERS_OFFLINE=1 TRANSFORMERS_CACHE=~/.cache/huggingface` doesn't work on my side but `TRANSFORMERS_OFFLINE=1 TRANSFORMERS_CACHE=~/.cache/huggingface/hub` does.", "wow, that's cool. It works now. Thank you so much!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,684
1,684
NONE
null
### System Info I want to run remote code offline and the revision is in my cache dir. For example, ```python from transformers import AutoConfig config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, revision="cde457b39fe0670b10dd293909aab17387ea2c80", local_files_only=True) ``` However, it still reports that connection error. ``` ConnectionError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/THUDM/chatglm-6b/revision/cde457b39fe0670b10dd293909aab17387ea2c80 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f6f8f9887f0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')) ``` Is there anything wrong for offline mode with remote code? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run this piece of code offline: ```python from transformers import AutoConfig config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, revision="cde457b39fe0670b10dd293909aab17387ea2c80", local_files_only=True) ``` ### Expected behavior run remote code offline with local cache
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22692/timeline
completed
null
null
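The resolution in this thread comes down to the cache path: `TRANSFORMERS_CACHE` must point at the `hub` subdirectory, not its parent. A sketch of the working setup (setting the variables in the shell before launching Python is equivalent; they must be set before `transformers` is imported):

```python
import os

# Offline lookups resolve against TRANSFORMERS_CACHE directly, so it must be
# the "hub" subdirectory of the default cache, not ~/.cache/huggingface itself.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["TRANSFORMERS_CACHE"] = os.path.expanduser("~/.cache/huggingface/hub")

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,
    revision="aa51e62ddc9c9f334858b0af44cf59b05c70148a",
)
```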
https://api.github.com/repos/huggingface/transformers/issues/22691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22691/comments
https://api.github.com/repos/huggingface/transformers/issues/22691/events
https://github.com/huggingface/transformers/pull/22691
1,660,980,793
PR_kwDOCUB6oc5N8eOY
22,691
Model parallelism: Moving labels to same devices as the logits are
{ "login": "shahad-mahmud", "id": 29411624, "node_id": "MDQ6VXNlcjI5NDExNjI0", "avatar_url": "https://avatars.githubusercontent.com/u/29411624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shahad-mahmud", "html_url": "https://github.com/shahad-mahmud", "followers_url": "https://api.github.com/users/shahad-mahmud/followers", "following_url": "https://api.github.com/users/shahad-mahmud/following{/other_user}", "gists_url": "https://api.github.com/users/shahad-mahmud/gists{/gist_id}", "starred_url": "https://api.github.com/users/shahad-mahmud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shahad-mahmud/subscriptions", "organizations_url": "https://api.github.com/users/shahad-mahmud/orgs", "repos_url": "https://api.github.com/users/shahad-mahmud/repos", "events_url": "https://api.github.com/users/shahad-mahmud/events{/privacy}", "received_events_url": "https://api.github.com/users/shahad-mahmud/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks a lot for your contribution!\r\n\r\nIt's a pleasure. I would love to contribute more and expecting some guidance! " ]
1,681
1,681
1,681
CONTRIBUTOR
null
As suggested in https://github.com/huggingface/transformers/issues/22561, this moves the labels to the same device as the logits for the Data2Vec Text, ESM, Longformer and LongT5 models. @sgugger Can you please review?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22691/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22691", "html_url": "https://github.com/huggingface/transformers/pull/22691", "diff_url": "https://github.com/huggingface/transformers/pull/22691.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22691.patch", "merged_at": 1681143773000 }
https://api.github.com/repos/huggingface/transformers/issues/22690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22690/comments
https://api.github.com/repos/huggingface/transformers/issues/22690/events
https://github.com/huggingface/transformers/issues/22690
1,660,837,505
I_kwDOCUB6oc5i_l6B
22,690
Error SIGABRT when running esmfold_v1 on TPU
{ "login": "conchaeloko", "id": 73343743, "node_id": "MDQ6VXNlcjczMzQzNzQz", "avatar_url": "https://avatars.githubusercontent.com/u/73343743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/conchaeloko", "html_url": "https://github.com/conchaeloko", "followers_url": "https://api.github.com/users/conchaeloko/followers", "following_url": "https://api.github.com/users/conchaeloko/following{/other_user}", "gists_url": "https://api.github.com/users/conchaeloko/gists{/gist_id}", "starred_url": "https://api.github.com/users/conchaeloko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conchaeloko/subscriptions", "organizations_url": "https://api.github.com/users/conchaeloko/orgs", "repos_url": "https://api.github.com/users/conchaeloko/repos", "events_url": "https://api.github.com/users/conchaeloko/events{/privacy}", "received_events_url": "https://api.github.com/users/conchaeloko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Rocketknight1 ", "I'm not an expert on torch XLA, but I think the problem here is that TPUs do not support `float16`, only `bfloat16`. `model.half()` converts the model parameters to `float16`, and the error you're seeing is caused by TPUs not having a division operation that can work on `float16` inputs.\r\n\r\nYou could try removing the `model.half()` line, and/or using some of the PyTorch environment variables for downcasting to BF16 on TPU instead, such as `XLA_USE_BF16`. Please see the docs [here](https://pytorch.org/xla/release/2.0/index.html#xla-tensors-and-bfloat16).", "Thank you for your suggestion @Rocketknight1. I got to step further by adding : \r\n```\r\nmodel = model.half()\r\nmodel = model.to(dtype=torch.bfloat16)\r\nmodel = model.to(device)\r\n```\r\nHowever I run into some memory issues. Two options for me : put my hand on a more powerful accelerator or try model parallelism. Trying my luck, how much have you guys played with model parallelism on TPUs ? \r\n\r\nThanks again for the help", "That's interesting - I admit I haven't tried it on PyTorch + TPU! However, in our testing, we were able to get ESMFold to run on a GPU with 16-24GB of memory. This meant we were able to generate protein folds fine using our [ESMFold Colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb), even with the free GPU, and if you want to do longer or batch predictions then the premium GPUs should be more than enough. Have you tried running on Colab already?", "@Rocketknight1, I tried but got issues with GPUs with vram<14GB (free colab and free Kaggle notebooks). I think I'll do it on a GPU V100 32GB.\r\n\r\nThanks again for the help" ]
1,681
1,681
1,681
NONE
null
### System Info tpu-vm-pt-2.0 (torch ; torchvision ; torch-xla : 2.0) accelerator : v2-8 transformers: 4.27.4 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Create a TPU VM and connect to it : ``` gcloud compute tpus tpu-vm create ${TPU_NAME} --project=${PROJECT_ID} --zone=${ZONE} --accelerator-type=v2-8 --version=tpu-vm-pt-2.0 gcloud compute tpus tpu-vm ssh ${TPU_NAME} --zone=${ZONE} --project=${PROJECT_ID} ``` Load the model on the TPU : ``` import torch import torch_xla.core.xla_model as xm device = xm.xla_device() from transformers import AutoTokenizer, EsmForProteinFolding from transformers.models.esm.openfold_utils.protein import to_pdb, Protein as OFProtein from transformers.models.esm.openfold_utils.feats import atom14_to_atom37 tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1") model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1") import gc gc.collect() model = model.half() model = model.to(device) model.trunk.set_chunk_size(64) ``` Try running the first part of the esmfold_v1 script : ``` def esmfold_prediction(tokenized_sequences , path_out) : """ The function takes as an input : - 'tokenized_sequences', the output of the function tokenize_fasta - 'path_out', the path of the directory where the pdb files are to be written The function generates the pdb files in the path_out """ for protein in tokenized_sequences : pdb_files = [] with torch.no_grad(): prot_to_pred = protein[1].to(device) output = model(prot_to_pred) ``` Error : ``` src/tcmalloc.cc:332] Attempt to free invalid pointer 0x7ffc17c89fc0 https://symbolize.stripped_domain/r/?trace=7f99dc4ff00b,7f99dc4ff08f,fffffffffb6affff,e900000002bffe88&map= *** SIGABRT received by PID 132988 (TID 132988) on cpu 36 from PID 132988; stack trace: *** PC: @ 0x7f99dc4ff00b (unknown) raise @ 0x7f988e574a1a 1152 (unknown) @ 0x7f99dc4ff090 (unknown) (unknown) @ 0xfffffffffb6b0000 (unknown) (unknown) @ 0xe900000002bffe89 (unknown) (unknown) https://symbolize.stripped_domain/r/?trace=7f99dc4ff00b,7f988e574a19,7f99dc4ff08f,fffffffffb6affff,e900000002bffe88&map=ceee8fa20ddf9c34af43f587221e91de:7f988164c000-7f988e78b840 E0410 13:36:27.439930 132988 coredump_hook.cc:414] RAW: Remote crash data gathering hook invoked. E0410 13:36:27.439944 132988 coredump_hook.cc:453] RAW: Skipping coredump since rlimit was 0 at process start. E0410 13:36:27.439952 132988 client.cc:278] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec. E0410 13:36:27.439956 132988 coredump_hook.cc:512] RAW: Sending fingerprint to remote end. E0410 13:36:27.439962 132988 coredump_socket.cc:120] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket E0410 13:36:27.439970 132988 coredump_hook.cc:518] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running? E0410 13:36:27.439974 132988 coredump_hook.cc:580] RAW: Dumping core locally. E0410 13:36:27.833706 132988 process_state.cc:784] RAW: Raising signal 6 with default behavior Aborted (core dumped) ``` Disabling tcmalloc that way : `export LD_PRELOAD=""` Running the script again returns the error : ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 14, in esmfold_prediction File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/robbyconchaeloko/.local/lib/python3.8/site-packages/transformers/models/esm/modeling_esmfold.py", line 2154, in forward structure: dict = self.trunk(s_s_0, s_z_0, aa, position_ids, attention_mask, no_recycles=num_recycles) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/robbyconchaeloko/.local/lib/python3.8/site-packages/transformers/models/esm/modeling_esmfold.py", line 1965, in forward structure = self.structure_module( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/robbyconchaeloko/.local/lib/python3.8/site-packages/transformers/models/esm/modeling_esmfold.py", line 1782, in forward rigids = rigids.compose_q_update_vec(self.bb_update(s)) File "/home/robbyconchaeloko/.local/lib/python3.8/site-packages/transformers/models/esm/openfold_utils/rigid_utils.py", line 917, in compose_q_update_vec new_rots = self._rots.compose_q_update_vec(q_vec) File "/home/robbyconchaeloko/.local/lib/python3.8/site-packages/transformers/models/esm/openfold_utils/rigid_utils.py", line 518, in compose_q_update_vec return Rotation( File "/home/robbyconchaeloko/.local/lib/python3.8/site-packages/transformers/models/esm/openfold_utils/rigid_utils.py", line 289, in __init__ quats = quats / torch.linalg.norm(quats, dim=-1, keepdim=True) RuntimeError: Error while lowering: [] aten::div, xla_shape=f32[1,214,2560]{2,1,0} Error: /pytorch/xla/torch_xla/csrc/convert_ops.cpp:86 : Unsupported XLA type 10 Frames: ``` ### Expected behavior Generates the input for the next function : ``` def convert_outputs_to_pdb(outputs): final_atom_positions = atom14_to_atom37(outputs["positions"][-1], outputs) outputs = {k: v.to("cpu").numpy() for k, v in outputs.items()} final_atom_positions = final_atom_positions.cpu().numpy() final_atom_mask = outputs["atom37_atom_exists"] pdbs = [] for i in range(outputs["aatype"].shape[0]): aa = outputs["aatype"][i] pred_pos = final_atom_positions[i] mask = final_atom_mask[i] resid = outputs["residue_index"][i] + 1 pred = OFProtein( aatype=aa, atom_positions=pred_pos, atom_mask=mask, residue_index=resid, b_factors=outputs["plddt"][i], chain_index=outputs["chain_index"][i] if "chain_index" in outputs else None, ) pdbs.append(to_pdb(pred)) return pdbs ``` And writes the pdb file
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22690/timeline
completed
null
null
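Summarizing the fix that emerged in this thread: TPUs have no float16 kernels for some ops (hence the failing `aten::div` lowering), so the model should be cast to bfloat16 instead of calling `model.half()`; alternatively, the `XLA_USE_BF16=1` environment variable downcasts on device. A sketch of the adjusted loading code:

```python
import torch
import torch_xla.core.xla_model as xm
from transformers import AutoTokenizer, EsmForProteinFolding

device = xm.xla_device()
tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")

# TPUs support bfloat16 but not float16; cast directly instead of model.half().
model = model.to(dtype=torch.bfloat16)
model = model.to(device)
model.trunk.set_chunk_size(64)
```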
https://api.github.com/repos/huggingface/transformers/issues/22689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22689/comments
https://api.github.com/repos/huggingface/transformers/issues/22689/events
https://github.com/huggingface/transformers/issues/22689
1,660,701,565
I_kwDOCUB6oc5i_Et9
22,689
Multiple Node Training Log
{ "login": "MikeDean2367", "id": 65744560, "node_id": "MDQ6VXNlcjY1NzQ0NTYw", "avatar_url": "https://avatars.githubusercontent.com/u/65744560?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MikeDean2367", "html_url": "https://github.com/MikeDean2367", "followers_url": "https://api.github.com/users/MikeDean2367/followers", "following_url": "https://api.github.com/users/MikeDean2367/following{/other_user}", "gists_url": "https://api.github.com/users/MikeDean2367/gists{/gist_id}", "starred_url": "https://api.github.com/users/MikeDean2367/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MikeDean2367/subscriptions", "organizations_url": "https://api.github.com/users/MikeDean2367/orgs", "repos_url": "https://api.github.com/users/MikeDean2367/repos", "events_url": "https://api.github.com/users/MikeDean2367/events{/privacy}", "received_events_url": "https://api.github.com/users/MikeDean2367/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "By default `--log_on_each_node` is `True` but you can set it to `False` to avoid the duplicate logs. I don't know if DeepSpeed does anything to the default log levels, the progress bar should be there on the two main nodes by default (every process that has a log level high enough).", "> By default `--log_on_each_node` is `True` but you can set it to `False` to avoid the duplicate logs. I don't know if DeepSpeed does anything to the default log levels, the progress bar should be there on the two main nodes by default (every process that has a log level high enough).\r\n\r\nThank you for your reply. Your answer made me want to know the answer to the first question. For the second question, when the training was shuted down, the progress bar appeared. I don't know why. In my experiment, all outputs were only on the master node, and non master nodes had no outputs. For the last question, I think @stas00 can help.", "> By default `--log_on_each_node` is `True` but you can set it to `False` to avoid the duplicate logs. I don't know if DeepSpeed does anything to the default log levels, the progress bar should be there on the two main nodes by default (every process that has a log level high enough).\r\n\r\nHi, I tried adding some arguments to the original command to prevent this redundant output. Here are my two ways:\r\n`--log_on_each_node False` `--log_level warning --log_level_replica error --log_on_each_node 0`. It doesn't seem to have much effect because redundant output is still generated on one node.", "For the third question, I found the output in `tensorboard`. I tried to change the master node, so I solved the third problem. For the first two problems, I think the first one still needs to be solved urgently because I have tried various solutions but have not been able to solve them. For the second question, it is possible to estimate the training time in Tensorboard, but I still hope to improve this issue. I guess if the first problem is solved, then the second problem can also be solved.", "As you closed the issue I'm not sure if there is anything remaining to address here.\r\n\r\nThe integration code does propagate the log-level setting here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/9858195481e0d29e9b720705d359f98620680a06/src/transformers/deepspeed.py#L350\r\n\r\nbut the outputs you shared in the OP come from HF Trainer and not Deepspeed.", "Thank you for your reply. So does this mean that the first two issues I mentioned are from Huggingface's Trainer and not Deepspeed?", "The first log is coming from HF Trainer - if you're not sure what comes from where it's very simple to test. Turn deepspeed off and see what you get as a baseline. If the model is too big, swap in a tiny model - we have one for each arch here: https://huggingface.co/hf-internal-testing\r\n\r\nWrt tqdm the training progress bar is there with deepspeed, at least on a single node. 
I have just tested it.\r\n\r\nBut let's first align so that we are testing the same code, please use this example (from `transformers` git clone top level dir)\r\n\r\n```\r\n$ PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 --num_nodes 1 \\\r\nexamples/pytorch/translation/run_translation.py --model_name_or_path \\\r\npatrickvonplaten/t5-tiny-random --output_dir /tmp/zero3 --overwrite_output_dir \\\r\n--max_train_samples 40 --max_eval_samples 40 --max_source_length 128 \\\r\n--max_target_length 128 --val_max_target_length 128 --do_train --do_eval \\\r\n--num_train_epochs 1 --per_device_train_batch_size 1 \\\r\n--per_device_eval_batch_size 1 --learning_rate 3e-3 --warmup_steps 500 \\\r\n--predict_with_generate --save_steps 0 --eval_steps 0 --group_by_length \\\r\n--dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \\\r\n--source_prefix 'translate English to Romanian: ' --deepspeed \\\r\ntests/deepspeed/ds_config_zero3.json --logging_steps 5 --logging_strategy \\\r\nsteps\r\n```\r\n\r\nand check that it logs correctly \r\n\r\nthen try with `deepspeed --num_gpus 2 --num_nodes 2` and check again. If something doesn't look right, we will sort it out.\r\n\r\nAs Sylvain mentioned you will probably need to set `--log_on_each_node 0` in that multi-node experiment.\r\n\r\nAnd to run the same w/o deepspeed, so that you could check the baseline:\r\n\r\n```\r\n$ PYTHONPATH=src USE_TF=0 torchrun --nproc-per-node 2 --nnodes 2 \\\r\nexamples/pytorch/translation/run_translation.py --model_name_or_path \\\r\npatrickvonplaten/t5-tiny-random --output_dir /tmp/zero3 --overwrite_output_dir \\\r\n--max_train_samples 40 --max_eval_samples 40 --max_source_length 128 \\\r\n--max_target_length 128 --val_max_target_length 128 --do_train --do_eval \\\r\n--num_train_epochs 1 --per_device_train_batch_size 1 \\\r\n--per_device_eval_batch_size 1 --learning_rate 3e-3 --warmup_steps 500 \\\r\n--predict_with_generate --save_steps 0 --eval_steps 0 --group_by_length \\\r\n--dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \\\r\n--source_prefix 'translate English to Romanian: ' --logging_steps 5 --logging_strategy \\\r\nsteps\r\n```", "> The first log is coming from HF Trainer - if you're not sure what comes from where it's very simple to test. Turn deepspeed off and see what you get as a baseline. If the model is too big, swap in a tiny model - we have one for each arch here: https://huggingface.co/hf-internal-testing\r\n> \r\n> Wrt tqdm the training progress bar is there with deepspeed, at least on a single node. 
I have just tested it.\r\n> \r\n> But let's first align so that we are testing the same code, please use this example (from `transformers` git clone top level dir)\r\n> \r\n> ```\r\n> $ PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 --num_nodes 1 \\\r\n> examples/pytorch/translation/run_translation.py --model_name_or_path \\\r\n> patrickvonplaten/t5-tiny-random --output_dir /tmp/zero3 --overwrite_output_dir \\\r\n> --max_train_samples 40 --max_eval_samples 40 --max_source_length 128 \\\r\n> --max_target_length 128 --val_max_target_length 128 --do_train --do_eval \\\r\n> --num_train_epochs 1 --per_device_train_batch_size 1 \\\r\n> --per_device_eval_batch_size 1 --learning_rate 3e-3 --warmup_steps 500 \\\r\n> --predict_with_generate --save_steps 0 --eval_steps 0 --group_by_length \\\r\n> --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \\\r\n> --source_prefix 'translate English to Romanian: ' --deepspeed \\\r\n> tests/deepspeed/ds_config_zero3.json --logging_steps 5 --logging_strategy \\\r\n> steps\r\n> ```\r\n> \r\n> and check that it logs correctly\r\n> \r\n> then try with `deepspeed --num_gpus 2 --num_nodes 2` and check again. If something doesn't look right, we will sort it out.\r\n> \r\n> As Sylvain mentioned you will probably need to set `--log_on_each_node 0` in that multi-node experiment.\r\n> \r\n> And to run the same w/o deepspeed, so that you could check the baseline:\r\n> \r\n> ```\r\n> $ PYTHONPATH=src USE_TF=0 torchrun --nproc-per-node 2 --nnodes 2 \\\r\n> examples/pytorch/translation/run_translation.py --model_name_or_path \\\r\n> patrickvonplaten/t5-tiny-random --output_dir /tmp/zero3 --overwrite_output_dir \\\r\n> --max_train_samples 40 --max_eval_samples 40 --max_source_length 128 \\\r\n> --max_target_length 128 --val_max_target_length 128 --do_train --do_eval \\\r\n> --num_train_epochs 1 --per_device_train_batch_size 1 \\\r\n> --per_device_eval_batch_size 1 --learning_rate 3e-3 --warmup_steps 500 \\\r\n> --predict_with_generate --save_steps 0 --eval_steps 0 --group_by_length \\\r\n> --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \\\r\n> --source_prefix 'translate English to Romanian: ' --logging_steps 5 --logging_strategy \\\r\n> steps\r\n> ```\r\n\r\nThank you for your reply. Our current machine is undergoing model training. This is expected to take 3 days. As I currently do not have any additional machines to test, I will try them in 3 days.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @MikeDean2367, just wanted to know if and how you solved the tqdm/progress bar issue. I am getting the same issue under similar setup where the progress bar shows up after the model has finished training." ]
1,681
1,691
1,684
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-4.15.0-175-generic-x86_64-with-glibc2.27 - Python version: 3.10.10 - Huggingface_hub version: 0.13.4 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @stas00 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am training GPT on 2 nodes, each with 8 GPUs currently. I used the `Trainer` API provided by huggingface for training. In addition, I used Deepspeed's `ZeRO3` strategy. I have successfully started training, but there are the following issues in the output log: 1. Due to the presence of two nodes, there will be two rows of output at a time, each from a different node. I observed that the output of two nodes is the same, and the following is an example. Do I only need to focus on one line of output? (in other words, one of these two lines is redundant.) Or do these two lines mean that two machines have fed the same data and print the same output? ```shell node1: {'loss': 1.4406, 'learning_rate': 2e-05, 'epoch': 0.17} node2: {'loss': 1.4406, 'learning_rate': 2e-05, 'epoch': 0.17} node1: {'loss': 1.4457, 'learning_rate': 2e-05, 'epoch': 0.18} node2: {'loss': 1.4457, 'learning_rate': 2e-05, 'epoch': 0.18} ``` 2. There will be a progress bar during single node training, which is rendered using `tqdm`. However, there is no progress bar during the training. 3. In the training command, I used `--report_to "tensorboard"`, but I did not find any output in `tensorboard`. Here is my command to start training. ```shell deepspeed --num_gpus 8 --num_nodes 2 --hostfile=host.txt train.py \ --model_name_or_path /path/to/model \ --data_path /path/to/data \ --output_dir /path/to/output \ --num_train_epochs 1 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 1 \ --gradient_accumulation_steps 1 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 100 \ --save_total_limit 2 \ --learning_rate 2e-5 \ --logging_steps 1 \ --report_to "tensorboard" \ --gradient_checkpointing True \ --deepspeed configs/deepspeed_config.json \ --fp16 True ``` ### Expected behavior I hope you can provide answers to the above three questions. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22689/timeline
completed
null
null
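The flag discussed in this thread corresponds to `TrainingArguments.log_on_each_node`; turning it off restricts logging to the main process of the main node, removing the one-duplicate-line-per-node effect. A minimal sketch (paths are placeholders):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="/path/to/output",  # placeholder
    logging_steps=1,
    report_to="tensorboard",
    # Log only on the main node; equivalent to passing --log_on_each_node 0
    # (or False) on the command line.
    log_on_each_node=False,
)
```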
https://api.github.com/repos/huggingface/transformers/issues/22688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22688/comments
https://api.github.com/repos/huggingface/transformers/issues/22688/events
https://github.com/huggingface/transformers/issues/22688
1,660,674,190
I_kwDOCUB6oc5i--CO
22,688
Reporting a vulnerability
{ "login": "igibek", "id": 4621646, "node_id": "MDQ6VXNlcjQ2MjE2NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/4621646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/igibek", "html_url": "https://github.com/igibek", "followers_url": "https://api.github.com/users/igibek/followers", "following_url": "https://api.github.com/users/igibek/following{/other_user}", "gists_url": "https://api.github.com/users/igibek/gists{/gist_id}", "starred_url": "https://api.github.com/users/igibek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/igibek/subscriptions", "organizations_url": "https://api.github.com/users/igibek/orgs", "repos_url": "https://api.github.com/users/igibek/repos", "events_url": "https://api.github.com/users/igibek/events{/privacy}", "received_events_url": "https://api.github.com/users/igibek/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We are using another platform for vulnerability reporting. @Michellehbn can tell you more.", "Hi @igibek ! Thanks for reaching out to us! 🤗 We have a bug bounty program with HackerOne and would love for you to submit security vulnerability reports to our private program at https://hackerone.com/hugging_face. Will it be possible to send us your H1 username or email address so that we can invite you to our program please, either here or to security@huggingface.co? Thanks again!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,683
1,683
NONE
null
Hello! I hope you are doing well! We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called **Private vulnerability reporting**, which enables security researchers to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository. Can you enable it, so that we can report it? Thanks in advance! PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22688/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22687/comments
https://api.github.com/repos/huggingface/transformers/issues/22687/events
https://github.com/huggingface/transformers/issues/22687
1,660,647,692
I_kwDOCUB6oc5i-3kM
22,687
Vicuna 13B forward method is very slow in FSDP mode.
{ "login": "yurkoff-mv", "id": 82467993, "node_id": "MDQ6VXNlcjgyNDY3OTkz", "avatar_url": "https://avatars.githubusercontent.com/u/82467993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yurkoff-mv", "html_url": "https://github.com/yurkoff-mv", "followers_url": "https://api.github.com/users/yurkoff-mv/followers", "following_url": "https://api.github.com/users/yurkoff-mv/following{/other_user}", "gists_url": "https://api.github.com/users/yurkoff-mv/gists{/gist_id}", "starred_url": "https://api.github.com/users/yurkoff-mv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yurkoff-mv/subscriptions", "organizations_url": "https://api.github.com/users/yurkoff-mv/orgs", "repos_url": "https://api.github.com/users/yurkoff-mv/repos", "events_url": "https://api.github.com/users/yurkoff-mv/events{/privacy}", "received_events_url": "https://api.github.com/users/yurkoff-mv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I also want to attach a link to the discussion topic of the [**generate** method in **FSDP** mode.](https://discuss.huggingface.co/t/feature-request-gradient-checkpointing-for-encoderdecodermodel/25278)", "cc @pacman100 ", "I forgot to mention that I'm running the model on **two RTX 3090 GPUs**.", "Here is a working example you can try:\r\n\r\n```python\r\nfrom functools import partial\r\n\r\nimport torch\r\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP\r\nfrom torch.distributed.fsdp.wrap import transformer_auto_wrap_policy\r\n\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM\r\nfrom transformers.models.llama.modeling_llama import LlamaDecoderLayer\r\n\r\nmodel_dir = \"<insert your path to model here>\"\r\n\r\nimport os\r\nfrom time import perf_counter\r\n\r\nlocal_rank = int(os.environ[\"LOCAL_RANK\"])\r\nlocal_world_size = int(os.environ[\"LOCAL_WORLD_SIZE\"])\r\n\r\ntorch.cuda.set_device(torch.device(f\"cuda:{local_rank}\"))\r\n\r\ntorch.distributed.init_process_group(\r\n \"nccl\",\r\n rank=local_rank,\r\n world_size=local_world_size,\r\n)\r\nllama_auto_wrap_policy = partial(\r\n transformer_auto_wrap_policy,\r\n transformer_layer_cls={\r\n LlamaDecoderLayer,\r\n },\r\n)\r\n\r\nprint(torch.cuda.current_device())\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(model_dir)\r\nmodel = LlamaForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float16, low_cpu_mem_usage=True)\r\n\r\nmodel = FSDP(\r\n model,\r\n auto_wrap_policy=llama_auto_wrap_policy,\r\n device_id=torch.device(f\"cuda:{local_rank}\"),\r\n # sharding_strategy=sharding_strategy,\r\n)\r\ninputs = tokenizer([\"Who is Dalai?\"], return_tensors=\"pt\")\r\n\r\nprint(inputs)\r\nt1_start = perf_counter()\r\nlogits = model(**inputs).logits[:, -1, :]\r\nt1_stop = perf_counter()\r\nprint(\"forward time:\", t1_stop - t1_start)\r\nprint(torch.cuda.max_memory_allocated() / 1e9)\r\n\r\n```\r\n\r\nRun with `torchrun --nproc_per_node=2 --master_port=56718 run_forward.py`.\r\n\r\nFor me this prints a forward runtime of ~0.8 sec on 2 A100 gpus and a peak GPU memory of ~14.5 GB (using llama-13b, current transformers main branch). ", "I think that you have such good performance because the model is placed on one GPU.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,687
1,687
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger, @ArthurZucker, @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from functools import partial import torch from torch.distributed.fsdp import FullyShardedDataParallel as FSDP from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy from transformers import LlamaTokenizer, LlamaForCausalLM from transformers.models.llama.modeling_llama import LlamaDecoderLayer torch.distributed.init_process_group("nccl", rank=WORLD_RANK, world_size=WORLD_SIZE, ) llama_auto_wrap_policy = partial(transformer_auto_wrap_policy, transformer_layer_cls={ LlamaDecoderLayer, }, ) tokenizer = LlamaTokenizer.from_pretrained(model_dir) model = LlamaForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float16, low_cpu_mem_usage=True) model = FSDP(model, auto_wrap_policy=llama_auto_wrap_policy, device_id=torch.cuda.current_device(), # sharding_strategy=sharding_strategy, ) inputs = tokenizer(['Who is Dalai?']) logits = model.forward(inputs).logits[:, -1, :] ``` The execution time of the forward method is more than a minute. ### Expected behavior The execution time of the forward method is a few seconds.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22687/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22686/comments
https://api.github.com/repos/huggingface/transformers/issues/22686/events
https://github.com/huggingface/transformers/pull/22686
1,660,564,385
PR_kwDOCUB6oc5N7Ea0
22,686
Add swiftformer
{ "login": "shehanmunasinghe", "id": 5057255, "node_id": "MDQ6VXNlcjUwNTcyNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/5057255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shehanmunasinghe", "html_url": "https://github.com/shehanmunasinghe", "followers_url": "https://api.github.com/users/shehanmunasinghe/followers", "following_url": "https://api.github.com/users/shehanmunasinghe/following{/other_user}", "gists_url": "https://api.github.com/users/shehanmunasinghe/gists{/gist_id}", "starred_url": "https://api.github.com/users/shehanmunasinghe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shehanmunasinghe/subscriptions", "organizations_url": "https://api.github.com/users/shehanmunasinghe/orgs", "repos_url": "https://api.github.com/users/shehanmunasinghe/repos", "events_url": "https://api.github.com/users/shehanmunasinghe/events{/privacy}", "received_events_url": "https://api.github.com/users/shehanmunasinghe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Are there plans for tensorflow version? I am interested in having the tf model available for use in a downstream task in my work/research", "Hi @shehanmunasinghe, thanks for opening this PR! \r\n\r\nCould you make sure to fill out all the necessary documentation for the model in the README and `swiftformer.mdx` file? \r\n\r\nQuick question about the modeling code - it seems that all of the model components are copied from ViT i.e. their architecture and forward pass are exactly the same - is this correct? \r\n\r\n@D-Roberts I don't know of anyone working on the TensorFlow version of this, looking through the [open PRs](https://github.com/huggingface/transformers/pulls?q=is%3Apr+is%3Aopen+swiftformer) or [issues](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+swiftformer). @shehanmunasinghe - do you know of anyone who is working on a TF port? \r\n", "Hi @amyeroberts, Thanks for your response.\r\n\r\nPlease note that this is a Work In Progress (WIP) pull request. The changes to the modeling code and the documentation will be reflected once I push them. \r\n\r\nHi @D-Roberts, currently I'm not aware of anyone working on the TensorFlow version of this. \r\n\r\n", "@shehanmunasinghe OK - sounds good :) Let us know when the PR is ready to review. In the meantime, please don't hesitate if there are any questions. \r\n\r\n@D-Roberts - would you be interested in porting this model once the pytorch version is merged in? ", "@amyeroberts I am still working on porting the Efficientformer; I am interested in having both in tf to train in some downstream tasks / research... I would like to do the port for swiftformer too but can't commit to it right now due to time constraints (I do this in my spare time).. I'll revisit after I am done with the efficientformer and after the swiftformer torch pr here is merged too.", "@D-Roberts Of course, no worries, and thank you for your work on adding EfficientFormer :) I've opened an issue - #22771 - to add the TF version of this model where future discussions on how, who and when can be organised. ", "_The documentation is not available anymore as the PR was closed or merged._", "Hi @amyeroberts , this is now ready for your review.", "Hi @amyeroberts , \r\n\r\nThere is one test case failing (_examples_torch_). This happened only after I merged recent changes from the main branch. Could you please help me identify what's causing this issue?\r\n\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/62802/workflows/2558056a-de51-44b6-9c22-8ba3b67d127a/jobs/774714?invite=true#step-111-2647 ", "Hi @amyeroberts , I have resolved the issues that were raised during the code review. Please take a look.", "@shehanmunasinghe Great! I'm away for a few days, but will re-review when I'm back at my computer at the start of next week. ", "Hi @amyeroberts, thanks for your time and effort in reviewing this. This is my first pull request on this repo and I'm glad to hear your constructive comments. I have applied the suggestions you made and updated the code again. ", "Hi @amyeroberts , I have fixed those issues and pushed the updated code. 
\r\n\r\nHowever, as indicated [here](https://app.circleci.com/pipelines/github/huggingface/transformers/64108/workflows/8a8e0c78-afcd-44ab-9079-393eb6abc14f/jobs/792496?invite=true#step-113-8379) one test is failing, though this has nothing to do with `tests/models/whisper/test_modeling_whisper.py`.\r\n\r\n`\r\nFAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_pt_tf_model_equivalence - AssertionError: 1.04904175e-05 not less than or equal to 1e-05 : outputs.encoder_hidden_states_0: Difference between PyTorch and TF is 1.049041748046875e-05 (>= 1e-05).\r\n`\r\n\r\n", "> Hi @amyeroberts , I have fixed those issues and pushed the updated code.\r\n> \r\n> However, as indicated [here](https://app.circleci.com/pipelines/github/huggingface/transformers/64108/workflows/8a8e0c78-afcd-44ab-9079-393eb6abc14f/jobs/792496?invite=true#step-113-8379) one test is failing, though this has nothing to do with `tests/models/whisper/test_modeling_whisper.py`.\r\n> \r\n> `FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_pt_tf_model_equivalence - AssertionError: 1.04904175e-05 not less than or equal to 1e-05 : outputs.encoder_hidden_states_0: Difference between PyTorch and TF is 1.049041748046875e-05 (>= 1e-05).`\r\n\r\nAll checks are passing now.", "Hi @amyeroberts , thanks for approving this PR. I have updated everything and now I think it can be merged." ]
1,681
1,683
1,683
CONTRIBUTOR
null
# What does this PR do? Adds 'SwiftFormer' into huggingface/transformers <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # ([issue](https://github.com/huggingface/transformers/issues/22685)) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/22685 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @NielsRogge @alaradirik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22686/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22686", "html_url": "https://github.com/huggingface/transformers/pull/22686", "diff_url": "https://github.com/huggingface/transformers/pull/22686.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22686.patch", "merged_at": 1683888752000 }
https://api.github.com/repos/huggingface/transformers/issues/22685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22685/comments
https://api.github.com/repos/huggingface/transformers/issues/22685/events
https://github.com/huggingface/transformers/issues/22685
1,660,533,075
I_kwDOCUB6oc5i-blT
22,685
Add SwiftFormer
{ "login": "shehanmunasinghe", "id": 5057255, "node_id": "MDQ6VXNlcjUwNTcyNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/5057255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shehanmunasinghe", "html_url": "https://github.com/shehanmunasinghe", "followers_url": "https://api.github.com/users/shehanmunasinghe/followers", "following_url": "https://api.github.com/users/shehanmunasinghe/following{/other_user}", "gists_url": "https://api.github.com/users/shehanmunasinghe/gists{/gist_id}", "starred_url": "https://api.github.com/users/shehanmunasinghe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shehanmunasinghe/subscriptions", "organizations_url": "https://api.github.com/users/shehanmunasinghe/orgs", "repos_url": "https://api.github.com/users/shehanmunasinghe/repos", "events_url": "https://api.github.com/users/shehanmunasinghe/events{/privacy}", "received_events_url": "https://api.github.com/users/shehanmunasinghe/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,681
1,681
null
CONTRIBUTOR
null
### Model description The 'SwiftFormer' paper introduces a novel efficient additive attention mechanism that replaces the quadratic matrix multiplication operations of self-attention with linear element-wise multiplications. A series of models called 'SwiftFormer' is built on this mechanism and achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even the small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on an iPhone 14, making it more accurate and 2× faster than MobileViT-v2. I would like to add this model to Hugging Face. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/abs/2303.15446 Original code and weights: https://github.com/Amshaker/SwiftFormer Author: @Amshaker
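To make the mechanism concrete, here is a minimal sketch of what "efficient additive attention" looks like: per-token scores come from projecting the queries onto a single learnable vector, so the cost is linear in sequence length instead of quadratic. This is an illustrative paraphrase of the paper's idea, not the reference implementation; the module layout, the names, and the final residual projection are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EfficientAdditiveAttentionSketch(nn.Module):
    """Illustrative sketch of SwiftFormer-style additive attention: O(n*d), no n*n score matrix."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.w_a = nn.Parameter(torch.randn(dim))  # learnable scoring vector (replaces Q @ K^T)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):  # x: (batch, seq_len, dim)
        q, k = self.to_q(x), self.to_k(x)
        # one scalar score per token from a single learnable vector -> linear in seq_len
        scores = F.softmax((q @ self.w_a) * self.scale, dim=-1)  # (batch, seq_len)
        global_q = torch.einsum("bn,bnd->bd", scores, q)         # pooled global query
        # element-wise interaction between the global query and every key, plus a residual
        return self.proj(global_q.unsqueeze(1) * k) + q

x = torch.randn(2, 196, 64)  # e.g. 14x14 patch tokens
print(EfficientAdditiveAttentionSketch(64)(x).shape)  # torch.Size([2, 196, 64])
```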
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22685/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/22684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22684/comments
https://api.github.com/repos/huggingface/transformers/issues/22684/events
https://github.com/huggingface/transformers/pull/22684
1,660,500,109
PR_kwDOCUB6oc5N62lY
22,684
Clarify stride option
{ "login": "luccailliau", "id": 74506016, "node_id": "MDQ6VXNlcjc0NTA2MDE2", "avatar_url": "https://avatars.githubusercontent.com/u/74506016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luccailliau", "html_url": "https://github.com/luccailliau", "followers_url": "https://api.github.com/users/luccailliau/followers", "following_url": "https://api.github.com/users/luccailliau/following{/other_user}", "gists_url": "https://api.github.com/users/luccailliau/gists{/gist_id}", "starred_url": "https://api.github.com/users/luccailliau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luccailliau/subscriptions", "organizations_url": "https://api.github.com/users/luccailliau/orgs", "repos_url": "https://api.github.com/users/luccailliau/repos", "events_url": "https://api.github.com/users/luccailliau/events{/privacy}", "received_events_url": "https://api.github.com/users/luccailliau/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@Narsil, I just added a sentence in the doc to avoid confusion about the naming.\r\n\r\nHave a good day" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Clarify the `stride` option which refers to the number of overlapping tokens between chunks. Fixes #22391 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
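For readers arriving from the linked issue, a minimal illustration of the semantics this PR clarifies: `stride` is the number of tokens repeated between consecutive chunks, not a step size. The checkpoint and sizes below are arbitrary examples, and the index arithmetic assumes the default fast tokenizer with its `[CLS]`/`[SEP]` special tokens.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "transformers are neat " * 200  # something longer than one chunk

enc = tokenizer(
    text,
    truncation=True,
    max_length=32,
    stride=8,  # tokens of overlap between consecutive chunks
    return_overflowing_tokens=True,
)
# the last 8 content tokens of chunk 0 reappear at the start of chunk 1
print(enc["input_ids"][0][-9:-1])  # overlap region of chunk 0 (index -1 is [SEP])
print(enc["input_ids"][1][1:9])    # the same ids right after chunk 1's [CLS]
```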
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22684/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22684", "html_url": "https://github.com/huggingface/transformers/pull/22684", "diff_url": "https://github.com/huggingface/transformers/pull/22684.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22684.patch", "merged_at": 1681218414000 }
https://api.github.com/repos/huggingface/transformers/issues/22683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22683/comments
https://api.github.com/repos/huggingface/transformers/issues/22683/events
https://github.com/huggingface/transformers/issues/22683
1,660,258,848
I_kwDOCUB6oc5i9Yog
22,683
Performance Regression from commit 7dcd870
{ "login": "fpgaminer", "id": 1585817, "node_id": "MDQ6VXNlcjE1ODU4MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1585817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fpgaminer", "html_url": "https://github.com/fpgaminer", "followers_url": "https://api.github.com/users/fpgaminer/followers", "following_url": "https://api.github.com/users/fpgaminer/following{/other_user}", "gists_url": "https://api.github.com/users/fpgaminer/gists{/gist_id}", "starred_url": "https://api.github.com/users/fpgaminer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fpgaminer/subscriptions", "organizations_url": "https://api.github.com/users/fpgaminer/orgs", "repos_url": "https://api.github.com/users/fpgaminer/repos", "events_url": "https://api.github.com/users/fpgaminer/events{/privacy}", "received_events_url": "https://api.github.com/users/fpgaminer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante and @ArthurZucker ", "@fpgaminer commit 7dcd870 fixes generation when there is padding in the input (which is almost always the case for `batch_size>1`). It's natural that it introduces slowdowns, as the correct behavior implies changing to the tensor gathering you mentioned :)\r\n\r\nWe don't optimize for performance but rather for correctness. To skip this gathering while remaining correct, `.generate()` would need to be rewritten to dynamically squeeze padding and evict completed rows, which is something we have in our plans for the next months.\r\n\r\nMeanwhile, is there anything else we can help you with?", "That's fair, though a 10% performance hit is rather painful.\r\n\r\nTo that end, here's my attempt to optimize `apply_rotary_pos_emb`:\r\n\r\n```\r\ndef ref_apply_rotary_pos_emb(q, k, cos, sin, position_ids):\r\n\tgather_indices = position_ids[:, None, :, None] # [bs, 1, seq_len, 1]\r\n\tgather_indices = gather_indices.repeat(1, cos.shape[1], 1, cos.shape[3])\r\n\tcos = torch.gather(cos.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)\r\n\tsin = torch.gather(sin.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)\r\n\tq_embed = (q * cos) + (rotate_half(q) * sin)\r\n\tk_embed = (k * cos) + (rotate_half(k) * sin)\r\n\treturn q_embed, k_embed\r\n\r\ndef fast_apply_rotary_pos_emb(q, k, cos, sin, position_ids):\r\n\tcos = cos.squeeze((0, 1)) # [seq_len, dim]\r\n\tsin = sin.squeeze((0, 1)) # [seq_len, dim]\r\n\tcos = cos[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\n\tsin = sin[position_ids].unsqueeze(1) # [bs, 1, seq_len, dim]\r\n\tq_embed = (q * cos) + (rotate_half(q) * sin)\r\n\tk_embed = (k * cos) + (rotate_half(k) * sin)\r\n\treturn q_embed, k_embed\r\n\r\ndef test_foo(B, L):\r\n\tcos = torch.randn(1, 1, 2048, 128, dtype=torch.float16, device='cuda')\r\n\tsin = torch.randn(1, 1, 2048, 128, dtype=torch.float16, device='cuda')\r\n\tposition_ids = torch.randint(0, 2048, (B, L), dtype=torch.int64, device='cuda')\r\n\r\n\tq = torch.randn(B, 32, L, 128, dtype=torch.float16, device='cuda')\r\n\tk = torch.randn(B, 32, L, 128, dtype=torch.float16, device='cuda')\r\n\r\n\t# Verify\r\n\tref = ref_apply_rotary_pos_emb(q, k, cos, sin, position_ids)\r\n\tfast = fast_apply_rotary_pos_emb(q, k, cos, sin, position_ids)\r\n\tassert torch.equal(ref[0], fast[0])\r\n\tassert torch.equal(ref[1], fast[1])\r\n\r\n\t# Benchmark\r\n\tref_ms, ref_min_ms, ref_max_ms = triton.testing.do_bench(lambda: ref_apply_rotary_pos_emb(q, k, cos, sin, position_ids))\r\n\tfast_ms, fast_min_ms, fast_max_ms = triton.testing.do_bench(lambda: fast_apply_rotary_pos_emb(q, k, cos, sin, position_ids))\r\n\r\n\tspeedup = ref_ms * 100 / fast_ms\r\n\tprint(f'{B} | {L:3d} | {ref_ms:.6f} | {fast_ms:.6f} | {speedup:.2f}%')\r\n\r\n\r\nprint('B | L | ref | fast | speedup')\r\nfor B in [1, 2, 4, 8]:\r\n\tfor L in [1, 2, 4, 8, 10, 100]:\r\n\t\ttest_foo(B, L)\r\n```\r\n\r\nOutput:\r\n\r\n```\r\nB | L | ref | fast | speedup\r\n1 | 1 | 0.043008 | 0.035840 | 120.00%\r\n1 | 2 | 0.044032 | 0.036864 | 119.44%\r\n1 | 4 | 0.047104 | 0.038912 | 121.05%\r\n1 | 8 | 0.046080 | 0.039936 | 115.38%\r\n1 | 10 | 0.048128 | 0.039936 | 120.51%\r\n1 | 100 | 0.058368 | 0.052224 | 111.76%\r\n2 | 1 | 0.047104 | 0.036864 | 127.78%\r\n2 | 2 | 0.049152 | 0.039936 | 123.08%\r\n2 | 4 | 0.050176 | 0.040960 | 122.50%\r\n2 | 8 | 0.050176 | 0.041984 | 119.51%\r\n2 | 10 | 0.050176 | 0.041984 | 119.51%\r\n2 | 100 | 0.079872 | 0.070656 | 113.04%\r\n4 | 1 | 0.051200 | 0.039936 | 128.21%\r\n4 | 2 | 0.053248 | 0.040960 | 130.00%\r\n4 | 
4 | 0.054272 | 0.041984 | 129.27%\r\n4 | 8 | 0.057344 | 0.045056 | 127.27%\r\n4 | 10 | 0.057344 | 0.045056 | 127.27%\r\n4 | 100 | 0.130048 | 0.119808 | 108.55%\r\n8 | 1 | 0.057344 | 0.040960 | 140.00%\r\n8 | 2 | 0.059392 | 0.041984 | 141.46%\r\n8 | 4 | 0.062464 | 0.045056 | 138.64%\r\n```\r\n\r\nFor reference, the pre 7dc870 function runs in 0.030ms on 1x1, so this isn't quite as fast but gets closer.\r\n\r\nWould a pull request with this change be welcome? I've done my best to verify its correctness with the above code.", "@fpgaminer that is great! Absolutely, a PR would be very welcome 🙌 \r\n\r\n(We'd be happy to integrate other optimization opportunities if you spot them, we rarely have the bandwidth to optimize our modeling code)", "> @fpgaminer commit [7dcd870](https://github.com/huggingface/transformers/commit/7dcd8703ef904adc3ac19b47f769879221c33849) fixes generation when there is padding in the input (which is almost always the case for `batch_size>1`). It's natural that it introduces slowdowns, as the correct behavior implies changing to the tensor gathering you mentioned :)\r\n\r\nMaybe there's something I'm not seeing here but Llama uses rotary positional embeddings so left padding should have no effect on the result? \r\n\r\nSure, the intermediate result from `apply_rotary_pos_emb` changes if you shift all tokens left or right, but the whole point of using relative embeddings is that they're invariant to the absolute position in terms of the final attention weight. So you can shift all tokens 50 positions to the right and the attention score between *pairs of tokens* will be the same, modulus any rounding errors.\r\n\r\nOr are you saying there are cases when padding is literally inserted *inside* of the sequence, therefore changing the relative distances between tokens, @gante?", "@aljungberg I agree with everything you wrote, rotary positional embeddings should be position-invariant. In practice, the small rounding errors compound over autoregressive text generation, leading greedy decoding (which is normally invariant wrt small fluctuations) to produce different text.\r\n\r\nWith the right position index, the error becomes much smaller, and the results become more stable regardless of padding. That's why [we also added it to our high-performance text generation repo](https://github.com/huggingface/text-generation-inference/pull/126), despite the difference being quite small.\r\n\r\nOut of curiosity, [this test](https://github.com/huggingface/transformers/blob/main/tests/generation/test_utils.py#L1602) was failing on GPTNeoX and Llama before we added this change. In theory, it shouldn't have failed at all!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
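As a footnote to the invariance discussion in that thread, a small self-contained check (a standalone RoPE re-implementation, not the transformers code) shows that attention scores survive a global position shift only up to floating-point error, which is exactly the rounding noise that compounds during greedy decoding:

```python
import torch

def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(x, pos, base=10000.0):
    # standard GPT-NeoX-style rotary embedding on the last dimension
    dim = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    freqs = pos[:, None].to(torch.float32) * inv_freq[None, :]
    emb = torch.cat((freqs, freqs), dim=-1)
    return x * emb.cos() + rotate_half(x) * emb.sin()

q, k = torch.randn(8, 64), torch.randn(8, 64)
pos = torch.arange(8)
scores = apply_rope(q, pos) @ apply_rope(k, pos).T
shifted = apply_rope(q, pos + 50) @ apply_rope(k, pos + 50).T
print(torch.allclose(scores, shifted, atol=1e-4))  # True: relative positions unchanged
print((scores - shifted).abs().max())              # ...but not exactly zero
```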
1,681
1,685
1,685
CONTRIBUTOR
null
### System Info - `transformers` version: 4.28.0.dev0 (656e869a4523f6a0ce90b3aacbb05cc8fb5794bb) - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.4 - Safetensors version: 0.3.0 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have a benchmark script which benchmarks the generation speed of different LLaMA models. Before commit 7dcd870 my generation speed averaged around 48 tokens/s in ideal cases, RTX 3090. After that commit the average speed is 43 tokens/s. The specific issue seems to be the change to `apply_rotary_pos_emb`. My guess is the change from a rather simple slicing of two Tensors to a scatter-gather. To test my theory I patched `apply_rotary_pos_emb` to its pre 7dcd870 state, and minimally modified `LlamaAttention` accordingly. No other modifications. Speed jumped back to 48 tokens/s. The problem should apply generally, but the specific script I'm using is: https://github.com/fpgaminer/GPTQ-triton/blob/99ec4a3adb7fad9de33ff026bbfb64cbb3bab2f8/benchmark_generate.py ### Expected behavior I would not expect a 10% drop in performance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22683/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/22683/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22682/comments
https://api.github.com/repos/huggingface/transformers/issues/22682/events
https://github.com/huggingface/transformers/issues/22682
1,660,184,095
I_kwDOCUB6oc5i9GYf
22,682
Whisper recognition error
{ "login": "xyx361100238", "id": 19569322, "node_id": "MDQ6VXNlcjE5NTY5MzIy", "avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xyx361100238", "html_url": "https://github.com/xyx361100238", "followers_url": "https://api.github.com/users/xyx361100238/followers", "following_url": "https://api.github.com/users/xyx361100238/following{/other_user}", "gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}", "starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions", "organizations_url": "https://api.github.com/users/xyx361100238/orgs", "repos_url": "https://api.github.com/users/xyx361100238/repos", "events_url": "https://api.github.com/users/xyx361100238/events{/privacy}", "received_events_url": "https://api.github.com/users/xyx361100238/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What is the value of model_path in the above code?", "fine-tune model based on whisper-base use wenetspeech datasets", "use huggingface model “whisper-base”,test file [common_voice_zh-CN_18662117.mp3](https://huggingface.co/corner/whisper-base-zh/blob/main/common_voice_zh-CN_18662117.mp3),got the same error", "When I had this error, limiting the max_new_tokens specified to the amount the model can generate per chunk fixed it for me (see the [generation_config.json](https://huggingface.co/openai/whisper-base/blob/main/generation_config.json)'s max_length). Looks like that might be the case here since the max is 448 for whisper-base and 32767 is given. Maybe a nice error message for when max_new_tokens is > max_length would be wanted?", "Hey @xyx361100238! In this case, you can probably simplify how you're transcribing the audio file to simply:\r\n```python\r\nasr_pipeline = pipeline(task=\"automatic-speech-recognition\", model=model_path, device=\"cpu\")\r\ntranscription = processor.batch_decode(\"path/to/audio/file\", generate_kwargs={\"language\": lang, \"task\": \"transcribe\"})\r\n```\r\n\r\nThis looks like quite a strange error for Whisper - in most cases you can specify `max_new_tokens` as some arbitrary value (e.g. for LLMs this is just the number of new tokens generated, which doesn't depend on our max length).", "` processor = WhisperProcessor.from_pretrained(model_path)\r\n asr_pipeline = pipeline(task=\"automatic-speech-recognition\", model=model_path, device=\"cpu\")\r\n transcription = processor.batch_decode(\"common_voice_zh-CN_18524189.wav\", generate_kwargs={\"language\": lang, \"task\": \"transcribe\"})\r\n`\r\ntips error:\r\n![image](https://user-images.githubusercontent.com/19569322/233816764-f1435abb-5faa-4e17-ab2c-f0c515e9b37e.png)\r\n", "Sorry, I rushed my code snippet! It should have been:\r\n```python\r\nfrom transformers import pipeline\r\n\r\nasr_pipeline = pipeline(task=\"automatic-speech-recognition\", model=model_path, device=\"cpu\") # change device to \"cuda:0\" to run on GPU\r\ntranscription = asr_pipeline(\"path/to/audio/file\", chunk_length_s=30, generate_kwargs={\"language\": \"<|zh|>\", \"task\": \"transcribe\"}) # change language as required - I've set it to Chinese\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,684
1,684
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-144-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sanchit-gandhi @Narsil @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I fine-tuned the whisper-base model on the WenetSpeech dataset and need to verify its effectiveness using the pipeline: ``` processor = WhisperProcessor.from_pretrained(model_path) asr_pipeline = pipeline(task="automatic-speech-recognition", model=model_path, device="cpu") asr_pipeline.model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language=lang, task="transcribe") ds = load_dataset("audiofolder", data_dir=wav_path) ds = ds.cast_column("audio", Audio(sampling_rate=16000)) audio = ds['train'][0]['audio'] inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], language=lang, task="transcribe", return_tensors="pt") input_features = inputs.input_features generated_ids = asr_pipeline.model.generate(inputs=input_features, max_new_tokens=32767) transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` First of all, this script works, but on some mp3 files it raises the error below: > ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ │ │ /home/youxixie/008-Whisper-Pro/005-whisper-fineturn-pro/transformers-main/ex │ │ amples/pytorch/speech-recognition/run_whisper_speech_recognition.py:45 in │ │ <module> │ │ │ │ 42 │ print("test model:{} ".format(args.model)) │ │ 43 │ print("test wav path:{} ".format(args.path)) │ │ 44 │ print("test language:{} ".format(args.lang)) │ │ ❱ 45 │ eval_whisper(args.model, args.path, args.lang) │ │ 46 │ │ /home/youxixie/008-Whisper-Pro/005-whisper-fineturn-pro/transformers-main/ex │ │ amples/pytorch/speech-recognition/run_whisper_speech_recognition.py:25 in │ │ eval_whisper │ │ │ │ 22 │ │ audio = ds['train'][i]['audio'] │ │ 23 │ │ inputs = processor(audio["array"], sampling_rate=audio["samplin │ │ 24 │ │ input_features = inputs.input_features │ │ ❱ 25 │ │ generated_ids = asr_pipeline.model.generate(inputs=input_featur │ │ 26 │ │ │ │ 27 │ │ transcription = processor.batch_decode(generated_ids, skip_spec │ │ 28 │ │ #print(transcription) │ │ │ │ /home/youxixie/008-Whisper-Pro/005-whisper-fineturn-pro/transformers-main/sr │ │ c/transformers/models/whisper/modeling_whisper.py:1613 in generate │ │ │ │ 1610 │ │ │ stopping_criteria, │ │ 1611 │ │ │ prefix_allowed_tokens_fn, │ │ 1612 │ │ │ synced_gpus, │ │ ❱ 1613 │ │ │ **kwargs, │ │ 1614 │ │ ) │ │ 1615 │ │ │ 1616 │ def prepare_inputs_for_generation( │ │ │ │ /home/youxixie/anaconda3/envs/Huggingface-Whisper/lib/python3.7/site-package │ │ s/torch/autograd/grad_mode.py:27 in decorate_context │ │ │ │ 24 │ │ @functools.wraps(func) │ │ 25 │ │ def decorate_context(*args, **kwargs): │ │ 26 │ │ │ with self.clone(): │ │ ❱ 27 │ │ │ │ return func(*args, **kwargs) │ │ 28 │ │ return cast(F, decorate_context) │ │ 29 │ │ │ 30 │ def _wrap_generator(self, func): │ │ │ │ /home/youxixie/008-Whisper-Pro/005-whisper-fineturn-pro/transformers-main/sr │ │ c/transformers/generation/utils.py:1415 in generate │ │ │ │ 1412 │ │ │ │ output_scores=generation_config.output_scores, │ │ 1413 │ │ │ │ return_dict_in_generate=generation_config.return_dict │ │ 1414 │ │ │ │ synced_gpus=synced_gpus, │ │ ❱ 1415 │ │ │ │ **model_kwargs, │ │ 1416 │ │ │ ) │ │ 1417 │ │ │ │ 1418 │ │ elif is_contrastive_search_gen_mode: │ │ │ │ /home/youxixie/008-Whisper-Pro/005-whisper-fineturn-pro/transformers-main/sr │ │ c/transformers/generation/utils.py:2211 in greedy_search │ │ │ │ 2208 │ │ │ if synced_gpus and this_peer_finished: │ │ 2209 │ │ │ │ continue # don't waste resources running the code we │ │ 2210 │ │ │ │ │ ❱ 2211 │ │ │ next_token_logits = outputs.logits[:, -1, :] │ │ 2212 │ │ │ │ │ 2213 │ │ │ # pre-process distribution │ │ 2214 │ │ │ next_tokens_scores = logits_processor(input_ids, next_tok │ ╰──────────────────────────────────────────────────────────────────────────────╯ IndexError: index -1 is out of bounds for dimension 1 with size 0 ### Expected behavior If a file cannot produce a result, return an empty result or special symbols instead of raising an error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22682/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22681/comments
https://api.github.com/repos/huggingface/transformers/issues/22681/events
https://github.com/huggingface/transformers/issues/22681
1,660,167,628
I_kwDOCUB6oc5i9CXM
22,681
Donut model.generate is extremely slow when running inference
{ "login": "MS1908", "id": 38152758, "node_id": "MDQ6VXNlcjM4MTUyNzU4", "avatar_url": "https://avatars.githubusercontent.com/u/38152758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MS1908", "html_url": "https://github.com/MS1908", "followers_url": "https://api.github.com/users/MS1908/followers", "following_url": "https://api.github.com/users/MS1908/following{/other_user}", "gists_url": "https://api.github.com/users/MS1908/gists{/gist_id}", "starred_url": "https://api.github.com/users/MS1908/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MS1908/subscriptions", "organizations_url": "https://api.github.com/users/MS1908/orgs", "repos_url": "https://api.github.com/users/MS1908/repos", "events_url": "https://api.github.com/users/MS1908/events{/privacy}", "received_events_url": "https://api.github.com/users/MS1908/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante and @younesbelkada ", "@gante and @younesbelkada can you guys look into this issue? Thank you very much.", "Hey @MS1908 👋 \r\n\r\nText generation can be quite slow. 6-7s is within the expected time for large-ish models, and the generation time greatly depends on the length of the output. As a reference, [the example in the documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut#inference) takes ~1s on `.generate()` on an nvidia 3090, and the output is very short.\r\n\r\nMy top recommendation would be to batch your inputs, as opposed to calling `.generate()` with one example at a time. The execution time of generate grows very slowly with the batch size -- the biggest limitation is GPU memory, which you have plenty :D\r\n\r\nOn the `generate` side, we have some speedup tricks like using smaller variable representation. Sadly, as far as I know, most of them don't work out of the box with multimodal models like Donut (is this correct, @younesbelkada?). The only option that I see is to use [PT2.0+dynamo](https://pytorch.org/docs/stable/dynamo/index.html) to compile `.generate()`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,684
1,684
NONE
null
I trained a Donut model for document classification using my custom dataset (format similar to RVL-CDIP). However, when I run inference, model.generate() runs extremely slowly (5.9s ~ 7s per sample). Inference device: NVIDIA A100 40GB. Requirements: CUDA 11.7 torch==1.13.1+cu117 torchvision==0.14.1+cu117 datasets==2.10.1 transformers==4.26.1 sentencepiece==0.1.97 onnx==1.12.0 protobuf==3.20.0 Here is the GPU utilization when I run inference: ![image](https://user-images.githubusercontent.com/38152758/230818381-d2489865-78b2-410f-b2d7-1a487a98f3eb.png) This is my inference code: ``` model = VisionEncoderDecoderModel.from_pretrained(CKPT_PATH, config=config) device = 'cuda' if torch.cuda.is_available() else 'cpu' model.to(device) accs = [] model.eval() for i, sample in tqdm(enumerate(val_ds), total=len(val_ds)): pixel_values = sample["pixel_values"] pixel_values = torch.unsqueeze(pixel_values, 0) pixel_values = pixel_values.to(device) start = time.time() task_prompt = "<s_fci>" decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids decoder_input_ids = decoder_input_ids.to(device) print(f"Tokenize time: {time.time() - start:.4f}s") start = time.time() outputs = model.generate( pixel_values, decoder_input_ids=decoder_input_ids, max_length=model.decoder.config.max_position_embeddings, early_stopping=True, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, num_beams=1, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) print(f"Inference time: {time.time() - start:.4f}s") # turn into JSON start = time.time() seq = processor.batch_decode(outputs.sequences)[0] seq = seq.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") seq = re.sub(r"<.*?>", "", seq, count=1).strip() # remove first task start token seq = processor.token2json(seq) if "class" not in seq.keys(): seq["class"] = "other" print(f"Decoding time: {time.time() - start:.4f}s") gt = sample["labels"] score = float(seq["class"] == gt["class"]) accs.append(score) acc_score = np.mean(accs) print(f"Accuracy: {acc_score * 100:.4f}%") ``` Can someone look into this issue? Thank you very much.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22681/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22681/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22680/comments
https://api.github.com/repos/huggingface/transformers/issues/22680/events
https://github.com/huggingface/transformers/issues/22680
1,659,806,991
I_kwDOCUB6oc5i7qUP
22,680
Adding progress-bars to pipelines
{ "login": "ntakouris", "id": 5436722, "node_id": "MDQ6VXNlcjU0MzY3MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/5436722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ntakouris", "html_url": "https://github.com/ntakouris", "followers_url": "https://api.github.com/users/ntakouris/followers", "following_url": "https://api.github.com/users/ntakouris/following{/other_user}", "gists_url": "https://api.github.com/users/ntakouris/gists{/gist_id}", "starred_url": "https://api.github.com/users/ntakouris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ntakouris/subscriptions", "organizations_url": "https://api.github.com/users/ntakouris/orgs", "repos_url": "https://api.github.com/users/ntakouris/repos", "events_url": "https://api.github.com/users/ntakouris/events{/privacy}", "received_events_url": "https://api.github.com/users/ntakouris/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "Hey, you can already to that:\r\n\r\n```python\r\nimport tqdm\r\n\r\nfor out in tqdm.tqdm(pipe(dataset)):\r\n pass\r\n```\r\n\r\nWhen using an iterating dataset instead of a real dataset you can add (`total=total` to get the \"correct\" progressbar).\r\n\r\nAdvantage of having the progressbar in usercode is that we don't have to choose your favorite progress bar or handle colab+jupyter weirdness here.", "@Narsil I am referring to passing in a progress bar argument into the pipeline's `__call__` function, in order to accomplish \r\n https://github.com/huggingface/evaluate/issues/442, not to adding progress bar to dataset iteration.", "This can be done in `evaluate` directly is what I was saying." ]
1,681
1,681
1,681
NONE
null
### Feature request To implement https://github.com/huggingface/evaluate/issues/442, i.e. to provide progress bars while using `evaluator_instance.compute(..., progress_bar=True)`, we would have to update the `base.py` pipeline to support this. Quoting the referenced issue from evaluate: After doing some digging, it's a matter of whether the dataset+pipeline can support progress bars. For example, on the [call](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L1046) pipeline function, we can see that the actual pipeline input could be many things, including but not limited to a GeneratorType (which does not advertise a `__len__`), a Dataset, or a list (which typically have `__len__`), so the worst-case progress bar you can get would be a tqdm "X iterations / s" readout. ### Motivation Progress bars are always nice and they are relatively simple to implement: just wrap an iterator. They give us a qualitative sense of what we're doing. If the underlying unit supports `__len__`, it's extra useful for debugging or giving a rough processing estimate without having to run through everything. ### Your contribution I'm willing to contribute given some guidance from the hf team.
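For reference, the user-side pattern from the comments above, with an explicit `total` so tqdm can render a real bar even when the pipeline input is a generator with no `__len__` (model and texts are placeholder examples):

```python
from tqdm.auto import tqdm
from transformers import pipeline

texts = ["I love this.", "I hate this."] * 100
pipe = pipeline("text-classification")

# feeding a generator makes the pipeline yield results lazily; since the
# generator has no __len__, pass the known total to tqdm explicitly
results = [out for out in tqdm(pipe(iter(texts)), total=len(texts))]
```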
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22680/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22680/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22679/comments
https://api.github.com/repos/huggingface/transformers/issues/22679/events
https://github.com/huggingface/transformers/pull/22679
1,659,806,483
PR_kwDOCUB6oc5N4pyv
22,679
(feat): Moving labels to same device as logits for Deit
{ "login": "xssChauhan", "id": 9297805, "node_id": "MDQ6VXNlcjkyOTc4MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/9297805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xssChauhan", "html_url": "https://github.com/xssChauhan", "followers_url": "https://api.github.com/users/xssChauhan/followers", "following_url": "https://api.github.com/users/xssChauhan/following{/other_user}", "gists_url": "https://api.github.com/users/xssChauhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/xssChauhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xssChauhan/subscriptions", "organizations_url": "https://api.github.com/users/xssChauhan/orgs", "repos_url": "https://api.github.com/users/xssChauhan/repos", "events_url": "https://api.github.com/users/xssChauhan/events{/privacy}", "received_events_url": "https://api.github.com/users/xssChauhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Add model parallelism for `Deit`. <!-- Remove if not applicable --> Related to #22561 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22679", "html_url": "https://github.com/huggingface/transformers/pull/22679", "diff_url": "https://github.com/huggingface/transformers/pull/22679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22679.patch", "merged_at": 1681128297000 }
https://api.github.com/repos/huggingface/transformers/issues/22678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22678/comments
https://api.github.com/repos/huggingface/transformers/issues/22678/events
https://github.com/huggingface/transformers/pull/22678
1,659,799,378
PR_kwDOCUB6oc5N4olL
22,678
[WIP] 🌐 [i18n-KO] Translated `tasks/translation.mdx` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22678). All of your documentation changes will be reflected on that endpoint.", "Closed in favor of https://github.com/huggingface/transformers/pull/22805" ]
1,681
1,682
1,681
CONTRIBUTOR
null
<!-- PR의 제목은 "🌐 [i18n-KO] Translated `<your_file>.mdx` to Korean" 으로 부탁드립니당 --> # What does this PR do? Translated the `tasks/translation.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 <!-- 메인 이슈에 기록이 남아요! 가짜연구소 리포를 사용해 연습하실때는 제거해주시면 감사하겠습니다! :smile: --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- 제출 전 체크리스트로, 가짜연구소만의 체크리스트도 <details>로 감싸서 만들어두면 더 좋을 것 같아요. --> ## Who can review? <!-- 가짜연구소 팀원들과 리뷰가 끝난 후에만 허깅페이스 직원들에게 리뷰 요청하는 아래 주석을 노출해주세요! --> <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22678/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22678", "html_url": "https://github.com/huggingface/transformers/pull/22678", "diff_url": "https://github.com/huggingface/transformers/pull/22678.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22678.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22677/comments
https://api.github.com/repos/huggingface/transformers/issues/22677/events
https://github.com/huggingface/transformers/issues/22677
1,659,770,753
I_kwDOCUB6oc5i7heB
22,677
Where does Hugging Face's transformers save models?
{ "login": "pure-rgb", "id": 45315076, "node_id": "MDQ6VXNlcjQ1MzE1MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/45315076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pure-rgb", "html_url": "https://github.com/pure-rgb", "followers_url": "https://api.github.com/users/pure-rgb/followers", "following_url": "https://api.github.com/users/pure-rgb/following{/other_user}", "gists_url": "https://api.github.com/users/pure-rgb/gists{/gist_id}", "starred_url": "https://api.github.com/users/pure-rgb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pure-rgb/subscriptions", "organizations_url": "https://api.github.com/users/pure-rgb/orgs", "repos_url": "https://api.github.com/users/pure-rgb/repos", "events_url": "https://api.github.com/users/pure-rgb/events{/privacy}", "received_events_url": "https://api.github.com/users/pure-rgb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is described in the [installation page of the doc](https://huggingface.co/docs/transformers/installation#cache-setup).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,681
1,684
1,684
NONE
null
### Feature request Clear instructions for locating the downloaded weights on the local system. ### Motivation Running the code below downloads a model - does anyone know what folder it is downloaded to? ``` !pip install -q transformers from transformers import pipeline model = pipeline('fill-mask') ``` ### Your contribution The same question applies to diffusers.
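A short sketch of where the weights land and how to redirect them; the paths reflect the library defaults described in the installation docs linked in the reply, so double-check them against your version:

```python
import os
from transformers import AutoModel

# by default, downloads are cached under ~/.cache/huggingface/hub; the root can be
# moved with the HF_HOME env var, which must be set before anything is downloaded
print(os.path.expanduser("~/.cache/huggingface/hub"))

# a single download can also be redirected explicitly:
model = AutoModel.from_pretrained("bert-base-uncased", cache_dir="/tmp/hf-cache")
print(os.listdir("/tmp/hf-cache"))  # e.g. ['models--bert-base-uncased', ...]
```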
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22677/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22676/comments
https://api.github.com/repos/huggingface/transformers/issues/22676/events
https://github.com/huggingface/transformers/pull/22676
1,659,752,698
PR_kwDOCUB6oc5N4gr_
22,676
Model parallelism: Moving labels to the same device as logits for BridgeTower models
{ "login": "shahad-mahmud", "id": 29411624, "node_id": "MDQ6VXNlcjI5NDExNjI0", "avatar_url": "https://avatars.githubusercontent.com/u/29411624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shahad-mahmud", "html_url": "https://github.com/shahad-mahmud", "followers_url": "https://api.github.com/users/shahad-mahmud/followers", "following_url": "https://api.github.com/users/shahad-mahmud/following{/other_user}", "gists_url": "https://api.github.com/users/shahad-mahmud/gists{/gist_id}", "starred_url": "https://api.github.com/users/shahad-mahmud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shahad-mahmud/subscriptions", "organizations_url": "https://api.github.com/users/shahad-mahmud/orgs", "repos_url": "https://api.github.com/users/shahad-mahmud/repos", "events_url": "https://api.github.com/users/shahad-mahmud/events{/privacy}", "received_events_url": "https://api.github.com/users/shahad-mahmud/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,681
1,681
1,681
CONTRIBUTOR
null
As suggested in https://github.com/huggingface/transformers/issues/22561, this moves the labels to the same device as the logits for the BridgeTower models. @sgugger Can you please review?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22676/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22676", "html_url": "https://github.com/huggingface/transformers/pull/22676", "diff_url": "https://github.com/huggingface/transformers/pull/22676.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22676.patch", "merged_at": 1681128254000 }
https://api.github.com/repos/huggingface/transformers/issues/22675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22675/comments
https://api.github.com/repos/huggingface/transformers/issues/22675/events
https://github.com/huggingface/transformers/issues/22675
1,659,655,816
I_kwDOCUB6oc5i7FaI
22,675
Going above version 4.21.3 gives UnicodeDecodeError
{ "login": "emidio90", "id": 15109972, "node_id": "MDQ6VXNlcjE1MTA5OTcy", "avatar_url": "https://avatars.githubusercontent.com/u/15109972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emidio90", "html_url": "https://github.com/emidio90", "followers_url": "https://api.github.com/users/emidio90/followers", "following_url": "https://api.github.com/users/emidio90/following{/other_user}", "gists_url": "https://api.github.com/users/emidio90/gists{/gist_id}", "starred_url": "https://api.github.com/users/emidio90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emidio90/subscriptions", "organizations_url": "https://api.github.com/users/emidio90/orgs", "repos_url": "https://api.github.com/users/emidio90/repos", "events_url": "https://api.github.com/users/emidio90/events{/privacy}", "received_events_url": "https://api.github.com/users/emidio90/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts and @younesbelkada ", "Hi @emidio90, thanks for raising this issue. \r\n\r\nCould you share some more information about how to reproduce this error? In particular, is this the [Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) repo being referred to? What model and settings are being used? And which task is being run? ", "Hello @amyeroberts , I confirm that is the repo i'm referring to.\r\nThis happens with any model, when starting WebUI locally on my pc by clicking on webui-user.bat. The console arguments are only --xformers.\r\nI'm sorry i don't know how to find the specific task being run, but I suppose it's the model loading that happens when launching webui.\r\nThis started to happen when I updated to a commit that changed the required version of Transformers to 4.25. So now to make SD work I have to manually change the requirements_version.txt transformers==4.21.3", "@emidio90 Great, thanks for the additional info! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,693
1,693
NONE
null
### System Info - `transformers` version: 4.27.2 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.10.6 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. update stable diffusion webui to a version that requires Transformers 4.22.0 or above 2. start stable diffusion 3. when loading the model, this error appears ![Screenshot 2023-04-08 232922](https://user-images.githubusercontent.com/15109972/230743894-dbc3598c-15b9-4222-ad80-51cef9eb4fb8.png) ### Expected behavior load the model normally like it does for version 4.21.3 and below
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22675/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22674/comments
https://api.github.com/repos/huggingface/transformers/issues/22674/events
https://github.com/huggingface/transformers/pull/22674
1,659,652,441
PR_kwDOCUB6oc5N4O_k
22,674
fix bug of CLAP dataloader
{ "login": "lukewys", "id": 28220671, "node_id": "MDQ6VXNlcjI4MjIwNjcx", "avatar_url": "https://avatars.githubusercontent.com/u/28220671?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lukewys", "html_url": "https://github.com/lukewys", "followers_url": "https://api.github.com/users/lukewys/followers", "following_url": "https://api.github.com/users/lukewys/following{/other_user}", "gists_url": "https://api.github.com/users/lukewys/gists{/gist_id}", "starred_url": "https://api.github.com/users/lukewys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lukewys/subscriptions", "organizations_url": "https://api.github.com/users/lukewys/orgs", "repos_url": "https://api.github.com/users/lukewys/repos", "events_url": "https://api.github.com/users/lukewys/events{/privacy}", "received_events_url": "https://api.github.com/users/lukewys/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @younesbelkada and @ArthurZucker ", "Just an FYI that the slow tests aren't actually being run for CLAP. See also https://github.com/huggingface/transformers/pull/22834" ]
1,680
1,682
1,682
CONTRIBUTOR
null
Fix the bug of the CLAP data loader (https://github.com/LAION-AI/CLAP/issues/62) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22674/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22674", "html_url": "https://github.com/huggingface/transformers/pull/22674", "diff_url": "https://github.com/huggingface/transformers/pull/22674.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22674.patch", "merged_at": 1682084490000 }
https://api.github.com/repos/huggingface/transformers/issues/22673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22673/comments
https://api.github.com/repos/huggingface/transformers/issues/22673/events
https://github.com/huggingface/transformers/issues/22673
1,659,589,820
I_kwDOCUB6oc5i61S8
22,673
Loading FlaxHybridCLIP trained model
{ "login": "alhuri", "id": 46427957, "node_id": "MDQ6VXNlcjQ2NDI3OTU3", "avatar_url": "https://avatars.githubusercontent.com/u/46427957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alhuri", "html_url": "https://github.com/alhuri", "followers_url": "https://api.github.com/users/alhuri/followers", "following_url": "https://api.github.com/users/alhuri/following{/other_user}", "gists_url": "https://api.github.com/users/alhuri/gists{/gist_id}", "starred_url": "https://api.github.com/users/alhuri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alhuri/subscriptions", "organizations_url": "https://api.github.com/users/alhuri/orgs", "repos_url": "https://api.github.com/users/alhuri/repos", "events_url": "https://api.github.com/users/alhuri/events{/privacy}", "received_events_url": "https://api.github.com/users/alhuri/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @alhuri! Sorry for the late reply here! It looks like the clip-italian repo assumes an older version of transformers. The modelling code would need to be updated to use transformers==4.27.4, namely the [`FlaxHybridCLIP`](https://github.com/clip-italian/clip-italian/blob/8c75204be0d747c0ab150973fd8cd8556ca2f444/hybrid_clip/modeling_hybrid_clip.py#L133) class.\r\n\r\nThe required changes can be found in this PR: https://github.com/huggingface/transformers/pull/16148\r\n\r\nMy recommendation would be reaching out to the clip-italian authors here via GitHub and discussing this with them!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing since this issue is related to the Italian CLIP repo (not transformers!)" ]
1,680
1,684
1,684
NONE
null
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.4 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> Models: FlaxHybridCLIP ### Who can help? @sanchit-gandhi @patrickvonplaten, @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm wondering how to import a trained FlaxHybridCLIP model from a folder that contains the following files - config.json - flax_model.msgpack I attempted to load it using the below: ``` if args.run_from_checkpoint is not None: with open(f"{args.run_from_checkpoint}/config.json", "r") as fp: config_dict = json.load(fp) config_dict["vision_config"]["model_type"] = "clip" config = HybridCLIPConfig(**config_dict) model = FlaxHybridCLIP.from_pretrained( args.run_from_checkpoint, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype), config=config, freeze_backbones=args.freeze_backbones ) ``` But, I encountered the following error: ``` `text_config` is `None`. Initializing the `CLIPTextConfig` with default values. `vision_config` is `None`. initializing the `CLIPVisionConfig` with default values. loading weights file freeze/18/flax_model.msgpack Traceback (most recent call last): File "run_hybrid_clip.py", line 831, in <module> main() File "run_hybrid_clip.py", line 528, in main model = FlaxHybridCLIP.from_pretrained( File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/modeling_flax_utils.py", line 807, in from_pretrained model = cls(config, *model_args, _do_init=_do_init, **model_kwargs) File "/home/ubuntu/modeling_hybrid_clip.py", line 148, in __init__ module = self.module_class(config=config, dtype=dtype, **kwargs) TypeError: __init__() got an unexpected keyword argument '_do_init' ``` I used the modified Italian hybrid CLIP scripts [here](https://github.com/clip-italian/clip-italian/tree/master/hybrid_clip) ### Expected behavior to load successfully and fine-tune with unfrozen backbone Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22673/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22672/comments
https://api.github.com/repos/huggingface/transformers/issues/22672/events
https://github.com/huggingface/transformers/pull/22672
1,659,585,959
PR_kwDOCUB6oc5N4DvD
22,672
add `**kwargs` argument in some functions in `tokenization_utils.py`
{ "login": "yusuke1997", "id": 63439062, "node_id": "MDQ6VXNlcjYzNDM5MDYy", "avatar_url": "https://avatars.githubusercontent.com/u/63439062?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yusuke1997", "html_url": "https://github.com/yusuke1997", "followers_url": "https://api.github.com/users/yusuke1997/followers", "following_url": "https://api.github.com/users/yusuke1997/following{/other_user}", "gists_url": "https://api.github.com/users/yusuke1997/gists{/gist_id}", "starred_url": "https://api.github.com/users/yusuke1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yusuke1997/subscriptions", "organizations_url": "https://api.github.com/users/yusuke1997/orgs", "repos_url": "https://api.github.com/users/yusuke1997/repos", "events_url": "https://api.github.com/users/yusuke1997/events{/privacy}", "received_events_url": "https://api.github.com/users/yusuke1997/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22672). All of your documentation changes will be reflected on that endpoint.", "Can you elaborate and give us a sample of code that fails before this PR? Thanks.", "Thanks for the reply! \r\n\r\nIn my case, I need to consider adding custom special tags within the `def build_inputs_with_special_tokens(...)`, for example, whether to add `<Language ID>` in addition to `<eos>` and `<bos>` tags. As I need to switch frequently, I decided to control this by an argument like the following code.\r\n```python\r\n def build_inputs_with_special_tokens(\r\n self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, prepend: bool = True\r\n ) -> List[int]:\r\n```\r\n```python\r\ntokenizer('Hi world.', add_special_tokens=True, prepend=False)\r\n# {'input_ids': [4218, 1381, 6, 2], 'attention_mask': [1, 1, 1, 1]}\r\ntokenizer('Hi world.', add_special_tokens=True, prepend=True)\r\n# {'input_ids': [806, 4218, 1381, 6, 2], 'attention_mask': [1, 1, 1, 1, 1]}\r\n```\r\nThe reason why the `def prepare_for_tokenization(...)` function did not handle this is that these tags need to be treated as special_tokens, just like `<eos>` and `<bos>` tags.\r\n\r\n`def build_inputs_with_special_tokens(...)` is executed by `def prepare_for_model(...)`, so I rewrite that function slightly.\r\n```python\r\ndef prepare_for_model(\r\n ...\r\n prepend: bool = True, #add this line\r\n **kwargs,\r\n ) -> BatchEncoding:\r\n\r\n ...\r\n\r\n # Add special tokens\r\n if add_special_tokens:\r\n sequence = self.build_inputs_with_special_tokens(ids, pair_ids, prepend = prepend)\r\n token_type_ids = self.create_token_type_ids_from_sequences(ids, pair_ids, prepend = prepend)\r\n\r\n ...\r\n```\r\nI would like it to work with this modification only, but the functions using `def prepare_for_model(...)` ( `def _encode_plus(...)` and `def _batch_prepare_for_model(...)` and `def _batch_encode_plus(...)` in `tokenisation_utils.py`), `** kwargs` are not passed on, which is causing it not to work.\r\n\r\nThe following code is an example of this issue in `def _encode_plus(...)`\r\n```python\r\n def _encode_plus(\r\n self,\r\n text: Union[TextInput, PreTokenizedInput, EncodedInput],\r\n text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,\r\n add_special_tokens: bool = True,\r\n padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,\r\n truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE,\r\n max_length: Optional[int] = None,\r\n stride: int = 0,\r\n is_split_into_words: bool = False,\r\n pad_to_multiple_of: Optional[int] = None,\r\n return_tensors: Optional[Union[str, TensorType]] = None,\r\n return_token_type_ids: Optional[bool] = None,\r\n return_attention_mask: Optional[bool] = None,\r\n return_overflowing_tokens: bool = False,\r\n return_special_tokens_mask: bool = False,\r\n return_offsets_mapping: bool = False,\r\n return_length: bool = False,\r\n verbose: bool = True,\r\n **kwargs,\r\n ) -> BatchEncoding:\r\n\r\n ...\r\n\r\n return self.prepare_for_model(\r\n first_ids,\r\n pair_ids=second_ids,\r\n add_special_tokens=add_special_tokens,\r\n padding=padding_strategy.value,\r\n truncation=truncation_strategy.value,\r\n max_length=max_length,\r\n stride=stride,\r\n pad_to_multiple_of=pad_to_multiple_of,\r\n return_tensors=return_tensors,\r\n prepend_batch_axis=True,\r\n return_attention_mask=return_attention_mask,\r\n return_token_type_ids=return_token_type_ids,\r\n 
return_overflowing_tokens=return_overflowing_tokens,\r\n return_special_tokens_mask=return_special_tokens_mask,\r\n return_length=return_length,\r\n verbose=verbose, \r\n **kwargs, # ADD THIS LINE!!!!!\r\n )\r\n\r\n```\r\nTherefore, by adding this PR fix, `**kwargs` will be passed to `def prepare_for_model(...)` and solve this problem.\r\n\r\nIn addition, although `**kwargs` are set to `def prepare_for_model(...)` in the code, there was no mechanism in place to use `**kwargs` simply, which is resolved in this PR.\r\n\r\n\r\nOverall, this PR resolves the issue of `**kwargs` not being propagated in some functions within `tokenization_utils.py`, providing a more efficient and streamlined way to customize the `def prepare_for_model(...)` function.\r\n\r\nThank you for reviewing this PR!!!!", "I'm a bit wary about passing all those kwargs since there could be unrelated/mispelled arguments that the user wouldn't get an error about then. The easiest is probably for you copy paste those methods and do the change in your subclass of the tokenizer, since, if I understand correctly, you are writing your own subclass of the tokenizer with an overloaded method.", "Thank you for your comments!\r\nI explain that there is no need to worry about this PR.\r\n> I'm a bit wary about passing all those kwargs since there could be unrelated/mispelled arguments that the user wouldn't get an error about then.\r\n\r\nAs all the functions fixed in this PR already have the `**kwargs` argument, the worry is a problem that could well occur in existing systems. This is not a new problem caused by this PR.\r\n**It is not an attempt to set new `**kwargs` in `def prepare_for_model(...)`, but a PR to leverage the `**kwargs` already set in `def prepare_for_model(...)`.**\r\n\r\n> The easiest is probably for you copy paste those methods and do the change in your subclass of the tokenizer\r\n\r\nIf only a few functions with few codes need to be edited, it would certainly be the easiest way. \r\nHowever, in this case, to utilize the `**kwargs` in `def prepare_for_model(...)`, it would require copy-pasting hundreds of lines of code with minimal modifications. From code maintenance, it is important to avoid such duplications as much as possible.\r\n\r\nThe changes made by this PR are not breaking changes, so they will work on all existing models. Additionally, the worry you mentioned is not specific to this PR, as it is a potential issue that could also occur in existing systems.\r\n\r\nFor the above reasons, respecting your worry, but I still believe that the benefits of utilizing `**kwargs` in `def prepare_for_model(...)` outweigh the risks.\r\n\r\nI hope that this PR will contribute to the continued improvement and maintenance of the system. \r\nThank you for your comments and looking forward to your positive feedback.", "cc @LysandreJik if you have a different opinion.", "Hey @yusuke1997, thank you for your PR!\r\n\r\nI would err on the side of caution here with having `**kwargs` on every method. When we add `\"**kwargs` to a method, we're forfeiting argument validation in favor of convenience, which is likely to bite us later. 
\r\n\r\nA very simple example of a bug this PR would introduce can be seen with the following script, where I misspelled `return_token_type_ids`:\r\n\r\n```python\r\nIn [1]: from transformers import GPT2TokenizerFast\r\n\r\nIn [2]: tok = GPT2TokenizerFast.from_pretrained('gpt2')\r\n\r\nIn [3]: tok(\"hey\", return_token_type_id=False)\r\n```\r\n\r\nOn `main`, this errors with `TypeError: PreTrainedTokenizerFast._batch_encode_plus() got an unexpected keyword argument 'return_token_type_id'`.\r\n\r\nOn this PR, this doesn't error and returns \r\n```\r\nOut[3]: {'input_ids': [20342], 'attention_mask': [1]}\r\n```\r\nwhich is false.\r\n\r\n\r\n---\r\n\r\nYour `prepend` approach can still work, but we would strongly recommend that you do it on a per-architecture approach than to try and patch a global method.\r\n\r\nIf you're using GPT2 for example, then I would advise overriding the `build_inputs_with_special_tokens` within this tokenizer directly.", "Thank you @LysandreJik for a very helpful review!!\r\n\r\nI checked. Certainly, on `main` `PreTrainedTokenizerFast` has a mechanism to raise an error when input misspelled arguments.\r\nThis PR eliminates such errors and it is able to handle any type of input.\r\nI would like to reiterate that the purpose of this PR was to utilize `def prepare_for_model(...)`, and upon further review of the code, I found that `def prepare_for_model(...)` is not being utilized within `PreTrainedTokenizerFast`.\r\nThe modification made for `PreTrainedTokenizerFast` was superfluous. I retract the changes.\r\n\r\nHowever, I still believe that the changes made to `tokenization_utils.py` are meaningful, non-breaking changes, and safe.\r\nThis is because even in the `main` `PretrainedTokenizer`, misspelled or invalid arguments will pass with only a warning raised.\r\n```python\r\nfrom transformers import GPT2Tokenizer\r\ntok = GPT2Tokenizer.from_pretrained('gpt2')\r\nprint(tok(\"hey\", return_token_type_id=False))\r\n# Keyword arguments {'return_token_type_id': False} not recognized.\r\n# {'input_ids': [20342], 'attention_mask': [1]}\r\n```\r\n**This warning is no different in `main` and in this PR.**\r\nTo avoid this warning, we need to pop the desired argument to use in `def prepare_for_tokenization(...)`.\r\nThis part is also no different in the current `main` and after PR.\r\n\r\n**The purpose of this PR is to utilize the `**kwargs` in `def prepare_for_model(...)` function.**\r\nIf it works by simply overriding `def build_inputs_with_special_tokens(...)`, it couldn't be easier than that.\r\nHowever, currently, just adding a new argument to `def build_inputs_with_special_tokens(...)` requires a number of function changes.\r\n\r\nThis restriction of not being able to utilize the `def prepare_for_model(...)` arguments apply not only to this function but also to all the functions executed by `def prepare_for_model(...)`, I think it is causing potential drawbacks.\r\n\r\n**Furthermore, I confirmed that there are no destructive elements, such as error displays or other potentially harmful changes, involved in implementing such a mechanism. And any input to the already set `**kwargs` in `def prepare_for_model(...)` will not have any adverse effects.**\r\nSo, I believe it is reasonable to establish a mechanism for passing `**kwargs` in order to utilize `def prepare_for_model(...)` (and functions executed by it). 
\r\nTherefore, I think there is no need to fear the part that you are cautious about.\r\n\r\nIf any concerns have been resolved, I would sincerely hope you will think about again it.\r\nThank you very much for your attention and time!!\r\n\r\n\r\n\r\n\r\n", "In addition,\r\nYour point about the error not being raised is actually due to the different handling of unrelated/misspelled arguments between `PretrainedTokenizer` and `PretrainedTokenizerFast`.\r\nI am of the opinion that **if `PretrainedTokenizer` is set up to generate warnings instead of errors, then `PretrainedTokenizerFast` should also generate warnings instead of errors**, and vice versa.\r\nIn other words, the behavior of both is different, and this is what the PR has revealed.\r\n\r\nWhat do you think about this point? What would be the best solution?\r\nFor the purpose of furthering my understanding of this matter, would you kindly explain it to me? I am genuinely curious.", "Sorry (miss-clicked the tag)! A small part of my answer is that this looks good to me, but at the same time I think that we are at a point where we want to refactor a lot of the API regarding the **kwargs that are scattered everywhere. The second main concern can come from this, and also maintenance of this in the long run. But both @sgugger and @LysandreJik have a better understanding on the specifics, will let them decide 🤗 ", " @sgugger and @LysandreJik, I'm sorry to bother you.\r\nWhat do you think about it? If the above discussion has cleared up any concerns, then I sincerely hope that you will consider making a positive decision.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,685
1,685
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Despite `**kwargs` being set in the `prepare_for_model` function in `tokenization_utils_base.py`, there was an issue where `**kwargs` were not being set in some functions within `tokenization_utils.py`, resulting in `**kwargs` not being propagated to the `prepare_for_model` function. This PR eliminates the need to copy multiple functions to set `**kwargs` when customizing the `prepare_for_model` function, allowing for more concise code. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22672/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22672", "html_url": "https://github.com/huggingface/transformers/pull/22672", "diff_url": "https://github.com/huggingface/transformers/pull/22672.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22672.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22671/comments
https://api.github.com/repos/huggingface/transformers/issues/22671/events
https://github.com/huggingface/transformers/pull/22671
1,659,544,818
PR_kwDOCUB6oc5N38th
22,671
add GPTNeoXForSequenceClassification
{ "login": "Asugawara", "id": 47840708, "node_id": "MDQ6VXNlcjQ3ODQwNzA4", "avatar_url": "https://avatars.githubusercontent.com/u/47840708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Asugawara", "html_url": "https://github.com/Asugawara", "followers_url": "https://api.github.com/users/Asugawara/followers", "following_url": "https://api.github.com/users/Asugawara/following{/other_user}", "gists_url": "https://api.github.com/users/Asugawara/gists{/gist_id}", "starred_url": "https://api.github.com/users/Asugawara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Asugawara/subscriptions", "organizations_url": "https://api.github.com/users/Asugawara/orgs", "repos_url": "https://api.github.com/users/Asugawara/repos", "events_url": "https://api.github.com/users/Asugawara/events{/privacy}", "received_events_url": "https://api.github.com/users/Asugawara/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? This PR adds GPTNeoX sequence classification (`GPTNeoXForSequenceClassification`). Would you be able to check it? Thank you in advance for this cool OSS! ref: https://github.com/huggingface/transformers/pull/11906 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @sgugger @ArthurZucker @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22671/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22671", "html_url": "https://github.com/huggingface/transformers/pull/22671", "diff_url": "https://github.com/huggingface/transformers/pull/22671.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22671.patch", "merged_at": 1681141943000 }
https://api.github.com/repos/huggingface/transformers/issues/22670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22670/comments
https://api.github.com/repos/huggingface/transformers/issues/22670/events
https://github.com/huggingface/transformers/pull/22670
1,659,513,550
PR_kwDOCUB6oc5N33Ua
22,670
🌐 [i18n-KO] Translated `training.mdx` to Korean
{ "login": "gabrielwithappy", "id": 102908949, "node_id": "U_kgDOBiJEFQ", "avatar_url": "https://avatars.githubusercontent.com/u/102908949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabrielwithappy", "html_url": "https://github.com/gabrielwithappy", "followers_url": "https://api.github.com/users/gabrielwithappy/followers", "following_url": "https://api.github.com/users/gabrielwithappy/following{/other_user}", "gists_url": "https://api.github.com/users/gabrielwithappy/gists{/gist_id}", "starred_url": "https://api.github.com/users/gabrielwithappy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gabrielwithappy/subscriptions", "organizations_url": "https://api.github.com/users/gabrielwithappy/orgs", "repos_url": "https://api.github.com/users/gabrielwithappy/repos", "events_url": "https://api.github.com/users/gabrielwithappy/events{/privacy}", "received_events_url": "https://api.github.com/users/gabrielwithappy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Team PseudoLab, may you please review this PR?\r\n@0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd\r\n- fixed some ambiguous translations of the autoclass_tutorial doc. \r\n(from https://github.com/huggingface/transformers/pull/22533)\r\n- translated training doc.", "fixed review comments :-) \r\nThank you your kind reivews.\r\nBRs\r\n@HanNayeoniee @sim-so @wonhyeongseo ", "May you please review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,680
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Translated the `training.mdx` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd May you please review this PR? @sgugger, @ArthurZucker, @eunseojo <!-- @sgugger, @ArthurZucker, @eunseojo May you please review this PR? --> <!-- Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22670/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22670/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22670", "html_url": "https://github.com/huggingface/transformers/pull/22670", "diff_url": "https://github.com/huggingface/transformers/pull/22670.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22670.patch", "merged_at": 1681398289000 }
https://api.github.com/repos/huggingface/transformers/issues/22669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22669/comments
https://api.github.com/repos/huggingface/transformers/issues/22669/events
https://github.com/huggingface/transformers/issues/22669
1,659,509,571
I_kwDOCUB6oc5i6htD
22,669
New LlamaTokenizer "fast" version takes 90s to load on 5900x with nvme
{ "login": "Qubitium", "id": 417764, "node_id": "MDQ6VXNlcjQxNzc2NA==", "avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qubitium", "html_url": "https://github.com/Qubitium", "followers_url": "https://api.github.com/users/Qubitium/followers", "following_url": "https://api.github.com/users/Qubitium/following{/other_user}", "gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}", "starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions", "organizations_url": "https://api.github.com/users/Qubitium/orgs", "repos_url": "https://api.github.com/users/Qubitium/repos", "events_url": "https://api.github.com/users/Qubitium/events{/privacy}", "received_events_url": "https://api.github.com/users/Qubitium/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is to convert the slow tokenizer to the fast format (which the conversion script should do once and for all @ArthurZucker ). You should do `tokenizer.save_pretrained(some_path)` and then copy the fast tokenizer file in the folder where you have your converted LLaMA model to avoid having the slowdown more than once as a workaround @diegomontoya until we fix the conversion script.", "Yep the llama conversion script reactor happened before the llama fast tokenizer was around. Will open a PR to save the fast ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
### System Info Transformers [head] CUDA 11.8 PyTorch nightly 2.1 Ubuntu 22.04 ### Reproduction Is it normal for the new default "fast" LlamaTokenizer to load so slowly on a fairly new CPU? Imagine the load time on a 2018 Intel Xeon. The model is a LLaMA 7B converted to HF using the latest script within transformers/models/llama at head. Each cold load takes ~90s. ``` # this will take 90s to load on 5900x + nvme # load llama 7b model converted from facebook to hf tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True) ``` ### Expected behavior Load in seconds.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22669/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22668/comments
https://api.github.com/repos/huggingface/transformers/issues/22668/events
https://github.com/huggingface/transformers/issues/22668
1,659,468,403
I_kwDOCUB6oc5i6Xpz
22,668
Why run_t5_mlm_flax.py does not produces model weight file etc?
{ "login": "gundalav", "id": 8143832, "node_id": "MDQ6VXNlcjgxNDM4MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/8143832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gundalav", "html_url": "https://github.com/gundalav", "followers_url": "https://api.github.com/users/gundalav/followers", "following_url": "https://api.github.com/users/gundalav/following{/other_user}", "gists_url": "https://api.github.com/users/gundalav/gists{/gist_id}", "starred_url": "https://api.github.com/users/gundalav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gundalav/subscriptions", "organizations_url": "https://api.github.com/users/gundalav/orgs", "repos_url": "https://api.github.com/users/gundalav/repos", "events_url": "https://api.github.com/users/gundalav/events{/privacy}", "received_events_url": "https://api.github.com/users/gundalav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @gundalav, it looks like your number of training steps is less than your number of save steps (10000). We save the model and tokenizer every `save_steps` training steps:\r\nhttps://github.com/huggingface/transformers/blob/aec10d162f59d809ead3990ef78c51918b622f38/examples/flax/language-modeling/run_t5_mlm_flax.py#L949\r\n\r\nSince `save_steps` is greater than your total number of training steps, we never hit this threshold. If you reduce your number of `save_steps` to 10, you'll see that the weights file, config and tokenizer are saved every 10 steps. You can then change your `save_steps` based on your total number of training steps for an appropriate value (e.g. set save steps to ~10% of your total train steps, so that you save 10 checkpoints during training)", "@sanchit-gandhi Thanks so much! It works like charm!", "Glad to hear that @gundalav! " ]
1,680
1,681
1,681
NONE
null
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.15.0-1020-aws-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.13.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.3 (gpu) - Jax version: 0.3.25 - JaxLib version: 0.3.25 - Using GPU in script?: YES - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger @patrickvonplaten @sanchit-gandhi ### Information - [X] The official example scripts - [x] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I was trying to reproduce this tutorial on [**T5-like span masked-language-modeling**](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#t5-like-span-masked-language-modeling). I have the following code `tokenizing_and_configing.py`: ``` import datasets from t5_tokenizer_model import SentencePieceUnigramTokenizer from transformers import T5Config vocab_size = 32_000 input_sentence_size = None # Calculate the total number of samples in the dataset total_samples = datasets.load_dataset( "nthngdy/oscar-mini", name="unshuffled_deduplicated_no", split="train" ).num_rows # Calculate one thirtieth of the total samples subset_samples = total_samples // 30 # Load one thirtieth of the dataset dataset = datasets.load_dataset( "nthngdy/oscar-mini", name="unshuffled_deduplicated_no", split=f"train[:{subset_samples}]", ) tokenizer = SentencePieceUnigramTokenizer( unk_token="<unk>", eos_token="</s>", pad_token="<pad>" ) # Build an iterator over this dataset def batch_iterator(input_sentence_size=None): if input_sentence_size is None: input_sentence_size = len(dataset) batch_length = 100 for i in range(0, input_sentence_size, batch_length): yield dataset[i : i + batch_length]["text"] print("Train Tokenizer") # Train tokenizer tokenizer.train_from_iterator( iterator=batch_iterator(input_sentence_size=input_sentence_size), vocab_size=vocab_size, show_progress=True, ) # Save files to disk tokenizer.save("./models/norwegian-t5-base/tokenizer.json") print("DONE TOKENIZING ") # CONFIG config = T5Config.from_pretrained( "google/t5-v1_1-small", vocab_size=tokenizer.get_vocab_size() # "google/t5-v1_1-base", vocab_size=tokenizer.get_vocab_size() ) config.save_pretrained("./models/norwegian-t5-base") print("DONE SAVING TOKENIZER ") ``` The dependency can be found here: - 📗 [`t5_tokenizer_model.py`](https://raw.githubusercontent.com/huggingface/transformers/main/examples/flax/language-modeling/t5_tokenizer_model.py) After `tokenizing_and_configing.py` is completed, I run this code: ``` python run_t5_mlm_flax.py \ --output_dir="./models/norwegian-t5-base" \ --model_type="t5" \ --config_name="./models/norwegian-t5-base" \ --tokenizer_name="./models/norwegian-t5-base" \ --dataset_name="nthngdy/oscar-mini" \ --dataset_config_name="unshuffled_deduplicated_no" \ --max_seq_length="512" \ --per_device_train_batch_size="32" \ --per_device_eval_batch_size="32" \ --adafactor \ --learning_rate="0.005" \ --weight_decay="0.001" \ --warmup_steps="2000" \ --overwrite_output_dir \ --logging_steps="500" \ --save_steps="10000" \ --eval_steps="2500" \ --do_train \ --do_eval ``` The full code for `run_t5_mlm_flax.py` can be found [here](https://raw.githubusercontent.com/huggingface/transformers/main/examples/flax/language-modeling/run_t5_mlm_flax.py). But after `run_t5_mlm_flax.py` is completed, I can only find these files in `./models/norwegian-t5-base`: ``` . └── norwegian-t5-base ├── config.json ├── events.out.tfevents.1680920382.ip-172-31-30-81.71782.0.v2 ├── tokenizer.json └── eval_results.json ``` What's wrong with my process? I expect it to produce more files (see the Expected behavior section). Additional note: I don't experience any error messages AT ALL. Everything completes smoothly without interruption. I'm using Amazon AWS p3.2xlarge; cuda_11.2.r11.2/compiler.29618528_0 ### Expected behavior I expect it to produce more files like these: 1. flax_model.msgpack: This file contains the weights of the fine-tuned Flax model. 2. tokenizer_config.json: This file contains the tokenizer configuration, such as the vocabulary size and special tokens. 3. training_args.bin: This file contains the training arguments used during fine-tuning, such as learning rate and batch size. 4. merges.txt: This file is part of the tokenizer and contains the subword merges. 5. vocab.json: This file is part of the tokenizer and contains the vocabulary mappings. 6. train.log: Logs from the training process, including loss, learning rate, and other metrics. 7. Checkpoint files: If you have enabled checkpoints during training, you will find checkpoint files containing the model weights at specific training steps.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22668/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22667/comments
https://api.github.com/repos/huggingface/transformers/issues/22667/events
https://github.com/huggingface/transformers/pull/22667
1,659,466,723
PR_kwDOCUB6oc5N3u8P
22,667
Update some `MarkupLM` tests' expected values
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Merge now. Feel free to leave comments if any :-)" ]
1,680
1,681
1,681
COLLABORATOR
null
# What does this PR do? Need to update some expected values in test files after #22302.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22667/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22667", "html_url": "https://github.com/huggingface/transformers/pull/22667", "diff_url": "https://github.com/huggingface/transformers/pull/22667.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22667.patch", "merged_at": 1681200034000 }
https://api.github.com/repos/huggingface/transformers/issues/22666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22666/comments
https://api.github.com/repos/huggingface/transformers/issues/22666/events
https://github.com/huggingface/transformers/pull/22666
1,659,407,852
PR_kwDOCUB6oc5N3kdt
22,666
Fix quantization docs typo
{ "login": "python273", "id": 3097956, "node_id": "MDQ6VXNlcjMwOTc5NTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3097956?v=4", "gravatar_id": "", "url": "https://api.github.com/users/python273", "html_url": "https://github.com/python273", "followers_url": "https://api.github.com/users/python273/followers", "following_url": "https://api.github.com/users/python273/following{/other_user}", "gists_url": "https://api.github.com/users/python273/gists{/gist_id}", "starred_url": "https://api.github.com/users/python273/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/python273/subscriptions", "organizations_url": "https://api.github.com/users/python273/orgs", "repos_url": "https://api.github.com/users/python273/repos", "events_url": "https://api.github.com/users/python273/events{/privacy}", "received_events_url": "https://api.github.com/users/python273/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger not quite sure where to report. HF blog's rss feed links are broken. Also running in trough https://validator.w3.org/feed/ shows a duplicated article" ]
1,680
1,681
1,681
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22666/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22666", "html_url": "https://github.com/huggingface/transformers/pull/22666", "diff_url": "https://github.com/huggingface/transformers/pull/22666.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22666.patch", "merged_at": 1681131233000 }
https://api.github.com/repos/huggingface/transformers/issues/22665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22665/comments
https://api.github.com/repos/huggingface/transformers/issues/22665/events
https://github.com/huggingface/transformers/pull/22665
1,659,360,138
PR_kwDOCUB6oc5N3an9
22,665
add xformers dep, xformers attn for gpt2
{ "login": "ethansmith2000", "id": 98723285, "node_id": "U_kgDOBeJl1Q", "avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ethansmith2000", "html_url": "https://github.com/ethansmith2000", "followers_url": "https://api.github.com/users/ethansmith2000/followers", "following_url": "https://api.github.com/users/ethansmith2000/following{/other_user}", "gists_url": "https://api.github.com/users/ethansmith2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/ethansmith2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ethansmith2000/subscriptions", "organizations_url": "https://api.github.com/users/ethansmith2000/orgs", "repos_url": "https://api.github.com/users/ethansmith2000/repos", "events_url": "https://api.github.com/users/ethansmith2000/events{/privacy}", "received_events_url": "https://api.github.com/users/ethansmith2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada I don't know if this redundant with the Better Transformer integration.", "Related: https://github.com/huggingface/transformers/pull/22386\r\n\r\nI don't think it's an issue to collide - if it is just better in most cases, having it default to users makes sense, in transformers natively (with some refactoring). However, for now, pytorch's sdpa has some limitations:\r\n* no scale argument (some archs do not scale query/key)\r\n* no speedup/memory savings for custom attention mask (flash and mem-efficient not supported)\r\n* no support for mixed fp16/fp32, like in some models where softmax is in fp32 while the rest in fp32\r\n* C++ implementation is good for all hardware, mem-efficient and flash are Nvidia-only", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
# What does this PR do? Add xformers as a dependency and implement xformers attention for gpt2. I am a bit of a novice at this, but I would like to contribute to helping all models in the transformers library gain xformers support. This PR is likely not ready to merge, but I was hoping I could get some feedback on what I would be able to provide. Fixes # (issue) Reduces VRAM usage and increases speed
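For context, a rough sketch of the kernel this PR builds on (not the PR's actual diff): xformers' memory-efficient attention with a causal mask over `(batch, seq_len, n_heads, head_dim)` tensors:
```python
import torch
import xformers.ops as xops

b, s, h, d = 2, 128, 12, 64  # batch, sequence length, heads, head dim
q = torch.randn(b, s, h, d, device="cuda", dtype=torch.float16)
k = torch.randn(b, s, h, d, device="cuda", dtype=torch.float16)
v = torch.randn(b, s, h, d, device="cuda", dtype=torch.float16)

# Causal (GPT-style) attention without materializing the full attention matrix.
out = xops.memory_efficient_attention(q, k, v, attn_bias=xops.LowerTriangularMask())
print(out.shape)  # torch.Size([2, 128, 12, 64])
```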
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22665/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22665", "html_url": "https://github.com/huggingface/transformers/pull/22665", "diff_url": "https://github.com/huggingface/transformers/pull/22665.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22665.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22664
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22664/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22664/comments
https://api.github.com/repos/huggingface/transformers/issues/22664/events
https://github.com/huggingface/transformers/pull/22664
1,659,339,496
PR_kwDOCUB6oc5N3W1M
22,664
Generate: add CJK support to TextStreamer
{ "login": "bcol23", "id": 12250696, "node_id": "MDQ6VXNlcjEyMjUwNjk2", "avatar_url": "https://avatars.githubusercontent.com/u/12250696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bcol23", "html_url": "https://github.com/bcol23", "followers_url": "https://api.github.com/users/bcol23/followers", "following_url": "https://api.github.com/users/bcol23/following{/other_user}", "gists_url": "https://api.github.com/users/bcol23/gists{/gist_id}", "starred_url": "https://api.github.com/users/bcol23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bcol23/subscriptions", "organizations_url": "https://api.github.com/users/bcol23/orgs", "repos_url": "https://api.github.com/users/bcol23/repos", "events_url": "https://api.github.com/users/bcol23/events{/privacy}", "received_events_url": "https://api.github.com/users/bcol23/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "cc @gante ", "Yes of course, I use this tiny script for screen recording.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer\r\n\r\n\r\nclass TextCJKStreamer(TextStreamer):\r\n def put(self, value):\r\n \"\"\"\r\n Recives tokens, decodes them, and prints them to stdout as soon as they form entire words.\r\n \"\"\"\r\n if len(value.shape) > 1 and value.shape[0] > 1:\r\n raise ValueError(\"TextStreamer only supports batch size 1\")\r\n elif len(value.shape) > 1:\r\n value = value[0]\r\n\r\n if self.skip_prompt and self.next_tokens_are_prompt:\r\n self.next_tokens_are_prompt = False\r\n return\r\n\r\n # Add the new token to the cache and decodes the entire thing.\r\n self.token_cache.extend(value.tolist())\r\n text = self.tokenizer.decode(self.token_cache, **self.decode_kwargs)\r\n\r\n # After the symbol for a new line, we flush the cache.\r\n if text.endswith(\"\\n\"):\r\n printable_text = text[self.print_len :]\r\n self.token_cache = []\r\n self.print_len = 0\r\n # If the last token is a CJK character, we print the characters.\r\n elif len(text) > 0 and self._is_chinese_char(ord(text[-1])):\r\n printable_text = text[self.print_len :]\r\n self.print_len += len(printable_text)\r\n # Otherwise, prints until the last space char (simple heuristic to avoid printing incomplete words,\r\n # which may change with the subsequent token -- there are probably smarter ways to do this!)\r\n else:\r\n printable_text = text[self.print_len : text.rfind(\" \") + 1]\r\n self.print_len += len(printable_text)\r\n\r\n self.on_finalized_text(printable_text)\r\n\r\n def _is_chinese_char(self, cp):\r\n \"\"\"Checks whether CP is the codepoint of a CJK character.\"\"\"\r\n # This defines a \"chinese character\" as anything in the CJK Unicode block:\r\n # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)\r\n #\r\n # Note that the CJK Unicode block is NOT all Japanese and Korean characters,\r\n # despite its name. The modern Korean Hangul alphabet is a different block,\r\n # as is Japanese Hiragana and Katakana. 
Those alphabets are used to write\r\n # space-separated words, so they are not treated specially and handled\r\n # like the all of the other languages.\r\n if (\r\n (cp >= 0x4E00 and cp <= 0x9FFF)\r\n or (cp >= 0x3400 and cp <= 0x4DBF) #\r\n or (cp >= 0x20000 and cp <= 0x2A6DF) #\r\n or (cp >= 0x2A700 and cp <= 0x2B73F) #\r\n or (cp >= 0x2B740 and cp <= 0x2B81F) #\r\n or (cp >= 0x2B820 and cp <= 0x2CEAF) #\r\n or (cp >= 0xF900 and cp <= 0xFAFF)\r\n or (cp >= 0x2F800 and cp <= 0x2FA1F) #\r\n ): #\r\n return True\r\n\r\n return False\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bigscience/bloomz-560m\")\r\n# Use CPU to make generation slow\r\nmodel = AutoModelForCausalLM.from_pretrained(\"bigscience/bloomz-560m\")\r\nstreamer = TextStreamer(tokenizer, skip_prompt=True)\r\ncjk_streamer = TextCJKStreamer(tokenizer, skip_prompt=True)\r\n\r\nprompt = \"一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,\"\r\ntokenized_inputs = tokenizer([prompt], return_tensors=\"pt\")\r\nprint(\"Origin TextStreamer:\")\r\ntokenized_inputs = tokenized_inputs.to(model.device)\r\n_ = model.generate(\r\n **tokenized_inputs,\r\n do_sample=False,\r\n streamer=streamer,\r\n min_new_tokens=64,\r\n max_new_tokens=128,\r\n)\r\nprint(\"CJK TextStreamer\")\r\n_ = model.generate(\r\n **tokenized_inputs,\r\n do_sample=False,\r\n streamer=cjk_streamer,\r\n min_new_tokens=64,\r\n max_new_tokens=128,\r\n)\r\n\r\nprompt = \"Suggest at least five related search terms to 'Mạng neural nhân tạo'.\"\r\ntokenized_inputs = tokenizer([prompt], return_tensors=\"pt\")\r\nprint(\"Origin TextStreamer:\")\r\ntokenized_inputs = tokenized_inputs.to(model.device)\r\n_ = model.generate(\r\n **tokenized_inputs,\r\n do_sample=False,\r\n streamer=streamer,\r\n min_new_tokens=64,\r\n max_new_tokens=128,\r\n)\r\nprint(\"CJK TextStreamer\")\r\n_ = model.generate(\r\n **tokenized_inputs,\r\n do_sample=False,\r\n streamer=cjk_streamer,\r\n min_new_tokens=64,\r\n max_new_tokens=128,\r\n)\r\n```\r\n\r\nThe model and prompts are from https://huggingface.co/bigscience/bloomz-560m. And here is the comparison. As you can see, before the Chinese text only prints when it meets \"。\".\r\n\r\nhttps://user-images.githubusercontent.com/12250696/232178058-fdf2a7f7-5db0-4b3a-833c-09f097fd0ed6.mov\r\n\r\n\r\nhttps://user-images.githubusercontent.com/12250696/232178065-b0afd810-34dc-4aae-a370-5b35cb4ca9ed.mov\r\n\r\n", "@bcol23 thank you for adding the screen recordings for future reference 🙏 \r\n\r\nAnd thank you for making `transformers` a little bit more inclusive 🤗 " ]
1,680
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? This pull request adds support for streaming CJK (Chinese, Japanese, Korean) characters to the TextStreamer class. It now flushes the token cache if the last token is a CJK character, in addition to flushing it if the text ends with `"\n"` or `" "`. This prevents CJK characters from being stuck in `token_cache`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22664/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22664/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22664", "html_url": "https://github.com/huggingface/transformers/pull/22664", "diff_url": "https://github.com/huggingface/transformers/pull/22664.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22664.patch", "merged_at": 1681551308000 }
https://api.github.com/repos/huggingface/transformers/issues/22663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22663/comments
https://api.github.com/repos/huggingface/transformers/issues/22663/events
https://github.com/huggingface/transformers/pull/22663
1,659,164,274
PR_kwDOCUB6oc5N2yWb
22,663
moved labels to the same device as logits for BLOOM, GPT Neo, GPT NeoX, RoBERTa and VIT models
{ "login": "iamarunbrahma", "id": 6504730, "node_id": "MDQ6VXNlcjY1MDQ3MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/6504730?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iamarunbrahma", "html_url": "https://github.com/iamarunbrahma", "followers_url": "https://api.github.com/users/iamarunbrahma/followers", "following_url": "https://api.github.com/users/iamarunbrahma/following{/other_user}", "gists_url": "https://api.github.com/users/iamarunbrahma/gists{/gist_id}", "starred_url": "https://api.github.com/users/iamarunbrahma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iamarunbrahma/subscriptions", "organizations_url": "https://api.github.com/users/iamarunbrahma/orgs", "repos_url": "https://api.github.com/users/iamarunbrahma/repos", "events_url": "https://api.github.com/users/iamarunbrahma/events{/privacy}", "received_events_url": "https://api.github.com/users/iamarunbrahma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? As suggested in https://github.com/huggingface/transformers/issues/22561, this PR moves the labels to the same device as the logits for the `BLOOM`, `GPT Neo`, `GPT NeoX`, `RoBERTa` and `VIT` models. @sgugger Could you please review this?
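For reference, the pattern these changes apply is a one-line device move before the loss computation; a minimal self-contained sketch (the tensor shapes are placeholders, not the exact lines changed):

```python
import torch
import torch.nn as nn

# When model parallelism places the output head and the labels on different
# devices, move the labels to the logits' device before computing the loss.
logits = torch.randn(2, 5, 10, device="cpu")  # stand-in for model output
labels = torch.randint(0, 10, (2, 5))          # may live on another device

loss_fct = nn.CrossEntropyLoss()
labels = labels.to(logits.device)  # the one-line pattern this PR applies
loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
```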
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22663/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22663", "html_url": "https://github.com/huggingface/transformers/pull/22663", "diff_url": "https://github.com/huggingface/transformers/pull/22663.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22663.patch", "merged_at": 1680901495000 }
https://api.github.com/repos/huggingface/transformers/issues/22662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22662/comments
https://api.github.com/repos/huggingface/transformers/issues/22662/events
https://github.com/huggingface/transformers/issues/22662
1,659,137,945
I_kwDOCUB6oc5i5G-Z
22,662
run_text_classification.py: error: the following arguments are required: --model_name_or_path
{ "login": "iamemilyccc", "id": 72780802, "node_id": "MDQ6VXNlcjcyNzgwODAy", "avatar_url": "https://avatars.githubusercontent.com/u/72780802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iamemilyccc", "html_url": "https://github.com/iamemilyccc", "followers_url": "https://api.github.com/users/iamemilyccc/followers", "following_url": "https://api.github.com/users/iamemilyccc/following{/other_user}", "gists_url": "https://api.github.com/users/iamemilyccc/gists{/gist_id}", "starred_url": "https://api.github.com/users/iamemilyccc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iamemilyccc/subscriptions", "organizations_url": "https://api.github.com/users/iamemilyccc/orgs", "repos_url": "https://api.github.com/users/iamemilyccc/repos", "events_url": "https://api.github.com/users/iamemilyccc/events{/privacy}", "received_events_url": "https://api.github.com/users/iamemilyccc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
Hi! I'm running 'transformers/examples/tensorflow/text-classification/run_text_classification.py' and got both "error: the following arguments are required: --model_name_or_path" and "--model_name_or_path=gpt2: command not found" at the same time. Could you please help?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22662/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22661/comments
https://api.github.com/repos/huggingface/transformers/issues/22661/events
https://github.com/huggingface/transformers/pull/22661
1,659,130,581
PR_kwDOCUB6oc5N2rW7
22,661
Make dynamic code work with offline mode
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you!!!", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,681
1,681
COLLABORATOR
null
# What does this PR do? Using dynamic code on the Hub won't work in offline mode, even if the model is cached. This is because of an old way of getting the commit hash that I put there before we had the commit hash returned in the e-tag. Now it's very easy to get it, so this PR changes that line of code and adds a test to make sure we don't regress. cc @VictorSanh and @leot13 since you reported the bug.
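For context, a sketch of how a commit hash can be recovered from a cached file path; the regex over huggingface_hub's `snapshots/<sha>/` cache layout is an illustrative assumption, not necessarily the exact helper this PR touches:

```python
import re
from typing import Optional

# The hub cache stores files under ".../snapshots/<commit_sha>/<file>",
# so the hash of a cached file can be read straight off its resolved path,
# which is what makes offline resolution possible.
_COMMIT_RE = re.compile(r"snapshots/([0-9a-f]{40})")


def commit_hash_from_cached_file(resolved_file: str) -> Optional[str]:
    match = _COMMIT_RE.search(resolved_file)
    return match.group(1) if match else None


path = (
    "~/.cache/huggingface/hub/models--user--repo/snapshots/"
    "0123456789abcdef0123456789abcdef01234567/config.json"
)
print(commit_hash_from_cached_file(path))  # -> the 40-char commit sha
```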
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22661/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22661/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22661", "html_url": "https://github.com/huggingface/transformers/pull/22661", "diff_url": "https://github.com/huggingface/transformers/pull/22661.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22661.patch", "merged_at": 1681130983000 }
https://api.github.com/repos/huggingface/transformers/issues/22660
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22660/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22660/comments
https://api.github.com/repos/huggingface/transformers/issues/22660/events
https://github.com/huggingface/transformers/pull/22660
1,659,056,898
PR_kwDOCUB6oc5N2cbZ
22,660
Remove 2 failing ONNX conversion tests
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "OK. But should we do something like removing CI regarding this? Currently failing tests pop up.", "_The documentation is not available anymore as the PR was closed or merged._", "I think you can remove the tests as well.", "@sgugger Just to confirm, we want/could remove all things like \r\n\r\n- `ConvertCommand` in `src/transformers/commands/transformers_cli.py`\r\n- `export_with_transformers` in `src/transformers/onnx/__main__.py`\r\n- the file `src/transformers/onnx/convert.py` and any test using this fle\r\n\r\nalso cc @michaelbenayoun @fxmarty ", "No we're not removing code, just the tests if they start failing.", "OK, glad I ask!", "Ping @sgugger again to draw a bit of his attention." ]
1,680
1,681
1,681
COLLABORATOR
null
# What does this PR do? After #22212, two tests started to fail.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22660/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22660", "html_url": "https://github.com/huggingface/transformers/pull/22660", "diff_url": "https://github.com/huggingface/transformers/pull/22660.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22660.patch", "merged_at": 1681219592000 }
https://api.github.com/repos/huggingface/transformers/issues/22659
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22659/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22659/comments
https://api.github.com/repos/huggingface/transformers/issues/22659/events
https://github.com/huggingface/transformers/pull/22659
1,659,044,919
PR_kwDOCUB6oc5N2Z9b
22,659
Generate: add API warning to streamers
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,684
1,680
MEMBER
null
# What does this PR do? The API for the streamers is still being worked on, and will not be stable in time for the next release. This PR adds a warning regarding potential future changes in the API.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22659/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22659", "html_url": "https://github.com/huggingface/transformers/pull/22659", "diff_url": "https://github.com/huggingface/transformers/pull/22659.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22659.patch", "merged_at": 1680891320000 }
https://api.github.com/repos/huggingface/transformers/issues/22658
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22658/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22658/comments
https://api.github.com/repos/huggingface/transformers/issues/22658/events
https://github.com/huggingface/transformers/pull/22658
1,659,043,547
PR_kwDOCUB6oc5N2Zrb
22,658
Revert migration of setup to pyproject.toml
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? As mentioned in #22599, the migration of the setup to pyproject is causing some issues for editable installs on some setups. This PR reverts that migration and adds the setup.py to the formatted files. (Note that I could not directly revert the original PR due to some of its changes being already reverted in #22587 ) Fixes #22599
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22658/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22658", "html_url": "https://github.com/huggingface/transformers/pull/22658", "diff_url": "https://github.com/huggingface/transformers/pull/22658.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22658.patch", "merged_at": 1680894524000 }
https://api.github.com/repos/huggingface/transformers/issues/22657
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22657/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22657/comments
https://api.github.com/repos/huggingface/transformers/issues/22657/events
https://github.com/huggingface/transformers/pull/22657
1,658,973,212
PR_kwDOCUB6oc5N2LZ7
22,657
[tokenization] do not push special file
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? Prevent pushing the path of the special tokens map file
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22657/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22657", "html_url": "https://github.com/huggingface/transformers/pull/22657", "diff_url": "https://github.com/huggingface/transformers/pull/22657.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22657.patch", "merged_at": 1680891156000 }
https://api.github.com/repos/huggingface/transformers/issues/22656
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22656/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22656/comments
https://api.github.com/repos/huggingface/transformers/issues/22656/events
https://github.com/huggingface/transformers/pull/22656
1,658,957,804
PR_kwDOCUB6oc5N2IRJ
22,656
Reverting Deta cloning mechanism.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "There is however `test_can_use_safetensors` failing after this PR. Is this test still relevant (at least while we keep the changes in this PR)", "> There is however test_can_use_safetensors failing after this PR. Is this test still relevant (at least while we keep the changes in this PR)\r\n\r\nThe new code should fix everything.\r\n\r\n@sgugger for a new review since the change has evolved quite a bit and is not a simple revert anymore.\r\nAdded inline comments in the PR to explain what's going on.\r\n\r\n", "> So we tried it your way and it doesn't work. Can we try to use Accelerate to detect the tied weights instead as suggested initially?\r\n\r\nBecause `find_tied_weights` looks at the model, where as here we look at the state_dict, which can be passed directly to the function. In both functions the `state_dict` is the source of truth, not the model, isn't it ?\r\n\r\n\r\nWe could definitely use `find_tied_weights` and it would most likely pass the tests, but it wouldn't be exactly looking at the same thing. State dict is what is coming in, find_tied_weights is looking where it's being put on. (in from_pretrained, opposite in save_pretrained). In general they should be the same. But not necessarily always.\r\n\r\nFor instance, I wonder what happens for buffers.\r\n\r\n> This will ignore the whole state dict as soon as device_map=\"auto\" or low_cpu_mem_usage=True.\r\n\r\nWhy ? It seems you're using the hash (via `is`) in accelerate, I will switch to that since we want entirely shared tensors like in accelerate.", "> Why ? It seems you're using the hash (via is) in accelerate, I will switch to that since we want entirely shared tensors like in accelerate.\r\n\r\nSo actually `hash` doesn't seem to work either, you can have shared buffer and still different hashes.\r\nI'll try to exhibit a simple example, but deta `model_decoder.class_embed.n.bias` and `class_embed.n.bias` do share the buffer, and yet don't have the same hash.\r\n\r\nThis exhibits the different between find_tied_weights and the state_dict. Here the tensors from the state_dict don't share the hash, while the parameters do on the model, yet the tensors on the state dict do share memory.\r\nIn this particular case, using find_tied_weights would work, but that also means the opposite is possible.", "In both situations, you have access to the model, and `find_tied_weights` will give you a list of names that are compatible with the `state_dict` of the model.\r\n\r\n> In this particular case, using find_tied_weights would work, but that also means the opposite is possible.\r\n\r\nIf this situation (the opposite) does not appear in Transformers, let's just use `find_tied_weights`.\r\n\r\nI also would like to drive the point home that `safetensors` not dealing with shared weights makes it unusable in practice in other libs: see what we have to do here... and we really want to use `safetensors`. How are we going to convince other users?", "> makes it unusable in practice\r\n\r\nWhy are we even caring about `_keys_to_ignore` and `tie_weights` if it's so inconvenient ?\r\nWhy are we trying to even find tied weights in accelerate ?\r\nHow do we expect to use safetensors for the TF models, since sharing doesn't exist over there ?\r\n", "In order to help with ease of use of `safetensors` by itself I created this PR:\r\n\r\nhttps://github.com/huggingface/safetensors/pull/236\r\n\r\nwhich sorts of mimics what is done here. 
\r\n\r\nHowever I still think this PR and the mechanism in transformer should be kept, since `_keys_to_ignore` are very good at hinting which keys we should keep, and which to drop, information which is not available in `safetensors` directly.\r\nAlso modification are shallower here since it doesn't touch `state_dict` and `load_state_dict` which the proposed methods to have to change.", "> Thanks for considering shared weights in `safetensors` directly. I agree it would still be cleaner to have the same kind of mechanism in Transformers. Could you please explain to me once again why the hash check does not work for the first changes in the PR (dropping weights in the checkpoint before passing it to safetensors). I don't think we ever tie weights in Transformers other than just setting the same tensors.\r\n\r\nMostly this:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L2146\r\n```python\r\n state_dict = kwargs.pop(\"state_dict\", None)\r\n ```\r\n Users can send a state_dict, not linked to `self` to this PRs tried to look only at the `state_dict`, instead of `self`.\r\n This is indeed a bit of an edge case.\r\n\r\nThen there are even further edge cases:\r\n\r\n```python\r\n class Model(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.a = torch.nn.Linear(100, 100)\r\n self.b = self.a\r\n\r\nmodel = Model()\r\nassert model.a is model.b # OK !\r\n```\r\n\r\n```python\r\nA = torch.zeros((1000, 100))\r\na = A[:100]\r\nmodel.a.weight = nn.Parameter(a)\r\nmodel.b.weight = model.a.weight\r\nassert model.a is model.b # Well indeed it's the same parameter, but both are shared with respect to a larger tensor\r\n```\r\n\r\n```python\r\n class NoSharedModel(torch.nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n self.a = torch.nn.Linear(100, 100)\r\n self.b = torch.nn.Linear(100, 100)\r\n \r\nmodel = NoSharedmodel()\r\nA = torch.zeros((100, 100))\r\nmodel.a.weight = nn.Parameter(A)\r\nmodel.b.weight = nn.Parameter(A[:10])\r\n\r\nassert model.a.weight is not model.b .weight # A is not B in parameters, however, the underlying tensors are indeed shared\r\n```\r\n\r\nI haven't looked at that deeply when fintune occurs to see if the autograd starts to copy the tensors\r\nDuring `state_dict()` will give back `a` and `b` as shared tensors, yet the params don't have the same hash.\r\n\r\nIf you want I could take a look at `accelerate` shared params function and see if this applies. There's a lot of weird things\r\nwhen playing super deeply with this. I discovered a lot of behavior with Deta from this PR.\r\n\r\nBut the biggest reason, really is the optional `state_dict` whereas `accelerate` looks directly at the model. Within `from_pretrained` looking at the model is better in this case since what matters is the users' model rather than the state_dict coming from file (be it pytorch or safetensors)\r\n\r\n> \r\n> Apart from that, just rebasing on main should be necessary here.\r\n> \r\n> Note that I will rework the constants in future work to have one distinct key for the tied weights (as sometimes they are not tied and we are currently not warning the user if they are missing), but it's orthogonal to this PR.\r\n\r\nGreat ! 
\r\n\r\n", "Seeing the rebase, `hash` doesn't work on tensors unfortunately:\r\n\r\n```python\r\nimport torch\r\n\r\nA = torch.zeros((10, 10))\r\nB = A[1]\r\nA.untyped_storage().data_ptr() == B.untyped_storage().data_ptr()\r\nhash(A) != hash(B)\r\n```", "> (which will become the default utlimately)\r\n\r\nHurray !!!\r\n", "Failing tests seem to be linked to newly release huggingface_hub==0.14.0\r\n\r\n@sgugger Merge if you think it's OK, I'm going to not merge given this PR affects core modeling." ]
1,680
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? This one is quite odd. With the revert the slow test will work (I guess what we care most about): ```python from transformers import AutoImageProcessor, DetaForObjectDetection from PIL import Image import requests import torch url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large") model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) target_sizes = torch.tensor([image.size[::-1]]) results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0] print(results) ``` However if I incorporate this: ``` model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large") model.save_pretrained("./tmp") model = DetaForObjectDetection.from_pretrained("./tmp") ``` ~Then, the output is garbage again (this isn't using safetensors and is not linked to the original change). I even tried to revert the PR that introduced the bug.~ The change of output **is** due to safetensors. I need to thoroughly check this. This revert will fix the slow test anyway. I think something is not properly set up in this model, because the uploaded model seems to have those layers NOT linked (hence the copy.deepcopy) but the rest of the configuration seems to assume they are, hence the issue, maybe? Fixes https://github.com/huggingface/transformers/pull/22437#issuecomment-1500356727 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. 
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22656/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22656", "html_url": "https://github.com/huggingface/transformers/pull/22656", "diff_url": "https://github.com/huggingface/transformers/pull/22656.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22656.patch", "merged_at": 1682349876000 }
https://api.github.com/repos/huggingface/transformers/issues/22655
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22655/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22655/comments
https://api.github.com/repos/huggingface/transformers/issues/22655/events
https://github.com/huggingface/transformers/pull/22655
1,658,911,610
PR_kwDOCUB6oc5N1--i
22,655
🌐 [i18n-KO] Translated `sequence_classification.mdx` to Korean
{ "login": "0525hhgus", "id": 47289574, "node_id": "MDQ6VXNlcjQ3Mjg5NTc0", "avatar_url": "https://avatars.githubusercontent.com/u/47289574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0525hhgus", "html_url": "https://github.com/0525hhgus", "followers_url": "https://api.github.com/users/0525hhgus/followers", "following_url": "https://api.github.com/users/0525hhgus/following{/other_user}", "gists_url": "https://api.github.com/users/0525hhgus/gists{/gist_id}", "starred_url": "https://api.github.com/users/0525hhgus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0525hhgus/subscriptions", "organizations_url": "https://api.github.com/users/0525hhgus/orgs", "repos_url": "https://api.github.com/users/0525hhgus/repos", "events_url": "https://api.github.com/users/0525hhgus/events{/privacy}", "received_events_url": "https://api.github.com/users/0525hhgus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "May you please review this PR?\r\n@sgugger, @ArthurZucker, @eunseojo" ]
1,680
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Translated the `tasks/sequence_classification.mdx` file of the documentation to Korean. - The file name is `sequence_classification.mdx`, but the document name is `text classification`. - Currently, it is being revised for consistent vocabulary. Thank you in advance for your review:) Part of https://github.com/huggingface/transformers/issues/20179 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- This is the pre-submission checklist; it might be even better to wrap PseudoLab's own checklist in a <details> tag. --> ## Who can review? <!-- Please only reveal the comment below, which requests a review from Hugging Face staff, after the PseudoLab team review is finished! --> Team PseudoLab, could you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd Could you please review this PR? @sgugger, @ArthurZucker, @eunseojo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22655/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22655", "html_url": "https://github.com/huggingface/transformers/pull/22655", "diff_url": "https://github.com/huggingface/transformers/pull/22655.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22655.patch", "merged_at": 1681436436000 }
https://api.github.com/repos/huggingface/transformers/issues/22654
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22654/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22654/comments
https://api.github.com/repos/huggingface/transformers/issues/22654/events
https://github.com/huggingface/transformers/pull/22654
1,658,887,785
PR_kwDOCUB6oc5N15wl
22,654
Add Segment Anything Model (SAM)
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I have few comments for the reviewers before starting the review, IMO we should not expose `SamPromptEncoder` and `SamMaskDecoder` inside the main init (contrary to other models such as Blip where we used to expose the text module and the vision module) mainly because these modules cannot be used as a standalone module. The MaskDecoder needs the image embeddings and the points/bounding box/masks embeddings to predict the masks, and the Prompt Encoder is a super small module (just two embedding layers). But both the image encoder and prompt encoder can be called through `get_xxxx_embeddings` method from `SamForMaskGeneration`.\r\nOne last point regarding the PromptEmbedding module, in the paper they mention that this module should also accept textual inputs. However according to the authors this has not been released yet.\r\n\r\ncc @sgugger @amyeroberts just FYI", "Might be a few things here and there, but new pairs of eyes will help us fix fast. Pinging @amyeroberts and @sgugger for a review!", "Merging as I need it to update the pipeline based on reviews. Will adresse remaining comments in a follow up PR" ]
1,680
1,681
1,681
COLLABORATOR
null
# What does this PR do? Original repo: https://github.com/facebookresearch/segment-anything Segment Anything Model (SAM) is a recent model from Meta AI that makes it possible to predict image segmentation masks given an image and various inputs such as bounding boxes, 2D points or previous masks. It is also mentioned in the original paper that the model can take textual input, but this feature has not been released yet in the original repository. The release came with 3 weights, namely: - `sam_vit_b` - `sam_vit_h` - `sam_vit_l` Their main difference is the vision encoder size; the prompt encoder and mask decoder stay the same. According to the paper, for each input, the model predicts 3 binary masks, corresponding to the region where the "object of interest" lives in the image. cc @sgugger @amyeroberts
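To make the intended usage concrete, here is a hedged sketch of the prompt-based API described above. `SamForMaskGeneration` is the name used in the PR discussion, while the processor class and checkpoint id are assumptions, so the final public API may differ:

```python
import requests
import torch
from PIL import Image

# Hedged sketch of prompt-based mask prediction as described in this PR.
# `SamProcessor` and the checkpoint id are assumptions; the released API
# may use different names.
from transformers import SamForMaskGeneration, SamProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")  # assumed id
model = SamForMaskGeneration.from_pretrained("facebook/sam-vit-base")

# One 2D point prompt; per the paper, SAM predicts 3 candidate binary masks.
inputs = processor(image, input_points=[[[450, 600]]], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.pred_masks.shape)  # 3 masks per point prompt
```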
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22654/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 7, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22654/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22654", "html_url": "https://github.com/huggingface/transformers/pull/22654", "diff_url": "https://github.com/huggingface/transformers/pull/22654.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22654.patch", "merged_at": 1681930909000 }
https://api.github.com/repos/huggingface/transformers/issues/22653
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22653/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22653/comments
https://api.github.com/repos/huggingface/transformers/issues/22653/events
https://github.com/huggingface/transformers/pull/22653
1,658,841,123
PR_kwDOCUB6oc5N1wJ4
22,653
Small nit,
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? Fixes #21986
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22653/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22653", "html_url": "https://github.com/huggingface/transformers/pull/22653", "diff_url": "https://github.com/huggingface/transformers/pull/22653.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22653.patch", "merged_at": 1680881363000 }
https://api.github.com/repos/huggingface/transformers/issues/22652
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22652/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22652/comments
https://api.github.com/repos/huggingface/transformers/issues/22652/events
https://github.com/huggingface/transformers/pull/22652
1,658,809,270
PR_kwDOCUB6oc5N1pq9
22,652
Fix `MegaModel` CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? Fix `MegaModel` CI (some tests are skipped too). See comments.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22652/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22652", "html_url": "https://github.com/huggingface/transformers/pull/22652", "diff_url": "https://github.com/huggingface/transformers/pull/22652.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22652.patch", "merged_at": 1680880385000 }
https://api.github.com/repos/huggingface/transformers/issues/22651
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22651/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22651/comments
https://api.github.com/repos/huggingface/transformers/issues/22651/events
https://github.com/huggingface/transformers/issues/22651
1,658,805,629
I_kwDOCUB6oc5i3119
22,651
May I ask when 4.28.0 will be released
{ "login": "lihj1108", "id": 97330930, "node_id": "U_kgDOBc0m8g", "avatar_url": "https://avatars.githubusercontent.com/u/97330930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lihj1108", "html_url": "https://github.com/lihj1108", "followers_url": "https://api.github.com/users/lihj1108/followers", "following_url": "https://api.github.com/users/lihj1108/following{/other_user}", "gists_url": "https://api.github.com/users/lihj1108/gists{/gist_id}", "starred_url": "https://api.github.com/users/lihj1108/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lihj1108/subscriptions", "organizations_url": "https://api.github.com/users/lihj1108/orgs", "repos_url": "https://api.github.com/users/lihj1108/repos", "events_url": "https://api.github.com/users/lihj1108/events{/privacy}", "received_events_url": "https://api.github.com/users/lihj1108/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Next week\r\nNext week\r\nNext week\r\nNext week", "I needed to use TF-BLIP offline, so I created whl. It may help for this week :D \r\n\r\nhttps://www.kaggle.com/datasets/ipythonx/tenp-transformer-4280", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
### Feature request May I ask when 4.28.0 will be released? ### Motivation May I ask when 4.28.0 will be released? ### Your contribution May I ask when 4.28.0 will be released?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22651/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22650
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22650/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22650/comments
https://api.github.com/repos/huggingface/transformers/issues/22650/events
https://github.com/huggingface/transformers/pull/22650
1,658,585,500
PR_kwDOCUB6oc5N07eZ
22,650
Fix typo
{ "login": "Ronalmoo", "id": 44221520, "node_id": "MDQ6VXNlcjQ0MjIxNTIw", "avatar_url": "https://avatars.githubusercontent.com/u/44221520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ronalmoo", "html_url": "https://github.com/Ronalmoo", "followers_url": "https://api.github.com/users/Ronalmoo/followers", "following_url": "https://api.github.com/users/Ronalmoo/following{/other_user}", "gists_url": "https://api.github.com/users/Ronalmoo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ronalmoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ronalmoo/subscriptions", "organizations_url": "https://api.github.com/users/Ronalmoo/orgs", "repos_url": "https://api.github.com/users/Ronalmoo/repos", "events_url": "https://api.github.com/users/Ronalmoo/events{/privacy}", "received_events_url": "https://api.github.com/users/Ronalmoo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you for the fix!" ]
1,680
1,681
1,680
CONTRIBUTOR
null
# What does this PR do? Fixes a typo in trainer.py: "forword" should be "forward". Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22650/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22650", "html_url": "https://github.com/huggingface/transformers/pull/22650", "diff_url": "https://github.com/huggingface/transformers/pull/22650.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22650.patch", "merged_at": 1680871584000 }
https://api.github.com/repos/huggingface/transformers/issues/22649
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22649/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22649/comments
https://api.github.com/repos/huggingface/transformers/issues/22649/events
https://github.com/huggingface/transformers/pull/22649
1,658,564,246
PR_kwDOCUB6oc5N03If
22,649
[OPT] Fix default attention mask size
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm gonna add a test before merging " ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? Fixes #21685, should also help in adding the ONNX configuration in #17771
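For context, a sketch of what "default attention mask size" means here: when no mask is passed during incremental decoding, the all-ones default has to cover the cached past as well as the new tokens. This is an illustrative helper, not the exact code in `modeling_opt.py`:

```python
import torch


def default_attention_mask(input_ids, past_key_values_length: int = 0):
    """Build an all-ones mask spanning the *full* sequence.

    The mask must cover past length + current length, not just the new
    tokens, otherwise cached positions get masked out.
    """
    batch_size, seq_length = input_ids.shape
    return torch.ones(
        batch_size,
        seq_length + past_key_values_length,
        dtype=torch.long,
        device=input_ids.device,
    )


# During incremental decoding only 1 new token comes in, but the mask must
# also cover the 7 cached positions:
ids = torch.tensor([[42]])
print(default_attention_mask(ids, past_key_values_length=7).shape)  # (1, 8)
```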
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22649/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22649", "html_url": "https://github.com/huggingface/transformers/pull/22649", "diff_url": "https://github.com/huggingface/transformers/pull/22649.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22649.patch", "merged_at": 1680891177000 }
https://api.github.com/repos/huggingface/transformers/issues/22648
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22648/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22648/comments
https://api.github.com/repos/huggingface/transformers/issues/22648/events
https://github.com/huggingface/transformers/pull/22648
1,658,543,082
PR_kwDOCUB6oc5N0y1H
22,648
🚨🚨🚨 [`Blip`] Refactor the Blip modeling file + test file 🚨🚨🚨
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22648). All of your documentation changes will be reflected on that endpoint.", "Let's wait our great @sgugger to express his opinion on if we are allowed to change this. \r\nI think it's fine however.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
CONTRIBUTOR
null
# What does this PR do? Removes the file `test_modeling_blip_text`, as its content is totally duplicated inside `test_modeling_blip`, so that we avoid running these tests twice. This PR also refactors the modeling file of `blip` to have a single file for the whole architecture. I also realized that there is no need to have a `BlipTextPretrainedModel`, so I decided to remove that class for a cleaner implementation. Hence, this PR might introduce breaking changes for Blip. cc @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22648/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22648", "html_url": "https://github.com/huggingface/transformers/pull/22648", "diff_url": "https://github.com/huggingface/transformers/pull/22648.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22648.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22647
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22647/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22647/comments
https://api.github.com/repos/huggingface/transformers/issues/22647/events
https://github.com/huggingface/transformers/issues/22647
1,658,367,937
I_kwDOCUB6oc5i2K_B
22,647
OpenAI GPT Model Implementation in Flax
{ "login": "mayankagarwals", "id": 39498938, "node_id": "MDQ6VXNlcjM5NDk4OTM4", "avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayankagarwals", "html_url": "https://github.com/mayankagarwals", "followers_url": "https://api.github.com/users/mayankagarwals/followers", "following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}", "gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}", "starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions", "organizations_url": "https://api.github.com/users/mayankagarwals/orgs", "repos_url": "https://api.github.com/users/mayankagarwals/repos", "events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}", "received_events_url": "https://api.github.com/users/mayankagarwals/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "@sanchit-gandhi ", "@sanchit-gandhi @sgugger Are there any reservations around this? I have gone through GPT architecture and flax code of GPT2. I'm fairly certain this is implementable for exhaustiveness. OpenAI GPT model still sees almost a million downloads a month\r\n\r\nPlease let me know. Would like to start with a draft PR than just rushing in", "Hey @mayankagarwals! Super sorry for not getting back to you earlier here. Let me give you my two cents: the OpenAI GPT model is definitely still super popular amongst PyTorch users (as you say, ~1 mil downloads per month). What we tend to see with Flax users though is a preference for newer, larger models (e.g. OPT, Flan-T5). This is primarily because of how easy it is to run super large models in JAX with data and model parallelism. So whilst I think this PR would be cool for completeness, I think porting a newer, more flashy model might get the JAX/Flax community more excited! How does this sound?", "No worries :) @sanchit-gandhi \r\nYes, I had not gone ahead because of the same skepticism. Would you mind pointing me to what in your opinion might be a model worth digging into and think will benefit hugging face and the community? \r\nI have a good hold on text generation architecture so something aligned there would be better!", "LLaMA could be cool! What I would suggest doing is starting from the Flax GPT-Neo model (since this is the Flax model most similar to LLaMa) and then adding the new bits in", "@sanchit-gandhi I was also thinking of adding a Flax version of LLama (and also GPT-NeoX, maybe others) as some Flax practice. I couldn't find a guide on adding a new framework to an existing model, and I asked on the discord without much avail (but was directed to this issue).\r\n\r\nI'm familiar with the architectures having already ported them to other frameworks where I work.\r\n\r\nIf you could point me in the right direction, I would be happy to port this for you! I wasn't sure if it is as simple as adding a new `modeling_flax_*` file or if there are more parts / some best practices to be aware of.\r\n\r\nThanks 🤗 ", "Hey @vvvm23! In this case, since we already have the PT model, the best thing to do would be to add a new modelling file for flax (`modeling_flax_llama.py`) which is initially copied from the Flax GPT Neo modelling code. You can then start making changes to the Flax code to adapt it to LLama. The reason that we copy from Flax GPT Neo is that it contains optimised code for the attention layer which we should try and re-use for Flax LLama.\r\n\r\nYou'll then need to make sure that the weight names match and that you have equivalence between PyTorch LLama and Flax LLama. To do this, I would recommend creating a 'dummy' version of the PyTorch LLama model:\r\n```python\r\nfrom transformers import LlamaConfig, LlamaForCausalLM\r\n\r\nconfig = LlamaConfig(hidden_size=16, intermediate_size=24, max_position_embeddings=128, num_attention_heads=2, num_hidden_layers=2)\r\n\r\nmodel = LlamaForCausalLM(config)\r\nmodel.save_pretrained(\"./path/to/save\")\r\n```\r\n\r\nAnd then for your test script, load this same model in PyTorch, then Flax (pass `from_pt=True` in the `from_pretrained` call), and verify with random inputs that you get the same logits out when you do a forward pass (example here https://github.com/huggingface/transformers/issues/15476#issue-1121800731)\r\n\r\nYou can then focus on the tests and converting the actual model weights as required. 
Feel free to open a PR and tag me - more than happy to help with the integration here!\r\n\r\n", "Thanks @sanchit-gandhi that was very comprehensive! I'll let you know how I get on. :hugs: ", "Got a bit caught up with real life stuff, but I will be working on this more intensively from Monday, aiming to finish something by end of week.", "@sanchit-gandhi I made a draft PR of my current progress, see #24587. Sorry, I haven't made the full model, been very busy 😓 " ]
1,680
1,688
null
CONTRIBUTOR
null
### Model description https://huggingface.co/openai-gpt today supports TF and PyTorch but not Flax. I'd like to implement that support to enhance the current GPT offering from Hugging Face. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Given that the model is already implemented in the other two frameworks, I'll infer the implementation from those. Please feel free to provide additional resources that can help me wrap this up better and faster.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22647/timeline
null
null
null
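Below is a minimal sketch of the PyTorch/Flax equivalence check recommended in the thread above, using GPT-Neo (which already ships both implementations) as a stand-in, since a Flax LLaMA class did not exist in transformers at the time; the tiny config values mirror the dummy-model recipe and are otherwise arbitrary:

```python
import numpy as np
import torch
from transformers import GPTNeoConfig, GPTNeoForCausalLM, FlaxGPTNeoForCausalLM

# tiny random model so the check runs in seconds
config = GPTNeoConfig(
    hidden_size=16, intermediate_size=24, max_position_embeddings=128,
    num_heads=2, num_layers=2, attention_types=[[["global", "local"], 1]],
)
pt_model = GPTNeoForCausalLM(config)
pt_model.save_pretrained("./dummy-gpt-neo")

# load the same weights in Flax and compare logits on random inputs
fx_model = FlaxGPTNeoForCausalLM.from_pretrained("./dummy-gpt-neo", from_pt=True)
input_ids = np.random.randint(0, config.vocab_size, size=(1, 8))
with torch.no_grad():
    pt_logits = pt_model(torch.tensor(input_ids)).logits.numpy()
fx_logits = np.asarray(fx_model(input_ids=input_ids).logits)
print(np.max(np.abs(pt_logits - fx_logits)))  # should be ~0 within fp32 tolerance
```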
https://api.github.com/repos/huggingface/transformers/issues/22646
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22646/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22646/comments
https://api.github.com/repos/huggingface/transformers/issues/22646/events
https://github.com/huggingface/transformers/issues/22646
1,658,355,967
I_kwDOCUB6oc5i2ID_
22,646
T5Tokenizer, TFT5ForConditionalGeneration Graph execution error using tfa.metrics.CohenKappa
{ "login": "paul590", "id": 24415267, "node_id": "MDQ6VXNlcjI0NDE1MjY3", "avatar_url": "https://avatars.githubusercontent.com/u/24415267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/paul590", "html_url": "https://github.com/paul590", "followers_url": "https://api.github.com/users/paul590/followers", "following_url": "https://api.github.com/users/paul590/following{/other_user}", "gists_url": "https://api.github.com/users/paul590/gists{/gist_id}", "starred_url": "https://api.github.com/users/paul590/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/paul590/subscriptions", "organizations_url": "https://api.github.com/users/paul590/orgs", "repos_url": "https://api.github.com/users/paul590/repos", "events_url": "https://api.github.com/users/paul590/events{/privacy}", "received_events_url": "https://api.github.com/users/paul590/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like the metric does not like your labels. Are you sure it can be used in that case? cc @Rocketknight1 ", "Thank you @sgugger for your response. I was hoping it would work, I ran this same metric through a bert model and using its own tokenizers and I had no issues with it. Is there a way to tweak the fit function to insert the labels as is instead of a tensor?", "I believe the issue is caused by your combination of model and metric. `TFT5ForConditionalGeneration` is a model that outputs text, where the distribution over output tokens is conditioned on some input text. Tasks that are suitable for conditional generation models include summarization and translation. \r\n\r\nWhen using any model that generates text, the number of output classes is equal to the vocabulary size of the model - the model produces a distribution over all possible tokens at each position. However, your metric uses `num_classes=4`. This results in an error because the vocabulary for T5 is thousands of tokens, and so label values can be much higher than 4.\r\n\r\nIf this code worked with a BERT model, this is because BERT models mostly do not generate text. If you used e.g. `TFBertForSequenceClassification` or `TFBertForTokenClassification`, then the number of classes would be much lower. That is because these models predict categories for each token or for the entire sequence, and the number of categories is set by the `num_labels` argument to the model's `from_pretrained` method.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,683
1,683
NONE
null
### System Info - `transformers` version: 4.24.0 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> Hello Huggingface Team, I am currently using the T5 modules: T5Tokenizer, TFT5ForConditionalGeneration with the metric tfa.metrics.CohenKappa from Tensorflow Add-ons library. I have fine-tuned a model to achieve multi-class classification which works when I use a different metric, for example, 'accuracy'. The issue is when I exchange the metric to use CohenKappa I get the error found below. If more information is required please let me know. Thank you in advance! Error: > InvalidArgumentError Traceback (most recent call last) > Cell In[80], line 15 > 6 #model.fit(tokenized_train_data, validation_data=val_dataset, epochs=num_epochs) > 8 model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint( > 9 filepath='/', > 10 save_weights_only=True, > 11 monitor='val_accuracy', > 12 mode='max', > 13 save_best_only=True) > ---> 15 history = t5_model.fit(tokenized_train_data, > 16 validation_data=tokenized_test_data, > 17 callbacks=None,#[model_checkpoint_callback], > 18 batch_size=batch_size, > 19 epochs=num_epochs) > > File ~/anaconda3/lib/python3.10/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs) > 67 filtered_tb = _process_traceback_frames(e.__traceback__) > 68 # To get the full stack trace, call: > 69 # `tf.debugging.disable_traceback_filtering()` > ---> 70 raise e.with_traceback(filtered_tb) from None > 71 finally: > 72 del filtered_tb > > File ~/anaconda3/lib/python3.10/site-packages/tensorflow/python/eager/execute.py:52, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) > 50 try: > 51 ctx.ensure_initialized() > ---> 52 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, > 53 inputs, attrs, num_outputs) > 54 except core._NotOkStatusException as e: > 55 if name is not None: > > InvalidArgumentError: Graph execution error: > > Detected at node 'confusion_matrix/assert_less/Assert/AssertGuard/Assert' defined at (most recent call last): > File "/Users/pj/anaconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main > return _run_code(code, main_globals, None, > File "/Users/pj/anaconda3/lib/python3.10/runpy.py", line 86, in _run_code > exec(code, run_globals) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel_launcher.py", line 17, in <module> > app.launch_new_instance() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/traitlets/config/application.py", line 992, in launch_instance > app.start() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelapp.py", line 711, in start > self.io_loop.start() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 199, in start > self.asyncio_loop.run_forever() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/base_events.py", line 603, in run_forever > self._run_once() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/base_events.py", line 1906, in _run_once > handle._run() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/events.py", line 80, in _run > self._context.run(self._callback, *self._args) > File 
"/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue > await self.process_one() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 499, in process_one > await dispatch(*args) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell > await result > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 729, in execute_request > reply_content = await reply_content > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/ipkernel.py", line 411, in do_execute > res = shell.run_cell( > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/zmqshell.py", line 531, in run_cell > return super().run_cell(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 2961, in run_cell > result = self._run_cell( > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3016, in _run_cell > result = runner(coro) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner > coro.send(None) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3221, in run_cell_async > has_raised = await self.run_ast_nodes(code_ast.body, cell_name, > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3400, in run_ast_nodes > if await self.run_code(code, result, async_=asy): > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3460, in run_code > exec(code_obj, self.user_global_ns, self.user_ns) > File "/var/folders/wv/10kjqk217c5039dg4pbqggh00000gn/T/ipykernel_39051/3646968355.py", line 15, in <module> > history = t5_model.fit(tokenized_train_data, > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler > return fn(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1685, in fit > tmp_logs = self.train_function(iterator) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1284, in train_function > return step_function(self, iterator) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1268, in step_function > outputs = model.distribute_strategy.run(run_step, args=(data,)) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in run_step > outputs = model.train_step(data) > File "/var/folders/wv/10kjqk217c5039dg4pbqggh00000gn/T/ipykernel_39051/2458810848.py", line 36, in train_step > self.compiled_metrics.update_state(labels, tf.argmax(outputs.logits, -1)) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 605, in update_state > metric_obj.update_state(y_t, y_p, sample_weight=mask) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/utils/metrics_utils.py", line 77, in decorated > update_op = update_state_fn(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/metrics/base_metric.py", line 140, in update_state_fn > return ag_update_state(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 150, in update_state > return self._update(y_true, y_pred, sample_weight) > File 
"/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 165, in _update_multi_class_model > return self._update_confusion_matrix(y_true, y_pred, sample_weight) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 193, in _update_confusion_matrix > new_conf_mtx = tf.math.confusion_matrix( > Node: 'confusion_matrix/assert_less/Assert/AssertGuard/Assert' > Detected at node 'confusion_matrix/assert_less/Assert/AssertGuard/Assert' defined at (most recent call last): > File "/Users/pj/anaconda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main > return _run_code(code, main_globals, None, > File "/Users/pj/anaconda3/lib/python3.10/runpy.py", line 86, in _run_code > exec(code, run_globals) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel_launcher.py", line 17, in <module> > app.launch_new_instance() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/traitlets/config/application.py", line 992, in launch_instance > app.start() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelapp.py", line 711, in start > self.io_loop.start() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 199, in start > self.asyncio_loop.run_forever() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/base_events.py", line 603, in run_forever > self._run_once() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/base_events.py", line 1906, in _run_once > handle._run() > File "/Users/pj/anaconda3/lib/python3.10/asyncio/events.py", line 80, in _run > self._context.run(self._callback, *self._args) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue > await self.process_one() > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 499, in process_one > await dispatch(*args) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell > await result > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 729, in execute_request > reply_content = await reply_content > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/ipkernel.py", line 411, in do_execute > res = shell.run_cell( > File "/Users/pj/anaconda3/lib/python3.10/site-packages/ipykernel/zmqshell.py", line 531, in run_cell > return super().run_cell(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 2961, in run_cell > result = self._run_cell( > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3016, in _run_cell > result = runner(coro) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner > coro.send(None) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3221, in run_cell_async > has_raised = await self.run_ast_nodes(code_ast.body, cell_name, > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3400, in run_ast_nodes > if await self.run_code(code, result, async_=asy): > File "/Users/pj/anaconda3/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3460, in run_code > exec(code_obj, self.user_global_ns, self.user_ns) > File "/var/folders/wv/10kjqk217c5039dg4pbqggh00000gn/T/ipykernel_39051/3646968355.py", line 15, in <module> > 
history = t5_model.fit(tokenized_train_data, > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler > return fn(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1685, in fit > tmp_logs = self.train_function(iterator) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1284, in train_function > return step_function(self, iterator) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1268, in step_function > outputs = model.distribute_strategy.run(run_step, args=(data,)) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in run_step > outputs = model.train_step(data) > File "/var/folders/wv/10kjqk217c5039dg4pbqggh00000gn/T/ipykernel_39051/2458810848.py", line 36, in train_step > self.compiled_metrics.update_state(labels, tf.argmax(outputs.logits, -1)) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 605, in update_state > metric_obj.update_state(y_t, y_p, sample_weight=mask) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/utils/metrics_utils.py", line 77, in decorated > update_op = update_state_fn(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/keras/metrics/base_metric.py", line 140, in update_state_fn > return ag_update_state(*args, **kwargs) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 150, in update_state > return self._update(y_true, y_pred, sample_weight) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 165, in _update_multi_class_model > return self._update_confusion_matrix(y_true, y_pred, sample_weight) > File "/Users/pj/anaconda3/lib/python3.10/site-packages/tensorflow_addons/metrics/cohens_kappa.py", line 193, in _update_confusion_matrix > new_conf_mtx = tf.math.confusion_matrix( > Node: 'confusion_matrix/assert_less/Assert/AssertGuard/Assert' > 2 root error(s) found. > (0) INVALID_ARGUMENT: assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:] [x (confusion_matrix/control_dependency:0) = ] [[220 1][209...]...] [y (confusion_matrix/Cast:0) = ] [4] > [[{{node confusion_matrix/assert_less/Assert/AssertGuard/Assert}}]] > [[gradient_tape/tft5_for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/mul_1/_596]] > (1) INVALID_ARGUMENT: assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:] [x (confusion_matrix/control_dependency:0) = ] [[220 1][209...]...] [y (confusion_matrix/Cast:0) = ] [4] > [[{{node confusion_matrix/assert_less/Assert/AssertGuard/Assert}}]] > 0 successful operations. > 0 derived errors ignored. [Op:__inference_train_function_393119] ### Who can help? @Rocketknight1 @gante @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction ``` def train_step(self, inputs): input_ids = inputs['input_ids'] attention_mask = inputs['attention_mask'] labels = inputs['labels'] labels_mask = inputs['labels_mask'] with tf.GradientTape() as tape: outputs = self(input_ids=input_ids, attention_mask=attention_mask, labels=labels, decoder_attention_mask=labels_mask, training=True ) loss = self.compiled_loss(labels, outputs.logits, regularization_losses=self.losses) self.optimizer.minimize(loss, self.trainable_variables, tape=tape) self.compiled_metrics.update_state(labels, outputs.logits) ## error happens here return_metrics = {} for metric in self.metrics: result = metric.result() if isinstance(result, dict): return_metrics.update(result) else: return_metrics[metric.name] = result if "loss" in return_metrics and "loss_loss" in return_metrics: del return_metrics["loss_loss"] return return_metrics def test_step(self, inputs): input_ids = inputs['input_ids'] attention_mask = inputs['attention_mask'] labels = inputs['labels'] labels_mask = inputs['labels_mask'] outputs = self(input_ids=input_ids, attention_mask=attention_mask, labels=labels, decoder_attention_mask=labels_mask, training=False ) if not self.loss: self.loss_tracker.update_state(y_pred.loss) return_metrics = {"loss": self.loss_tracker.result()} else: return_metrics = {} self.compiled_loss(labels, outputs.logits, regularization_losses=self.losses) self.compiled_metrics.update_state(labels, outputs.logits) for metric in self.metrics: result = metric.result() if isinstance(result, dict): return_metrics.update(result) else: return_metrics[metric.name] = result if "loss" in return_metrics and "loss_loss" in return_metrics: del return_metrics["loss_loss"] return return_metrics import functools t5_model.train_step = functools.partial(train_step, t5_model) t5_model.test_step = functools.partial(test_step, t5_model) learning_rate = 0.00005 batch_size = 8 num_epochs = 2 optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=learning_rate) loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) t5_model.compile(optimizer=optimizer, loss=loss_fn, metrics=[tfa.metrics.CohenKappa(num_classes=4, weightage='quadratic', sparse_labels=True)]) history = t5_model.fit(tokenized_train_data, validation_data=tokenized_test_data, callbacks=None, batch_size=batch_size, epochs=num_epochs) ``` ### Expected behavior The model should return results instead of producing a crash.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22646/timeline
completed
null
null
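Following the explanation in the thread above, here is a minimal sketch of a pairing where `CohenKappa(num_classes=4)` is well-defined: a 4-way classification head rather than a text-generation head. The checkpoint and hyperparameters are illustrative only:

```python
import tensorflow as tf
import tensorflow_addons as tfa
from transformers import TFBertForSequenceClassification

# a sequence-classification head with exactly 4 output classes, matching the metric
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)
model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tfa.metrics.CohenKappa(num_classes=4, weightage="quadratic", sparse_labels=True)],
)
# model.fit(train_dataset, ...) now feeds the metric integer labels in [0, 4)
```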
https://api.github.com/repos/huggingface/transformers/issues/22645
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22645/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22645/comments
https://api.github.com/repos/huggingface/transformers/issues/22645/events
https://github.com/huggingface/transformers/issues/22645
1,658,324,374
I_kwDOCUB6oc5i2AWW
22,645
Implement QFormer for pretraining
{ "login": "dinhanhx", "id": 38489776, "node_id": "MDQ6VXNlcjM4NDg5Nzc2", "avatar_url": "https://avatars.githubusercontent.com/u/38489776?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dinhanhx", "html_url": "https://github.com/dinhanhx", "followers_url": "https://api.github.com/users/dinhanhx/followers", "following_url": "https://api.github.com/users/dinhanhx/following{/other_user}", "gists_url": "https://api.github.com/users/dinhanhx/gists{/gist_id}", "starred_url": "https://api.github.com/users/dinhanhx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dinhanhx/subscriptions", "organizations_url": "https://api.github.com/users/dinhanhx/orgs", "repos_url": "https://api.github.com/users/dinhanhx/repos", "events_url": "https://api.github.com/users/dinhanhx/events{/privacy}", "received_events_url": "https://api.github.com/users/dinhanhx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@NielsRogge Gentle ping because I saw your name in the docs", "cc @younesbelkada ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "as the issue is reopened, is there any plan to impl the loss for qformer?", "Hi @jianantian , I didn't had time to have a look unfortunately, if you want to try your hands on it, feel free to open a PR and we'll guide you frm there!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,689
1,689
NONE
null
### Feature request In [BLIP-2](https://arxiv.org/pdf/2301.12597.pdf), there is a pretraining stage (or stage 1) for the QFormer. ![image](https://user-images.githubusercontent.com/38489776/230536840-5b466474-0e29-4029-976b-68c966b2b499.png) An implementation of the QFormer for this stage is requested. ### Motivation In [HuggingFace's source code of BLIP-2](https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/models/blip_2/modeling_blip_2.py#L1019), I see no implementations for text inputs or for the image-text contrastive, image-grounded text generation, and image-text matching losses used in pretraining. Currently, the source code only provides for vision-language generative learning (stage 2). Therefore, an implementation would be very helpful for people who are interested in stage 1 of the QFormer (like me). ### Your contribution Unfortunately, I don't think there is a way that I could help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22645/timeline
completed
null
null
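For readers wondering what stage 1 would involve, below is a generic sketch of one of the three missing objectives, the image-text contrastive loss, simplified from the description in the BLIP-2 paper; the pooling, temperature, and tensor shapes are assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(query_embeds, text_embeds, temperature=0.07):
    """query_embeds: (batch, num_query_tokens, dim) from the Q-Former image branch;
    text_embeds: (batch, dim) text [CLS] representation; in-batch negatives."""
    q = F.normalize(query_embeds, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    # per (image, text) pair, keep the best-matching query token, as in the paper
    sim = torch.einsum("iqd,td->itq", q, t).max(dim=-1).values / temperature  # (batch, batch)
    targets = torch.arange(sim.size(0), device=sim.device)
    # symmetric cross-entropy over image->text and text->image directions
    return (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets)) / 2

loss = image_text_contrastive_loss(torch.randn(4, 32, 256), torch.randn(4, 256))
```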
https://api.github.com/repos/huggingface/transformers/issues/22644
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22644/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22644/comments
https://api.github.com/repos/huggingface/transformers/issues/22644/events
https://github.com/huggingface/transformers/pull/22644
1,658,297,153
PR_kwDOCUB6oc5N0CTp
22,644
Add support for Ascend NPU
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For example, you can run the official question answering task using Ascend NPU with below command:\r\n\r\n```\r\npython examples/pytorch/question-answering/run_qa.py\r\n --model_name_or_path bert-base-uncased \\\r\n --dataset_name squad \\\r\n --do_train \\\r\n --do_eval \\\r\n --device_id 5 \\ // The specific device to be used for single card training on Ascend NPUs.\r\n --per_device_train_batch_size 24 \\\r\n --num_train_epochs 2 \\\r\n --learning_rate 3e-5 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --save_steps 5000 \\\r\n --fp16_opt_level O2 \\\r\n --half_precision_backend apex \\\r\n --dataloader_drop_last \\\r\n --overwrite_output_dir \\\r\n --output_dir ./output \\\r\n```\r\n\r\nBelow are the output logs:\r\n```\r\n04/07/2023 01:53:22 - WARNING - __main__ - Process rank: -1, device: npu:5, n_gpu: 1distributed training: False, 16-bits training: True\r\n04/07/2023 01:53:22 - INFO - __main__ - Training/evaluation parameters TrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\nauto_find_batch_size=False,\r\nbf16=False,\r\nbf16_full_eval=False,\r\ndata_seed=None,\r\ndataloader_drop_last=True,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_bucket_cap_mb=None,\r\nddp_find_unused_parameters=None,\r\nddp_timeout=1800,\r\ndebug=[],\r\ndeepspeed=None,\r\ndevice_id=5,\r\ndisable_tqdm=False,\r\ndo_eval=True,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_delay=0,\r\neval_steps=None,\r\nevaluation_strategy=no,\r\nfp16=True,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O2,\r\nfsdp=[],\r\nfsdp_config={'fsdp_min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},\r\nfsdp_min_num_params=0,\r\nfsdp_transformer_layer_cls_to_wrap=None,\r\nfull_determinism=False,\r\ngradient_accumulation_steps=1,\r\ngradient_checkpointing=False,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nhalf_precision_backend=apex,\r\nhub_model_id=None,\r\nhub_private_repo=False,\r\nhub_strategy=every_save,\r\nhub_token=<HUB_TOKEN>,\r\nignore_data_skip=False,\r\ninclude_inputs_for_metrics=False,\r\njit_mode_eval=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=3e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=-1,\r\nlog_level=passive,\r\nlog_level_replica=warning,\r\nlog_on_each_node=True,\r\nlogging_dir=./output/runs/Apr07_01-53-21_localhost,\r\nlogging_first_step=False,\r\nlogging_nan_inf_filter=True,\r\nlogging_steps=500,\r\nlogging_strategy=steps,\r\nlr_scheduler_type=linear,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=2.0,\r\noptim=adamw_hf,\r\noptim_args=None,\r\noutput_dir=./output,\r\noverwrite_output_dir=True,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=24,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=None,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nray_scope=last,\r\nremove_unused_columns=True,\r\nreport_to=[],\r\nresume_from_checkpoint=None,\r\nrun_name=./output,\r\nsave_on_each_node=False,\r\nsave_safetensors=False,\r\nsave_steps=5000,\r\nsave_strategy=steps,\r\nsave_total_limit=None,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntf32=None,\r\ntorch_compile=False,\r\ntorch_compile_backend=None,\r\ntorch_compile_mode=None,\r\ntorchdynamo=None,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=No
ne,\r\nuse_ipex=False,\r\nuse_legacy_prediction_loop=False,\r\nuse_mps_device=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=0,\r\nweight_decay=0.0,\r\nxpu_backend=None,\r\n)\r\n04/07/2023 01:53:24 - INFO - datasets.builder - No config specified, defaulting to the single config: squad/plain_text\r\n04/07/2023 01:53:24 - INFO - datasets.info - Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/squad/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n04/07/2023 01:53:24 - INFO - datasets.builder - Overwrite dataset info from restored data version.\r\n04/07/2023 01:53:24 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n04/07/2023 01:53:24 - WARNING - datasets.builder - Found cached dataset squad (/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\r\n04/07/2023 01:53:24 - INFO - datasets.info - Loading Dataset info from /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n\r\n 0%| | 0/2 [00:00<?, ?it/s]\r\n100%|██████████| 2/2 [00:00<00:00, 196.27it/s]\r\n[INFO|configuration_utils.py:668] 2023-04-07 01:53:24,715 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/config.json\r\n[INFO|configuration_utils.py:720] 2023-04-07 01:53:24,723 >> Model config BertConfig {\r\n \"_name_or_path\": \"bert-base-uncased\",\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"classifier_dropout\": null,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.28.0.dev0\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\n[INFO|configuration_utils.py:668] 2023-04-07 01:53:25,029 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/config.json\r\n[INFO|configuration_utils.py:720] 2023-04-07 01:53:25,033 >> Model config BertConfig {\r\n \"_name_or_path\": \"bert-base-uncased\",\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"classifier_dropout\": null,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.28.0.dev0\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\n[INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file vocab.txt from cache at 
/root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/vocab.txt\r\n[INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/tokenizer.json\r\n[INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file added_tokens.json from cache at None\r\n[INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file special_tokens_map.json from cache at None\r\n[INFO|tokenization_utils_base.py:1809] 2023-04-07 01:53:25,035 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/tokenizer_config.json\r\n[INFO|configuration_utils.py:668] 2023-04-07 01:53:25,036 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/config.json\r\n[INFO|configuration_utils.py:720] 2023-04-07 01:53:25,037 >> Model config BertConfig {\r\n \"_name_or_path\": \"bert-base-uncased\",\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"classifier_dropout\": null,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"position_embedding_type\": \"absolute\",\r\n \"transformers_version\": \"4.28.0.dev0\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\n[INFO|modeling_utils.py:2478] 2023-04-07 01:53:25,108 >> loading weights file pytorch_model.bin from cache at /root/.cache/huggingface/hub/models--bert-base-uncased/snapshots/0a6aa9128b6194f4f3c4db429b6cb4891cdb421b/pytorch_model.bin\r\n[WARNING|modeling_utils.py:3118] 2023-04-07 01:53:27,033 >> Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias']\r\n- This IS expected if you are initializing BertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing BertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n[WARNING|modeling_utils.py:3130] 2023-04-07 01:53:27,034 >> Some weights of BertForQuestionAnswering were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n04/07/2023 01:53:27 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-d1f3bae3544867f1.arrow\r\n04/07/2023 01:53:27 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-7bcc744f960ea416.arrow\r\n[INFO|trainer.py:621] 2023-04-07 01:53:28,908 >> Using apex half precision backend\r\n[INFO|trainer.py:1766] 2023-04-07 01:53:30,199 >> ***** Running training *****\r\n[INFO|trainer.py:1767] 2023-04-07 01:53:30,199 >> Num examples = 88,524\r\n[INFO|trainer.py:1768] 2023-04-07 01:53:30,199 >> Num Epochs = 2\r\n[INFO|trainer.py:1769] 2023-04-07 01:53:30,200 >> Instantaneous batch size per device = 24\r\n[INFO|trainer.py:1770] 2023-04-07 01:53:30,200 >> Total train batch size (w. parallel, distributed & accumulation) = 24\r\n[INFO|trainer.py:1771] 2023-04-07 01:53:30,200 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1772] 2023-04-07 01:53:30,200 >> Total optimization steps = 7,376\r\n[INFO|trainer.py:1773] 2023-04-07 01:53:30,203 >> Number of trainable parameters = 108,893,186\r\nSelected optimization level O2: FP16 training with FP32 batchnorm and FP32 master weights.\r\n\r\nDefaults for this optimization level are:\r\nenabled : True\r\nopt_level : O2\r\ncast_model_type : torch.float16\r\npatch_torch_functions : False\r\nkeep_batchnorm_fp32 : True\r\nmaster_weights : True\r\nloss_scale : dynamic\r\ncombine_grad : None\r\ncombine_ddp : None\r\nddp_replica_count : 4\r\ncheck_combined_tensors : None\r\nuser_cast_preferred : None\r\nProcessing user overrides (additional kwargs that are not None)...\r\nAfter processing overrides, optimization options are:\r\nenabled : True\r\nopt_level : O2\r\ncast_model_type : torch.float16\r\npatch_torch_functions : False\r\nkeep_batchnorm_fp32 : True\r\nmaster_weights : True\r\nloss_scale : dynamic\r\ncombine_grad : None\r\ncombine_ddp : None\r\nddp_replica_count : 4\r\ncheck_combined_tensors : None\r\nuser_cast_preferred : None\r\n\r\n100%|██████████| 7376/7376 [37:49<00:00, 3.25it/s]\r\n\r\n[INFO|trainer.py:2865] 2023-04-07 02:31:19,790 >> Saving model checkpoint to ./output\r\n[INFO|configuration_utils.py:457] 2023-04-07 02:31:19,793 >> Configuration saved in ./output/config.json\r\n[INFO|modeling_utils.py:1839] 2023-04-07 02:31:21,077 >> Model weights saved in ./output/pytorch_model.bin\r\n[INFO|tokenization_utils_base.py:2170] 2023-04-07 02:31:21,079 >> tokenizer config file saved in ./output/tokenizer_config.json\r\n[INFO|tokenization_utils_base.py:2177] 2023-04-07 02:31:21,080 >> Special tokens file saved in ./output/special_tokens_map.json\r\nGradient overflow. 
Skipping step, loss scaler 0 reducing loss scale to 32768.0\r\nGradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0\r\n{'loss': 2.2345, 'learning_rate': 2.7966377440347073e-05, 'epoch': 0.14}\r\nGradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0\r\n{'loss': 1.3912, 'learning_rate': 2.5932754880694143e-05, 'epoch': 0.27}\r\nGradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0\r\n{'loss': 1.2321, 'learning_rate': 2.3899132321041215e-05, 'epoch': 0.41}\r\n{'loss': 1.1749, 'learning_rate': 2.1865509761388288e-05, 'epoch': 0.54}\r\n{'loss': 1.0975, 'learning_rate': 1.983188720173536e-05, 'epoch': 0.68}\r\n{'loss': 1.0988, 'learning_rate': 1.779826464208243e-05, 'epoch': 0.81}\r\n{'loss': 1.0514, 'learning_rate': 1.5764642082429502e-05, 'epoch': 0.95}\r\n{'loss': 0.8971, 'learning_rate': 1.3731019522776571e-05, 'epoch': 1.08}\r\n{'loss': 0.7757, 'learning_rate': 1.1697396963123646e-05, 'epoch': 1.22}\r\n{'loss': 0.7823, 'learning_rate': 9.663774403470717e-06, 'epoch': 1.36}\r\n{'loss': 0.7851, 'learning_rate': 7.630151843817788e-06, 'epoch': 1.49}\r\n{'loss': 0.7617, 'learning_rate': 5.5965292841648585e-06, 'epoch': 1.63}\r\nGradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0\r\n{'loss': 0.7459, 'learning_rate': 3.5629067245119307e-06, 'epoch': 1.76}\r\n{'loss': 0.7529, 'learning_rate': 1.529284164859002e-06, 'epoch': 1.9}\r\n{'train_runtime': 2269.5834, 'train_samples_per_second': 78.009, 'train_steps_per_second': 3.25, 'train_loss': 1.0397178831948635, 'epoch': 2.0}\r\n***** train metrics *****\r\n epoch = 2.0\r\n train_loss = 1.0397\r\n train_runtime = 0:37:49.58\r\n train_samples = 88524\r\n train_samples_per_second = 78.009\r\n train_steps_per_second = 3.25\r\n04/07/2023 02:31:21 - INFO - __main__ - *** Evaluate ***\r\n[INFO|trainer.py:763] 2023-04-07 02:31:21,152 >> The following columns in the evaluation set don't have a corresponding argument in `BertForQuestionAnswering.forward` and have been ignored: example_id, offset_mapping. If example_id, offset_mapping are not expected by `BertForQuestionAnswering.forward`, you can safely ignore this message.\r\n[INFO|trainer.py:3126] 2023-04-07 02:31:21,156 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:3128] 2023-04-07 02:31:21,157 >> Num examples = 10784\r\n[INFO|trainer.py:3131] 2023-04-07 02:31:21,157 >> Batch size = 8\r\n\r\n100%|██████████| 1348/1348 [01:07<00:00, 26.69it/s]04/07/2023 02:32:40 - INFO - utils_qa - Post-processing 10570 example predictions split into 10784 features.\r\n\r\n100%|██████████| 10570/10570 [01:09<00:00, 152.04it/s]04/07/2023 02:33:50 - INFO - utils_qa - Saving predictions to ./output/eval_predictions.json.\r\n04/07/2023 02:33:50 - INFO - utils_qa - Saving nbest_preds to ./output/eval_nbest_predictions.json.\r\n\r\n100%|██████████| 1348/1348 [02:40<00:00, 8.37it/s]\r\n[INFO|modelcard.py:451] 2023-04-07 02:34:02,898 >> Dropping the following result as it does not have all the necessary fields:\r\n{'task': {'name': 'Question Answering', 'type': 'question-answering'}, 'dataset': {'name': 'squad', 'type': 'squad', 'config': 'plain_text', 'split': 'validation', 'args': 'plain_text'}}\r\n***** eval metrics *****\r\n epoch = 2.0\r\n eval_exact_match = 80.0946\r\n eval_f1 = 87.853\r\n eval_runtime = 0:00:51.51\r\n eval_samples = 10784\r\n eval_samples_per_second = 209.317\r\n eval_steps_per_second = 26.165\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22644). 
All of your documentation changes will be reflected on that endpoint.", "The test case needs to be executed on the Ascend NPU, and the results are shown below:\r\n![image](https://user-images.githubusercontent.com/28150734/230536813-d8457a5d-c73c-4165-9344-9433bc154811.png)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@sgugger Now, if I want to use an Ascend NPU with Accelerate for distributed training, how can I start, and are there any examples for reference?", "The PR hasn't been moved there to add support for NPUs, so for now it's not possible.", "@sgugger Currently, I can use transformers on NPU based on this PR, but I find that Accelerate cannot be used. What should I do if I want to use Accelerate on NPU? Is there any reference?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,694
1,687
CONTRIBUTOR
null
# What does this PR do? This PR enables users to leverage the Ascend NPU for training and inference of 🤗 Transformers models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. #22600 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22644/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22644/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22644", "html_url": "https://github.com/huggingface/transformers/pull/22644", "diff_url": "https://github.com/huggingface/transformers/pull/22644.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22644.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22643
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22643/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22643/comments
https://api.github.com/repos/huggingface/transformers/issues/22643/events
https://github.com/huggingface/transformers/pull/22643
1,658,285,659
PR_kwDOCUB6oc5N0ADK
22,643
use __func__ to check can_generate
{ "login": "xin3he", "id": 83260933, "node_id": "MDQ6VXNlcjgzMjYwOTMz", "avatar_url": "https://avatars.githubusercontent.com/u/83260933?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xin3he", "html_url": "https://github.com/xin3he", "followers_url": "https://api.github.com/users/xin3he/followers", "following_url": "https://api.github.com/users/xin3he/following{/other_user}", "gists_url": "https://api.github.com/users/xin3he/gists{/gist_id}", "starred_url": "https://api.github.com/users/xin3he/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xin3he/subscriptions", "organizations_url": "https://api.github.com/users/xin3he/orgs", "repos_url": "https://api.github.com/users/xin3he/repos", "events_url": "https://api.github.com/users/xin3he/events{/privacy}", "received_events_url": "https://api.github.com/users/xin3he/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Could you share a script that shows any improvement? The model is a reference type, so what will be shared is a pointer to that class.\r\n\r\ncc @gante ", "```\r\nimport transformers\r\nimport time\r\n\r\nmodel_name = 'facebook/bart-large-cnn'\r\nmodel = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name)\r\n\r\nstart = time.time()\r\n#print(model.can_generate())\r\nprint(model._validate_model_class())\r\nprint('time1:', time.time() - start)\r\n\r\nstart = time.time()\r\nif \"GenerationMixin\" in str(model.prepare_inputs_for_generation.__func__):\r\n pass\r\nprint('time2:', time.time() - start)\r\n\r\nstart = time.time()\r\nif \"GenerationMixin\" in str(model.prepare_inputs_for_generation):\r\n pass\r\nprint('time3:', time.time() - start)\r\n\r\n\r\n\"\"\"\r\nResult from my side:\r\nTrue\r\ntime1: 0.001619100570678711\r\ntime2: 5.245208740234375e-06\r\ntime3: 0.0012629032135009766\r\n\"\"\"\r\n\r\n\r\nfrom optimum.intel.neural_compressor.quantization import IncQuantizedModelForSeq2SeqLM\r\nmodel_name = 'Intel/bart-large-cnn-int8-dynamic'\r\nmodel = IncQuantizedModelForSeq2SeqLM.from_pretrained(model_name)\r\n\r\nstart = time.time()\r\nprint(model._validate_model_class())\r\nprint('time1:', time.time() - start)\r\n\r\nstart = time.time()\r\nif \"GenerationMixin\" in str(model.prepare_inputs_for_generation.__func__):\r\n pass\r\nprint('time2:', time.time() - start)\r\n\r\nstart = time.time()\r\nif \"GenerationMixin\" in str(model.prepare_inputs_for_generation):\r\n pass\r\nprint('time3:', time.time() - start)\r\n\r\n\r\n\"\"\"\r\nResult from my side:\r\nTrue\r\ntime1: 0.5971765518188477\r\ntime2: 7.3909759521484375e-06\r\ntime3: 0.5961868762969971\r\n\"\"\"\r\n```" ]
1,680
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? Fixes # (the issue description is as follows) When `model.generate()` is used, it calls `self._validate_model_class()` to check whether the model can generate. If we use `str(self.prepare_inputs_for_generation)`, it dumps the entire model architecture, which consumes resources and is unnecessary. Using `str(self.prepare_inputs_for_generation.__func__)` is a better, equivalent replacement. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22643/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22643/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22643", "html_url": "https://github.com/huggingface/transformers/pull/22643", "diff_url": "https://github.com/huggingface/transformers/pull/22643.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22643.patch", "merged_at": 1681132012000 }
https://api.github.com/repos/huggingface/transformers/issues/22642
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22642/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22642/comments
https://api.github.com/repos/huggingface/transformers/issues/22642/events
https://github.com/huggingface/transformers/issues/22642
1,658,266,947
I_kwDOCUB6oc5i1yVD
22,642
New LlamaTokenizer compat issues
{ "login": "Qubitium", "id": 417764, "node_id": "MDQ6VXNlcjQxNzc2NA==", "avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qubitium", "html_url": "https://github.com/Qubitium", "followers_url": "https://api.github.com/users/Qubitium/followers", "following_url": "https://api.github.com/users/Qubitium/following{/other_user}", "gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}", "starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions", "organizations_url": "https://api.github.com/users/Qubitium/orgs", "repos_url": "https://api.github.com/users/Qubitium/repos", "events_url": "https://api.github.com/users/Qubitium/events{/privacy}", "received_events_url": "https://api.github.com/users/Qubitium/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You shouldn't use this checkpoint. They are ignoring all PRs to update the weights/configs/tokenizers to the latest fixes in Transformers (the repo was generated in the middle of the PR adding Llama so was never compatible with Hugging Face). You should indeed re-run the conversion script to be up to date, or use other checkpoints. For instance, I've found [this one](https://huggingface.co/huggyllama/llama-7b) to be fully compatible with Transformers.\r\n\r\nSince that repo has never worked with Transformers, I cannot speak for models fine-tuned using it sadly. The tokenizer implementation has been fixed to match the original tokenizer of the researchers.", "@sgugger The new LlamaTokenizerFast (the current default) is taking tremendously amount of time to load versus old tokenizer. Cpu is pegged at 100% and even on 5900x will take like 90s to load. Is this normal?", "It won't happen if you have the fast tokenizer file. The repo I linked in my comment above has it. It's because the conversion from slow to fast tokenizer is very slow for LLaMA. As a workaround, you can also use `LlamaTokenizer` instead of `AutoTokenizer` (which will force using the slow tokenizer).", "Sorry to comment on a old closed issue, but this pops out as first result when ddg'ing for \"LlamaTokenizer extremely slow\", so there are probably many people who will get here.\r\n\r\nIf I understand correctly, this happens when the tokenizer is in an old format, and it happens because the tokenizer is converted to the new format each time we load it, is that correct? If so, is there any way to persist the converted tokenizer, to use that the next time, instead of converting it again and again?", "You can save it, then reload it from the save you did. But note that if you follow the doc and use the conversion script on the weights obtained by Llama, you will get the fast tokenizer file created for you automatically.", "Thanks.\r\n\r\nFor anyone else reaching this page and wondering how to do it, the method to use is `save_pretrained`, like this:\r\n\r\n```\r\ntokenizer.save_pretrained(\"./models/tokenizer/\")\r\n````\r\n\r\nI can confirm loading this new version of the tokenizer fixes the slow load for me." ]
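The workaround pieces scattered through this thread, gathered into one hedged sketch (the checkpoint id and save path are the ones mentioned above and are illustrative):

```python
# The first load pays the slow->fast conversion if the repo only ships the slow
# (sentencepiece) tokenizer; saving writes tokenizer.json so later loads skip it.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # slow the first time
tokenizer.save_pretrained("./models/tokenizer/")                  # persists the fast tokenizer file
tokenizer = AutoTokenizer.from_pretrained("./models/tokenizer/")  # fast from now on
```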
1,680
1,683
1,680
NONE
null
### System Info Cuda 11.8 Latest git transformers tokenizers 0.13.3 pytorch 2.0 python 3.9 ### Reproduction Run trained HF llama models based on https://huggingface.co/decapoda-research/llama-7b-hf without error. Using latest head transformer+tokenizers I am seeing: 1. Very long loading time for LlamaTokenizer, with CPU pegged at 100% 2. Incorrect tokenization of trained Llama (7B tested) models that have the Lora adapter applied under old transformer/tokenizer code (from last week). I see that there are several commits regarding LlamaTokenizer, but what is the correct usage now, and is it compatible with old models trained on the "old" LlamaTokenizer? For example, ```https://huggingface.co/decapoda-research/llama-7b-hf``` tokenizer supposedly has bad default values vs the original, and the new commits are supposed to resolve this. Does this mean we need to regenerate a HF-compatible 7B from the original META llama using the latest transformers and retrain all over again? I just need clarification on how I can move forward using the latest transformer/tokenizer code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22642/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22641
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22641/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22641/comments
https://api.github.com/repos/huggingface/transformers/issues/22641/events
https://github.com/huggingface/transformers/issues/22641
1,658,207,733
I_kwDOCUB6oc5i1j31
22,641
Compute Accuracy in clip-roberta
{ "login": "skaulintel", "id": 75697181, "node_id": "MDQ6VXNlcjc1Njk3MTgx", "avatar_url": "https://avatars.githubusercontent.com/u/75697181?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skaulintel", "html_url": "https://github.com/skaulintel", "followers_url": "https://api.github.com/users/skaulintel/followers", "following_url": "https://api.github.com/users/skaulintel/following{/other_user}", "gists_url": "https://api.github.com/users/skaulintel/gists{/gist_id}", "starred_url": "https://api.github.com/users/skaulintel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skaulintel/subscriptions", "organizations_url": "https://api.github.com/users/skaulintel/orgs", "repos_url": "https://api.github.com/users/skaulintel/repos", "events_url": "https://api.github.com/users/skaulintel/events{/privacy}", "received_events_url": "https://api.github.com/users/skaulintel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @skaulintel! According to what I see in the traceback, this issue is specific to Optimum Habana. Could you move this issue [there](https://github.com/huggingface/optimum-habana/issues) please? And then I'll follow up." ]
1,680
1,680
1,680
NONE
null
### Feature request Is it possible to include an accuracy metric when training clip-roberta? ### Motivation We would like to have something other than loss to track during training. ### Your contribution I tried creating a dummy compute_metrics function to pass to GaudiTrainer, like this: `metric = evaluate.load("accuracy")` followed by `def compute_metrics(p): return 1`, and I get the following error: Traceback (most recent call last): File "run_clip.py", line 553, in <module> main() File "run_clip.py", line 532, in main metrics = trainer.evaluate() File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2932, in evaluate output = eval_loop( File "/usr/local/lib/python3.8/dist-packages/optimum/habana/transformers/trainer.py", line 1074, in evaluation_loop logits_dtype = get_dtype(logits) File "/usr/local/lib/python3.8/dist-packages/optimum/habana/transformers/trainer_utils.py", line 43, in get_dtype return [get_dtype(logits_tensor) for logits_tensor in logits] File "/usr/local/lib/python3.8/dist-packages/optimum/habana/transformers/trainer_utils.py", line 43, in <listcomp> return [get_dtype(logits_tensor) for logits_tensor in logits] File "/usr/local/lib/python3.8/dist-packages/optimum/habana/transformers/trainer_utils.py", line 45, in get_dtype raise TypeError(f"logits should be of type torch.Tensor or tuple, got {type(logits)} which is not supported") TypeError: logits should be of type torch.Tensor or tuple, got <class 'transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions'> which is not supported
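For reference, a conventionally shaped `compute_metrics` sketch. This is hedged: it follows the stock `Trainer` contract and will not by itself fix the Optimum Habana type error in the traceback, which belongs in that repo per the comment above:

```python
import evaluate
import numpy as np

metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    if isinstance(logits, tuple):  # some models return extra tensors besides the logits
        logits = logits[0]
    preds = np.argmax(logits, axis=-1)
    return metric.compute(predictions=preds, references=labels)
```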
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22641/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22640
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22640/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22640/comments
https://api.github.com/repos/huggingface/transformers/issues/22640/events
https://github.com/huggingface/transformers/issues/22640
1,658,146,812
I_kwDOCUB6oc5i1U_8
22,640
Seq2Seq Trainer for QA: No useful metric returned for evaluation
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You need to pass along `--predict_with_generate` to use generate in the evaluation, and then get the metrics.", "Thanks @sgugger ! I used that option, but after evaluation it outputs:\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"run_seq2seq_qa.py\", line 724, in <module>\r\n main()\r\n File \"run_seq2seq_qa.py\", line 683, in main\r\n metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix=\"eval\")\r\n File \"/home/ubuntu/transformers/examples/pytorch/question-answering/trainer_seq2seq_qa.py\", line 92, in evaluate\r\n eval_preds = self.post_process_function(eval_examples, eval_dataset, output)\r\n File \"run_seq2seq_qa.py\", line 617, in post_processing_function\r\n decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)\r\n File \"/home/ubuntu/transformers/src/transformers/tokenization_utils_base.py\", line 3445, in batch_decode\r\n return [\r\n File \"/home/ubuntu/transformers/src/transformers/tokenization_utils_base.py\", line 3446, in <listcomp>\r\n self.decode(\r\n File \"/home/ubuntu/transformers/src/transformers/tokenization_utils_base.py\", line 3485, in decode\r\n return self._decode(\r\n File \"/home/ubuntu/transformers/src/transformers/tokenization_utils_fast.py\", line 549, in _decode\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\nOverflowError: out of range integral type conversion attempted\r\n```", "Ah, I've just found this issue for that problem: https://github.com/huggingface/transformers/issues/22634\r\n\r\nSo I'll close here!" ]
1,680
1,680
1,680
COLLABORATOR
null
### System Info Latest Transformers version from main. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, unfortunately, the `run_seq2seq_qa.py` script in the PyTorch examples folder does not output useful evaluation metrics: ```bash python run_seq2seq_qa.py \ --model_name_or_path google/mt5-small \ --dataset_name mlqa \ --dataset_config mlqa-translate-train.de \ --context_column context \ --question_column question \ --answer_column answers \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 1 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 5000 \ --max_steps 50 \ --output_dir ./mt5-small ``` I'm running this command and the final output looks like: ``` ***** eval metrics ***** epoch = 0.01 eval_loss = 15.7694 eval_runtime = 0:00:48.60 eval_samples = 10584 eval_samples_per_second = 217.741 eval_steps_per_second = 27.218 ``` So I'm missing EM and F1-Score: why are they no longer there? This also happens when fine-tuning on SQuAD 2.0. ### Expected behavior Useful evaluation metrics: EM and F1-Score should be returned.
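Per the resolution in the comments above, the metrics only appear when generation is enabled during evaluation. A sketch of the amended invocation, keeping the key flags from the report (the remaining hyperparameters are unchanged):

```bash
python run_seq2seq_qa.py \
  --model_name_or_path google/mt5-small \
  --dataset_name mlqa \
  --dataset_config mlqa-translate-train.de \
  --context_column context \
  --question_column question \
  --answer_column answers \
  --do_train \
  --do_eval \
  --predict_with_generate \
  --output_dir ./mt5-small
```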
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22640/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22639
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22639/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22639/comments
https://api.github.com/repos/huggingface/transformers/issues/22639/events
https://github.com/huggingface/transformers/issues/22639
1,658,076,330
I_kwDOCUB6oc5i1Dyq
22,639
Have a beam search sub batch size to limit memory use
{ "login": "JulesGM", "id": 3231217, "node_id": "MDQ6VXNlcjMyMzEyMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JulesGM", "html_url": "https://github.com/JulesGM", "followers_url": "https://api.github.com/users/JulesGM/followers", "following_url": "https://api.github.com/users/JulesGM/following{/other_user}", "gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}", "starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions", "organizations_url": "https://api.github.com/users/JulesGM/orgs", "repos_url": "https://api.github.com/users/JulesGM/repos", "events_url": "https://api.github.com/users/JulesGM/events{/privacy}", "received_events_url": "https://api.github.com/users/JulesGM/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "@sgugger ", "cc @gante ", "Hey @JulesGM! Thank you for suggesting sub-batch size beam search. It is somewhat related to #22340 (add some option to stop spending resources on beams that have already been finished).\r\n\r\nCurrently, we are unable to fulfill all requests, and I haven't seen demand for this particular feature. As such, I'll offer my standard pact: if/when this comment reaches 10 reactions, I'll put the feature on my todo list :) (and whoever does the 10th react, plz ping me)\r\n\r\nAlternatively, if you'd like to implement this feature yourself, I'd be happy to guide you 🙌 ", "I was thinking something like taking https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L2833\r\n\r\n```python\r\noutputs = self(\r\n **model_inputs,\r\n return_dict=True,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n)\r\n```\r\nInto, fairly naively splitting into sub-batches then concatenating should be fine:\r\n\r\n```python\r\n\r\n# Make it more explicit that all kwargs are shared by the two forward modes\r\n# & make the code shorter\r\nforward_kwargs = dict( \r\n return_dict=True,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n)\r\n\r\nif beam_search_batch_size:\r\n bsbs = beam_search_batch_size # Repeated a bunch of times, makes code too long\r\n outputs_per_sub_batch = []\r\n\r\n for i in range(0, seq_len, bsbs):\r\n model_inputs_sub_batch = {\r\n k: v[i * bsbs: (i + 1) * bsbs] \r\n for k, v in model_inputs\r\n }\r\n sub_batch_outputs = self(\r\n **model_inputs_sub_batch,\r\n **forward_kwargs,\r\n )\r\n outputs_per_sub_batch.append(sub_batch_outputs)\r\n\r\n keys = outputs_per_sub_batch[0].keys()\r\n outputs = {\r\n k: torch.cat([sub_batch[k] for sub_batch in outputs_per_sub_batch] f, dim=0) \r\n for k in keys\r\n }\r\n\r\nelse: # Unchanged original behavior\r\n outputs = self(\r\n **model_inputs,\r\n **forward_kwargs,\r\n )\r\n```\r\n\r\nThis could likely be moved to a new function called \"forward on beams\" that could be called on `model_inputs` in all beam search variants.", "I guess the one thing I'm not confident about is how to add the \"beam_search_sub_batch\" parameter to the `generate` call in a way that respects huggingface's .. interface objectives", "Ideally, no parameterization would be needed (e.g. split the beam search batch size in two every time it hits a memory exception). But a try/except on the main code path would be ugly 🤔 \r\n\r\nI am working on a plan to increase the flexibility of `.generate()`, hopefully this sort of features will become trivial to integrate 🙏 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
### Feature request Currently, beam search effectively multiplies the batch size memory-wise and compute-wise by the beam size. If you have a batch size of 1 and a beam size of 8, `model.forward` sees 8 samples at once. This becomes an unnecessary problem in situations where one needs that beam size but is only able to fit a smaller number of samples into memory. That's why I suggest that a "beam search sub-batch" size be added to limit the number of samples (including beams) that are seen at once by the model. E.g., if the beam search sub-batch size is 4, then even if the main batch size is 1 and the beam size is 8, `model.forward` would only be called on 4 samples at once. Same if the main batch size is 2 and the beam size is 8. ### Motivation With the size of LLMs & their popularity, people are often stretching the capacities of their hardware, & in some cases (like majority voting for chain-of-thought), need beam search with some beam size. If that beam size can't fit into memory all at once, they are currently doomed, which is unnecessary, as the beams could be computed in smaller chunks.
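A corrected, self-contained version of the sub-batching sketch from the comments above (the in-thread draft slices with a double-multiplied index, omits `.items()`, and has a stray token in the `torch.cat` call). The names `forward_in_sub_batches` and `sub_batch_size` are invented here, and the naive `torch.cat` only holds for outputs whose first dimension is the batch (e.g. logits), not for nested caches like `past_key_values`:

```python
import torch

def forward_in_sub_batches(model, model_inputs, sub_batch_size, **forward_kwargs):
    # Leading dimension of every input tensor is batch_size * num_beams.
    batch_dim = next(iter(model_inputs.values())).shape[0]
    chunk_outputs = []
    for start in range(0, batch_dim, sub_batch_size):
        # Slice each input tensor to the current sub-batch and run the forward pass.
        chunk = {k: v[start : start + sub_batch_size] for k, v in model_inputs.items()}
        chunk_outputs.append(model(**chunk, **forward_kwargs))
    # Re-assemble per-key along the batch dimension.
    keys = chunk_outputs[0].keys()
    return {k: torch.cat([out[k] for out in chunk_outputs], dim=0) for k in keys}
```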
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22639/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22639/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22638
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22638/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22638/comments
https://api.github.com/repos/huggingface/transformers/issues/22638/events
https://github.com/huggingface/transformers/issues/22638
1,657,943,477
I_kwDOCUB6oc5i0jW1
22,638
When using transformers.DataCollatorWithPadding normally, always get annoying warning
{ "login": "JulesGM", "id": 3231217, "node_id": "MDQ6VXNlcjMyMzEyMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JulesGM", "html_url": "https://github.com/JulesGM", "followers_url": "https://api.github.com/users/JulesGM/followers", "following_url": "https://api.github.com/users/JulesGM/following{/other_user}", "gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}", "starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions", "organizations_url": "https://api.github.com/users/JulesGM/orgs", "repos_url": "https://api.github.com/users/JulesGM/repos", "events_url": "https://api.github.com/users/JulesGM/events{/privacy}", "received_events_url": "https://api.github.com/users/JulesGM/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Sorry this slipped through the cracks. If you have a way to disable the warning for the data collator, I'm happy to look at a PR!", "The simplest would be to add an argument to `tokenizer.pad` that defaults to `False` that is something like `supress_warning_slow: bool = False`, & when it's passed as `True`, the warning is not emitted. ", "This warning is genreated from [here](https://github.com/huggingface/transformers/blob/003a0cf8cc4d78e47ef9debfb1e93a5c1197ca9a/src/transformers/tokenization_utils_base.py#L2949), an easy way to turn off is\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(...)\r\ntokenizer.deprecation_warnings[\"Asking-to-pad-a-fast-tokenizer\"] = True\r\n```", "Oh I didn't know that! Do you want to make a PR to add this in `DataCollatorWithPadding`?", "What is the recommendation? Should we pre-pad together with tokenization or leave that to the collator, when using a fast tokenizer? Has anyone done any performance comparison?", "https://github.com/huggingface/transformers/pull/23742", "> \r\n\r\nIf you need to tokenize the data before training, just turn off this warning, if you can do tokenization during training(for small datasets or only train once), you can use the fast tokenizer in a custom data collector function without calling ```tokenizer.pad```.", "It's weird to me that when using the official Transformers data collators, the warnings are be emitted. I feel like they shouldn't. That is the point of this issue.", "I feel like the fact that one decides to use the collators means that the warnings should indeed be disabled. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello, what is the status of this fix ? Was it merged ? I look for it but I still get the warning message", "I just didn't have the time to do it, had to finish my thesis & do\r\ninterviews.\r\nYou're welcome to try to figure this out.\r\n\r\nOn Tue, Dec 19, 2023, 4:50 PM Daniel Bustamante Ospina <\r\n***@***.***> wrote:\r\n\r\n> Hello, what is the status of this fix ? Was it merged ? I look for it but\r\n> I still get the warning message\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/22638#issuecomment-1863521135>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AAYU34OTHDOTBIECC6S4HVLYKID2RAVCNFSM6AAAAAAWVZYVB2VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQNRTGUZDCMJTGU>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Thanks for the reply @JulesGM , I think that I could take your advances (indeed you already solved the issue) and give it a try :)" ]
1,680
1,703
null
NONE
null
### Explanation So systematically, people who use `transformers.DataCollatorWithPadding` get ```python You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. ``` when the collator is called, because it's called on pre-tokenized samples. Now, I feel like it's still faster to pre-tokenize once & then dynamically pad batches (because samples can be shuffled) than to tokenize live, & tokenizing to a fixed `max_length` is just very sub-optimal from a compute and memory standpoint. So, I feel like that warning should be disabled, at least when using the collator & using it as intended. @ArthurZucker @sgugger
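A runnable sketch of the suppression trick that surfaced in the comments above (the checkpoint id is illustrative; `tokenizer.pad()`, which the collator calls, skips the warning once it is marked as already emitted):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("t5-base")
# Mark the warning as already shown so tokenizer.pad() stays quiet.
tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
collator = DataCollatorWithPadding(tokenizer=tokenizer)
```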
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22638/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22638/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/22637
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22637/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22637/comments
https://api.github.com/repos/huggingface/transformers/issues/22637/events
https://github.com/huggingface/transformers/pull/22637
1,657,919,949
PR_kwDOCUB6oc5Ny1FP
22,637
Update tiny model summary file for recent models
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? I just created tiny models for recent models. This PR updates the tiny model summary file to use them. This includes: - TFBartForSequenceClassification - TFBlipForConditionalGeneration - ClapModel - MegaModel - NllbMoeModel - TFVisionTextDualEncoderModel - WhisperForAudioClassification A few tiny changes are included too.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22637/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22637", "html_url": "https://github.com/huggingface/transformers/pull/22637", "diff_url": "https://github.com/huggingface/transformers/pull/22637.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22637.patch", "merged_at": 1680814380000 }
https://api.github.com/repos/huggingface/transformers/issues/22636
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22636/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22636/comments
https://api.github.com/repos/huggingface/transformers/issues/22636/events
https://github.com/huggingface/transformers/issues/22636
1,657,890,999
I_kwDOCUB6oc5i0Wi3
22,636
Both `max_new_tokens` and `max_length` seem to have been set.
{ "login": "HeekangPark", "id": 16741548, "node_id": "MDQ6VXNlcjE2NzQxNTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/16741548?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HeekangPark", "html_url": "https://github.com/HeekangPark", "followers_url": "https://api.github.com/users/HeekangPark/followers", "following_url": "https://api.github.com/users/HeekangPark/following{/other_user}", "gists_url": "https://api.github.com/users/HeekangPark/gists{/gist_id}", "starred_url": "https://api.github.com/users/HeekangPark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HeekangPark/subscriptions", "organizations_url": "https://api.github.com/users/HeekangPark/orgs", "repos_url": "https://api.github.com/users/HeekangPark/repos", "events_url": "https://api.github.com/users/HeekangPark/events{/privacy}", "received_events_url": "https://api.github.com/users/HeekangPark/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[ "cc @gante ", "@HeekangPark yeah, pipelines + new generation arguments have yet to be revisited. Thank you for raising the issue! \r\n\r\nI took note of your suggestions. However, since the output is not broken, I may take a while to actually fix it :)", "@QuentinAmbard @gante , could you please tell how to fix this bug? I still see \"logging error message\".", "@IamExperimenting @HeekangPark The warning is no longer present in the text generation pipeline, if you install from `main` :)" ]
1,680
1,683
1,683
NONE
null
### System Info - `transformers` version: 4.27.4 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm trying to generate some text with `text-generation` pipeline. ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, GenerationConfig device = "cuda:0" model_name = "facebook/opt-1.3b" # tokenizer, model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, low_cpu_mem_usage=True, pad_token_id=tokenizer.eos_token_id ).to(device) # pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=device) # generate text text = "Hello " result = pipe( text, generation_config=GenerationConfig( max_new_tokens=70, return_full_text=False, num_beams=1, do_sample=False ) ) # print result print(result) ``` When I execute the code above, it shows error/warning messages like below. ```text --- Logging error --- Traceback (most recent call last): File "/python-path/python3.9/logging/__init__.py", line 1083, in emit msg = self.format(record) File "/python-path/python3.9/logging/__init__.py", line 927, in format return fmt.format(record) File "/python-path/python3.9/logging/__init__.py", line 663, in format record.message = record.getMessage() File "/python-path/python3.9/logging/__init__.py", line 367, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting Call stack: File "/python-path/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/python-path/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/python-path/python3.9/site-packages/ipykernel_launcher.py", line 17, in <module> app.launch_new_instance() File "/python-path/python3.9/site-packages/traitlets/config/application.py", line 1043, in launch_instance app.start() File "/python-pathpython3.9/site-packages/ipykernel/kernelapp.py", line 725, in start self.io_loop.start() File "/python-path/python3.9/site-packages/tornado/platform/asyncio.py", line 215, in start self.asyncio_loop.run_forever() File "/python-path/python3.9/asyncio/base_events.py", line 601, in run_forever self._run_once() File "/python-path/python3.9/asyncio/base_events.py", line 1905, in _run_once handle._run() File "/python-path/python3.9/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/python-path/python3.9/site-packages/ipykernel/kernelbase.py", line 513, in dispatch_queue await self.process_one() File "/python-path/python3.9/site-packages/ipykernel/kernelbase.py", line 502, in process_one await dispatch(*args) File "/python-path/python3.9/site-packages/ipykernel/kernelbase.py", line 409, in dispatch_shell await result File "/python-path/python3.9/site-packages/ipykernel/kernelbase.py", line 729, in execute_request reply_content = await reply_content File 
"/python-path/python3.9/site-packages/ipykernel/ipkernel.py", line 422, in do_execute res = shell.run_cell( File "/python-path/python3.9/site-packages/ipykernel/zmqshell.py", line 540, in run_cell return super().run_cell(*args, **kwargs) File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3006, in run_cell result = self._run_cell( File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3061, in _run_cell result = runner(coro) File "/python-path/python3.9/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner coro.send(None) File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3266, in run_cell_async has_raised = await self.run_ast_nodes(code_ast.body, cell_name, File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3445, in run_ast_nodes if await self.run_code(code, result, async_=asy): File "/python-path/python3.9/site-packages/IPython/core/interactiveshell.py", line 3505, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "/tmp/ipykernel_872573/1980627959.py", line 19, in <module> result = pipe( File "/python-path/python3.9/site-packages/transformers/pipelines/text_generation.py", line 209, in __call__ return super().__call__(text_inputs, **kwargs) File "/python-path/python3.9/site-packages/transformers/pipelines/base.py", line 1109, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/python-path/python3.9/site-packages/transformers/pipelines/base.py", line 1116, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/python-path/python3.9/site-packages/transformers/pipelines/base.py", line 1015, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/python-path/python3.9/site-packages/transformers/pipelines/text_generation.py", line 251, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "/python-path/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/python-path/python3.9/site-packages/transformers/generation/utils.py", line 1297, in generate logger.warn( Message: 'Both `max_new_tokens` (=70) and `max_length`(=73) seem to have been set. `max_new_tokens` will take precedence. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)' Arguments: (<class 'UserWarning'>,) ``` ### Expected behavior 1. It seems like that `transformers` gives a warning message when both `max_new_tokens` and `max_length` are set. But `max_length` is not set by me, but the downloaded pretrained model(`facebook/opt-1.3b`). So far as I know, almost all generative models set `max_length`, so this warning message is always shown up when the user set `max_new_tokens`, regardless of whether the user actually set `max_length` as well or not. However, to avoid unnecessary warning messages, I think **the warning message should be shown up only when the user *explicitly* set both `max_new_tokens` and `max_length`** - Even `max_length` value on the warning message is wrong, because `generation_config.max_length` is overwrited with `generation_config.max_new_tokens + input_ids_seq_length` if `max_new_tokens` has been set. 2. `logging` module throws an error, because `UserWarning` is passed as a parameter to `logger.warn()` method. 
```python logger.warn( f"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(=" f"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. " "Please refer to the documentation for more information. " "(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)", UserWarning, ) ``` - It seems like `transformers` uses `warnings.warn()`, `logger.warn()`, and `logger.warning()` interchangeably. I think **these should be consolidated into one method used consistently, for better coherence.**
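A minimal sketch of point 2 above: `logging.Logger.warning()` %-formats any extra positional arguments into the message, so passing a warning class triggers the "not all arguments converted" failure, whereas `warnings.warn()` is the API that accepts a category:

```python
import logging
import warnings

logging.basicConfig()
logger = logging.getLogger("demo")

warnings.warn("max_new_tokens will take precedence.", UserWarning)   # ok: second arg is a category
logger.warning("max_new_tokens (=%d) will take precedence.", 70)     # ok: second arg is a %-format value
logger.warning("max_new_tokens will take precedence.", UserWarning)  # reproduces "--- Logging error ---"
```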
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22636/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22635
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22635/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22635/comments
https://api.github.com/repos/huggingface/transformers/issues/22635/events
https://github.com/huggingface/transformers/pull/22635
1,657,863,602
PR_kwDOCUB6oc5NypUV
22,635
[doc] Try a few ≠ ways of linking to Papers, users, and org profiles
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22635/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22635", "html_url": "https://github.com/huggingface/transformers/pull/22635", "diff_url": "https://github.com/huggingface/transformers/pull/22635.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22635.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22634
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22634/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22634/comments
https://api.github.com/repos/huggingface/transformers/issues/22634/events
https://github.com/huggingface/transformers/issues/22634
1,657,760,350
I_kwDOCUB6oc5iz2pe
22,634
[run_translation.py] out of range integral type conversion attempted
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker Seems like the model and tokenizer have mismatched length", "Yeah, but :\r\n- the tokenizer has 100 additional special tokens so even if the model predicts something above 32000 (the model's vocab size) you get an extra id (until 32099)\r\n- the tokenizer has an `unk_token` so when you go above `32099`, the fast simply outputs `''` while the slow ` '<extra_id_-29>'` (which is a bit strange I'll give you that 😅 \r\nsnippet:\r\n```python \r\n>>> from transformers import T5Tokenizer, T5TokenizerFast\r\n>>> tokenizer_slow = T5Tokenizer.from_pretrained(\"t5-base\")\r\n>>> tokenizer_slow.decode(32140) # above vocab size\r\n'<extra_id_-3167901>'\r\n>>> tokenizer_fast = T5TokenizerFast.from_pretrained(\"t5-base\")\r\n''\r\n```\r\nThe issue is different. This is a integer overflow in rust: \r\n```python \r\n>>> tokenizer_fast.decode(3200000000000)\r\n---------------------------------------------------------------------------\r\nOverflowError Traceback (most recent call last)\r\nCell In[29], line 1\r\n----> 1 tokenizer_fast.decode(3200000000000)\r\n\r\nFile ~/Work/transformers/src/transformers/tokenization_utils_base.py:3485, in PreTrainedTokenizerBase.decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)\r\n 3482 # Convert inputs to python lists\r\n 3483 token_ids = to_py_obj(token_ids)\r\n-> 3485 return self._decode(\r\n 3486 token_ids=token_ids,\r\n 3487 skip_special_tokens=skip_special_tokens,\r\n 3488 clean_up_tokenization_spaces=clean_up_tokenization_spaces,\r\n 3489 **kwargs,\r\n 3490 )\r\n\r\nFile ~/Work/transformers/src/transformers/tokenization_utils_fast.py:549, in PreTrainedTokenizerFast._decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces, **kwargs)\r\n 547 if isinstance(token_ids, int):\r\n 548 token_ids = [token_ids]\r\n--> 549 text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\n 551 clean_up_tokenization_spaces = (\r\n 552 clean_up_tokenization_spaces\r\n 553 if clean_up_tokenization_spaces is not None\r\n 554 else self.clean_up_tokenization_spaces\r\n 555 )\r\n 556 if clean_up_tokenization_spaces:\r\n\r\nOverflowError: out of range integral type conversion attempted\r\n``` \r\nThat means you are juste giving a huge huge number to decode is there a reason ?", "Please note I've only relayed the errors reported on the pytorch Issued by a user trying to use `torch.compile`. ", "Hi guys,\r\n\r\nI have the same problem with the `run_seq2seq_qa.py` script and it turns out, that `preds` are passed to the `decode` function, with the following content:\r\n\r\n```\r\n[[ 0 250099 1013 ... -100 -100 -100] \r\n [ 0 250099 1013 ... -100 -100 -100] \r\n [ 0 250099 1013 ... -100 -100 -100] \r\n ... \r\n [ 0 250099 260 ... -100 -100 -100] \r\n [ 0 250099 442 ... -100 -100 -100]\r\n [ 0 250099 3883 ... -100 -100 -100]]\r\n```\r\n\r\nSo the problematic thing here is `-100` I guess, because I can reproduce the error with:\r\n\r\n```\r\n>>> tokenizer.decode(-100)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/ubuntu/transformers/src/transformers/tokenization_utils_base.py\", line 3485, in decode\r\n return self._decode(\r\n File \"/home/ubuntu/transformers/src/transformers/tokenization_utils_fast.py\", line 549, in _decode\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\nOverflowError: out of range integral type conversion attempted\r\n```", "Awsome thanks for providing this! 
Indeed these should be converted to padding", "Could it be similar to this fix? https://github.com/huggingface/transformers/pull/18592\r\nThe hardcoded -100 doesn't seem to always do the right thing.", "I tried with another model arch and it's breaks too but in another way. so eval is quite broken in many ways.\r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python examples/pytorch/translation/run_translation.py --model_name_or_path 'facebook/wmt19-en-ru' --do_train --do_eval --source_lang en --target_lang de --source_prefix 'translate English to German: ' --dataset_name stas/wmt14-en-de-pre-processed --output_dir /tmp/tst-translation --num_train_epochs 1 --per_device_train_batch_size=1 --max_train_samples 10 --overwrite_output_dir --seed 1137 --per_device_eval_batch_size 1 --predict_with_generate --fp16 --max_eval_samples 10\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"examples/pytorch/translation/run_translation.py\", line 664, in <module>\r\n main()\r\n File \"examples/pytorch/translation/run_translation.py\", line 605, in main\r\n metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix=\"eval\")\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py\", line 159, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py\", line 2993, in evaluate\r\n output = eval_loop(\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py\", line 3174, in evaluation_loop\r\n loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py\", line 290, in prediction_step\r\n outputs = model(**inputs)\r\n File \"/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/fsmt/modeling_fsmt.py\", line 1251, in forward\r\n masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.tgt_vocab_size), labels.view(-1))\r\n File \"/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/modules/loss.py\", line 1174, in forward\r\n return F.cross_entropy(input, target, weight=self.weight,\r\n File \"/home/stas/anaconda3/envs/py38-pt20/lib/python3.8/site-packages/torch/nn/functional.py\", line 3029, in cross_entropy\r\n return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)\r\nValueError: Expected input batch_size (56) to match target batch_size (48).\r\n```", "@stas00 I am facing the same issue while fine-tuning t5-small using `examples/pytorch/summarization/run_summarization.py`\r\nAnd I can see `preds` has `-100` and so decode fails with the below error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/pytorch/summarization/run_summarization.py\", line 751, in <module> main()\r\n File \"examples/pytorch/summarization/run_summarization.py\", line 705, in main\r\n predict_results = trainer.predict(predict_dataset, metric_key_prefix=\"predict\")\r\n File 
\"src/transformers/trainer_seq2seq.py\", line 216, in predict\r\n return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"src/transformers/trainer.py\", line 3069, in predict\r\n output = eval_loop(\r\n File \"src/transformers/trainer.py\", line 3281, in evaluation_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))\r\n File \"examples/pytorch/summarization/run_summarization.py\", line 635, in compute_metrics\r\n decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)\r\n File \"src/transformers/tokenization_utils_base.py\", line 3446, in batch_decode\r\n return [\r\n File \"src//transformers/tokenization_utils_base.py\", line 3447, in <listcomp>\r\n self.decode(\r\n File \"src/transformers/tokenization_utils_base.py\", line 3486, in decode\r\n return self._decode(\r\n File \"src/transformers/tokenization_utils_fast.py\", line 549, in _decode\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\nOverflowError: out of range integral type conversion attempted\r\n```\r\n\r\n", "The first issue is addressed in #22693\r\n\r\nThe second issue with FSMT is due to [this line](https://github.com/huggingface/transformers/blob/151425ddb29d4ad1a121e8cce62000a2ac52d3ba/src/transformers/trainer_seq2seq.py#L270) added by @gante . The `decoder_input_ids` not passed to `generate` result in generations that have the same length as the inputs and not the targets.", "@sgugger thanks for the fix. I can see the same issue in line 718 https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py#L718\r\n\r\npossible fix:\r\npreds= np.where(predict_results.predictions != -100, predict_results.predictions, tokenizer.pad_token_id)\r\npredictions = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=True)\r\n", "Good catch, adding this too in the PR.", "Thinking more, I think this is also a result of the recent changes in generate, which used to be the one padding the result with `tokenizer.pad_token_id`, and it's now the `Trainer` padding them with -100. cc @gante ", "Hey everyone -- the last issues should be gone with #22772, but feel free to comment/reopen if any related problem persists!", "Hi! since a couple of weeks I also stumbled on this error. It was working just fine before. I am pretty sure I have transformer installed from source so the PR with the fix is there as well. 
I am using Bart-large and the Trainer class.\r\nI first define rouge as training evaluation function:\r\n```python\r\n def compute_rouge(pred): \r\n predictions, labels = pred\r\n #decode the predictions\r\n decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True)\r\n #decode labels\r\n decode_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)\r\n\r\n #compute results\r\n res = rouge.compute(predictions=decode_predictions, references=decode_labels, use_stemmer=True)\r\n #get %\r\n return res\r\n``` \r\nAnd give it to the trainer\r\n```python\r\n trainer = Seq2SeqTrainer(\r\n model, \r\n args,\r\n train_dataset=tokenized_dataset['train'],\r\n eval_dataset=tokenized_dataset['valid'],\r\n data_collator=collator,\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_rouge\r\n )\r\n``` \r\nThen the script breaks in Trainer.train, while decoding for dev set evaluation:\r\n\r\n\r\n``` \r\nTraceback (most recent call last):\r\n File \"/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/finetunemodel.py\", line 226, in <module>\r\n main(args)\r\n File \"/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/finetunemodel.py\", line 149, in main\r\n trainer.train()\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py\", line 1662, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py\", line 2022, in _inner_training_loop\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py\", line 2288, in _maybe_log_save_evaluate\r\n metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer_seq2seq.py\", line 159, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py\", line 2994, in evaluate\r\n output = eval_loop(\r\n ^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py\", line 3283, in evaluation_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/finetunemodel.py\", line 103, in compute_rouge\r\n decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 3456, in batch_decode\r\n return [\r\n ^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 3457, in <listcomp>\r\n self.decode(\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 3496, in decode\r\n return self._decode(\r\n ^^^^^^^^^^^^^\r\n File 
\"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py\", line 549, in _decode\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nOverflowError: out of range integral type conversion attempted\r\n\r\n```\r\nInterestingly enough, on a similar formatted dataset (but longer text) while using Longformer (led), I get the same error but this time at prediction time, thus the trained is completed successfully: \r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/LED_4_DWIE.py\", line 236, in <module>\r\n main(args)\r\n File \"/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/LED_4_DWIE.py\", line 161, in main\r\n preds, labels, metrics = trainer.predict(tokenized_dataset['test'], num_beams=5, min_length=50, max_length=max_target, no_repeat_ngram_size=2, early_stopping=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer_seq2seq.py\", line 216, in predict\r\n return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py\", line 3070, in predict\r\n output = eval_loop(\r\n ^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/trainer.py\", line 3283, in evaluation_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/kg2Narrative/KGNarrative2/script4trainingLLM/LED_4_DWIE.py\", line 103, in compute_rouge\r\n decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 3456, in batch_decode\r\n return [\r\n ^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 3457, in <listcomp>\r\n self.decode(\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 3496, in decode\r\n return self._decode(\r\n ^^^^^^^^^^^^^\r\n File \"/home/ghoogerw/.conda/envs/kg2Narrative/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py\", line 549, in _decode\r\n text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nOverflowError: out of range integral type conversion attempted\r\n```\r\n\r\n\r\n\r\n\r\n", "Hey @GabHoo -- could you share with us a short stand-alone script to reproduce the issue? :)", "Thank you for time. Here is a standaone version of the script. 
I hope it reproduces the issue,\r\n\r\n```python\r\nfrom transformers import AutoTokenizer,AutoModelForSeq2SeqLM,DataCollatorForSeq2Seq,Seq2SeqTrainingArguments,Seq2SeqTrainer\r\nimport os\r\nfrom datasets import load_dataset\r\nimport numpy as np\r\nfrom utils import *\r\nimport torch\r\nimport evaluate\r\nimport sys\r\nimport json\r\nimport time\r\nimport argparse\r\n\r\ndef tokenize_for_evaluation(tokenizer,preds,labels):\r\n\r\n\r\n predicted_text = []\r\n golden_labels = []\r\n\r\n for pred, label in zip(preds, labels):\r\n\r\n gen = tokenizer.decode(pred, skip_special_tokens=True)\r\n gen = str(gen)\r\n predicted_text.append(gen)\r\n\r\n gold = tokenizer.decode(label, skip_special_tokens=True)\r\n gold = str(gold)\r\n golden_labels.append(gold)\r\n\r\n return predicted_text,golden_labels\r\n \r\ndef process_data_BART(data_to_process,tokenizer,max_input,max_target,typeKG ):\r\n\r\n #get the dialogue text\r\n inputs = [graph for graph in data_to_process[f'{typeKG}']]\r\n #tokenize text\r\n model_inputs = tokenizer(inputs, max_length=max_input, padding='max_length', truncation=True)\r\n\r\n #tokenize labels\r\n #with tokenizer.as_target_tokenizer():\r\n targets = [target for target in data_to_process['story']]\r\n model_targets = tokenizer(targets, max_length=max_target, padding='max_length', truncation=True)\r\n \r\n\r\n #returns input_ids, attention_masks, labels\r\n \r\n data_to_process[\"input_ids\"] = model_inputs.input_ids\r\n data_to_process[\"attention_mask\"] = model_inputs.attention_mask\r\n data_to_process[\"labels\"] = model_targets.input_ids\r\n\r\n return data_to_process\r\n\r\n \r\ndatapath = '/datapath'\r\ndataprefix ='pop'\r\ntypeKG = 'Instances_KG'\r\nmodel_checkpoint=\"facebook/bart-base\"\r\nexperiment_name = 'exp'\r\nlearning_rate =1e-4\r\nbatch_size = 1\r\nepochs =3\r\nsave_model = False\r\nmax_target = 512\r\nmax_input = 512\r\n\r\n\r\ntrain_file = datapath +'/' + dataprefix + '_train' + '.json'\r\ndev_file = datapath +'/'+ dataprefix + '_dev' + '.json'\r\ntest_file = datapath +'/' + dataprefix + '_test'+ '.json'\r\n\r\n\r\nprint(\"Loading dataset from \",datapath)\r\ndataset = load_dataset('json', data_files={'train': train_file, 'valid': dev_file, 'test': test_file})\r\n\r\ntodrop=list(set(dataset['test'].column_names)-set([typeKG,'story'])) #This line returns a list of all the columns to drop (all columns minus the ones we need (input typeKG and story))\r\n\r\n\r\nprint(\"Loading tokenizer\")\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint,add_eos_token=True)\r\n\r\nprint(\"\\nProcessing Dataset\")\r\n#the processing of the data is done in batches to make it faster, with 4 processes\r\ntokenized_dataset = dataset.map(lambda example: process_data_BART(example, tokenizer,max_input,max_target,typeKG), batched=True, num_proc=4,remove_columns=todrop)\r\n\r\nprint(\"\\nLoading MODEL\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)\r\n#model.to(device)\r\n\r\nprint(\"Collator for batches\")\r\ncollator = DataCollatorForSeq2Seq(tokenizer, model=model) #this is necessary for dividing into batches for training\r\n\r\nprint('Loading rouge')\r\nrouge = evaluate.load('rouge')\r\n\r\n\r\ndef compute_rouge(pred): \r\n predictions, labels = pred\r\n #decode the predictions\r\n decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True)\r\n #decode labels\r\n decode_labels = tokenizer.batch_decode(labels, skip_special_tokens=True,clean_up_tokenization_spaces=True)\r\n\r\n #compute results\r\n res = 
rouge.compute(predictions=decode_predictions, references=decode_labels, use_stemmer=True)\r\n #get %\r\n return res\r\n\r\nprint(\"\\nPREPARING FOR TRAINING...\")\r\n\r\n#defining training arguments\r\nargs = Seq2SeqTrainingArguments(\r\n experiment_name,\r\n evaluation_strategy='epoch',\r\n learning_rate=learning_rate, \r\n per_device_train_batch_size= batch_size,\r\n per_device_eval_batch_size= batch_size,\r\n gradient_accumulation_steps=3, #compute gradient on n examples KG story \r\n weight_decay=0.01, #regularization\r\n save_total_limit=1, #this is the max number of checkpoints saved, after which previous checkpoints are removed\r\n num_train_epochs=epochs, #number of epochs\r\n predict_with_generate=True, \r\n generation_max_length = 512, #max number of tokens per generation \r\n generation_num_beams=5, #decoding strategy! greedy search, beam search \r\n eval_accumulation_steps=1, #backprop \r\n fp16=True, #memory management\r\n disable_tqdm=True)\r\n#only CUDA available -> fp16=True\r\n\r\n\r\n### almost training time\r\ntrainer = Seq2SeqTrainer(\r\n model, \r\n args,\r\n train_dataset=tokenized_dataset['train'],\r\n eval_dataset=tokenized_dataset['valid'],\r\n data_collator=collator,\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_rouge\r\n)\r\n\r\n\r\ntrainer.train()\r\n\r\n\r\nif save_model:\r\n print(\"Saving model\")\r\n trainer.save_model(experiment_name+\"/saved_model\")\r\n\r\n\r\nprint(\"\\nPREDICTING..\")\r\npreds, labels, metrics = trainer.predict(tokenized_dataset['test'], num_beams=5, min_length=50, max_length=512, no_repeat_ngram_size=2, early_stopping=True)\r\n\r\npredicted_text,golden_labels=tokenize_for_evaluation(tokenizer,preds,labels)\r\n\r\n#here we are already past the error \r\nprint(\"\\nRESULT SCORES:\")\r\n\r\nscores = metrics.items()\r\nprint(f'Results: {scores}')\r\n```\r\nThe data looks like the following (substitute the folder in the data path):\r\n```\r\n {\r\n \"story\": \"Baymax is a character from the film Big Hero 6 starring Scott Adsit. He was created by Steven T Seagle and the American, Duncan Rouleau.\",\r\n \"Types_KG\": \"[CORE] Baymax is a character from the film Big Hero 6 [TRIPLES] Duncan Rouleau - nationality - Americans | Baymax - creators - Duncan Rouleau | Baymax - creator - Steven T. Seagle | Baymax - series - Big Hero 6 (film) | Big Hero 6 (film) - starring - Scott Adsit | Baymax - creator - Duncan Rouleau | Duncan Rouleau - nationality - Americans | Baymax - creators - Steven T. Seagle | Baymax - series - Big Hero 6 (film) | Big Hero 6 (film) - starring - Scott Adsit | Scott Adsit - type - person | Americans - type - ethnic group | Steven T. Seagle - type - person | Duncan Rouleau - type - person | Big Hero 6 (film) - type - person\",\r\n \"Instances_KG\": \"[CORE] Baymax is a character from the film Big Hero 6 [TRIPLES] Duncan Rouleau - nationality - Americans | Baymax - creators - Duncan Rouleau | Baymax - creator - Steven T. Seagle | Baymax - series - Big Hero 6 (film) | Big Hero 6 (film) - starring - Scott Adsit | Baymax - creator - Duncan Rouleau | Duncan Rouleau - nationality - Americans | Baymax - creators - Steven T. Seagle | Baymax - series - Big Hero 6 (film) | Big Hero 6 (film) - starring - Scott Adsit\",\r\n \"\r\n```", "@GabHoo I'm afraid you'll have to share a complete data example or another script; the current instructions fail at data loading time if I create a file as specified. (`ArrowInvalid: JSON parse error: Missing a name for object member. 
in row 0`)", "@GabHoo Hello, I had the same problem and I think the problem is in DataCollatorForSeq2Seq, \r\nmore specifically in label_pad_token_id. \r\nThe collator is using label_pad_token_id = -100, but your tokenizer is using a different one (tokenizer.pad_token_id = 1).\r\nCan you try?\r\n`\r\ncollator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=tokenizer.pad_token_id)\r\n`", "Hey @gante, I think the [behavior of DataCollatorForSeq2Seq](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/data/data_collator.py#L576) is really unexpected. Why does it require label_pad_token_id if it can use tokenizer.pad_token_id, as with [padding_side](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/data/data_collator.py#LL574C13-L574C26)?", "Hey @Pavloveuge -- the label padding triggers a different behavior at train time (if my memory does not fail me, the loss is ignored for that token)", "Oh, yeah, you're right, but this behavior still results in an error. And it doesn't matter which version of the tokenizer I use (Fast or not). \r\nIn case use_fast=False:\r\n`\r\nTypeError: sequence item 9: expected str instance, NoneType found\r\n`\r\nIn case use_fast=True:\r\n`\r\nOverflowError: out of range integral type conversion attempted.\r\n`\r\n", "@Pavloveuge that sounds like a bug indeed :) Would you be able to share a short stand-alone script to reproduce the issue?", "@gante Should I open a new issue or reopen this?", "@Pavloveuge A new issue would be preferable 👍 " ]
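A minimal sketch of the workaround discussed in the thread above, assuming `tokenizer` and `rouge` are in scope as in the posted script: mask the `-100` label-padding sentinel before decoding, since handing `-100` to `batch_decode` is what raises the `OverflowError`.

```python
import numpy as np

def compute_rouge(pred):
    predictions, labels = pred
    # -100 marks positions the loss should ignore; it is not a valid token id,
    # so swap it for the tokenizer's pad token before decoding.
    predictions = np.where(predictions != -100, predictions, tokenizer.pad_token_id)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decode_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    decode_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return rouge.compute(predictions=decode_predictions, references=decode_labels, use_stemmer=True)
```

The same guard works for the labels/predictions returned by `trainer.predict`, which is the other place the error surfaces in this thread.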
1,680
1,687
1,681
CONTRIBUTOR
null
splitting off from https://github.com/huggingface/transformers/issues/22571 as it was a secondary problem reported there: ### Reproduction ``` CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-base --do_train --do_eval --source_lang en \ --target_lang de --source_prefix 'translate English to German: ' \ --dataset_name stas/wmt14-en-de-pre-processed --output_dir \ /tmp/tst-translation --num_train_epochs 1 --per_device_train_batch_size=1 \ --max_train_samples 10 --overwrite_output_dir --seed 1137 \ --per_device_eval_batch_size 1 --predict_with_generate --fp16 \ --max_eval_samples 10 ``` fails inside eval: ``` [INFO|trainer.py:3126] 2023-04-04 09:28:07,548 >> ***** Running Evaluation ***** [INFO|trainer.py:3128] 2023-04-04 09:28:07,548 >> Num examples = 10 [INFO|trainer.py:3131] 2023-04-04 09:28:07,548 >> Batch size = 1 [INFO|configuration_utils.py:575] 2023-04-04 09:28:07,552 >> Generate config GenerationConfig { "_from_model_config": true, "decoder_start_token_id": 0, "eos_token_id": 1, "pad_token_id": 0, "transformers_version": "4.28.0.dev0" } 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 3.72it/s]Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 664, in <module> main() File "examples/pytorch/translation/run_translation.py", line 605, in main metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval") File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer_seq2seq.py", line 159, in evaluate return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 2990, in evaluate output = eval_loop( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/trainer.py", line 3278, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "examples/pytorch/translation/run_translation.py", line 546, in compute_metrics decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3445, in batch_decode return [ File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3446, in <listcomp> self.decode( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_base.py", line 3485, in decode return self._decode( File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/tokenization_utils_fast.py", line 549, in _decode text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens) OverflowError: out of range integral type conversion attempted ``` @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22634/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22633
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22633/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22633/comments
https://api.github.com/repos/huggingface/transformers/issues/22633/events
https://github.com/huggingface/transformers/pull/22633
1,657,736,175
PR_kwDOCUB6oc5NyPMX
22,633
Debugging the doc-builder
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22633/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22633", "html_url": "https://github.com/huggingface/transformers/pull/22633", "diff_url": "https://github.com/huggingface/transformers/pull/22633.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22633.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22632
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22632/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22632/comments
https://api.github.com/repos/huggingface/transformers/issues/22632/events
https://github.com/huggingface/transformers/pull/22632
1,657,724,136
PR_kwDOCUB6oc5NyMut
22,632
[`Blip`] Fix slow tests and doctests with correct values
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/22625 In fact, the `num_attention_heads` of `BlipTextModel` should be 12 and not 8. Hence the models that are on the Hub were producing different logits / generations than the original implementation in some cases. I made PRs on the Hub: - https://huggingface.co/Salesforce/blip-itm-large-flickr/discussions/1 - https://huggingface.co/Salesforce/blip-itm-base-coco/discussions/3 - https://huggingface.co/Salesforce/blip-image-captioning-base/discussions/13 - https://huggingface.co/Salesforce/blip-image-captioning-large/discussions/8#642eed4cce2efe48a1aa1497 - https://huggingface.co/Salesforce/blip-vqa-base/discussions/3#642eed68ae8ae35b7a9bbb7f - https://huggingface.co/Salesforce/blip-vqa-capfilt-large/discussions/5 - https://huggingface.co/Salesforce/blip-itm-large-flickr/discussions/3 - https://huggingface.co/Salesforce/blip-itm-large-coco/discussions/3 And tested them on the slow tests and doctests with `.from_pretrained(xxx, revision="refs/pr/xx")`; this PR fixes the slow tests and doctests with the correct values. Let's merge this PR and I'll merge the PRs that are on the Hub. cc @sgugger
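For reference, a hedged example of how one of the Hub PRs listed above can be tested before merging; the `refs/pr/13` revision matches the blip-image-captioning-base discussion linked above, and is only valid while that PR is open:

```python
from transformers import BlipProcessor, BlipForConditionalGeneration

# "refs/pr/13" is the git ref the Hub exposes for discussion/PR #13 on this repo;
# once the PR is merged, a plain from_pretrained call picks up the fix from main.
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base", revision="refs/pr/13"
)
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
```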
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22632/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22632", "html_url": "https://github.com/huggingface/transformers/pull/22632", "diff_url": "https://github.com/huggingface/transformers/pull/22632.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22632.patch", "merged_at": 1680801172000 }
https://api.github.com/repos/huggingface/transformers/issues/22631
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22631/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22631/comments
https://api.github.com/repos/huggingface/transformers/issues/22631/events
https://github.com/huggingface/transformers/pull/22631
1,657,707,904
PR_kwDOCUB6oc5NyJeY
22,631
Update input values for docstring
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Asking for a sanity check from you, @sanchit-gandhi to make sure the description of the audio inputs is correct :) ", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,681
1,681
COLLABORATOR
null
# What does this PR do? Updates the docstring values for the AST model's `input_values`, which were incorrectly written for pixel values. Fixes #22610 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22631/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22631", "html_url": "https://github.com/huggingface/transformers/pull/22631", "diff_url": "https://github.com/huggingface/transformers/pull/22631.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22631.patch", "merged_at": 1681296269000 }
https://api.github.com/repos/huggingface/transformers/issues/22630
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22630/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22630/comments
https://api.github.com/repos/huggingface/transformers/issues/22630/events
https://github.com/huggingface/transformers/pull/22630
1,657,696,168
PR_kwDOCUB6oc5NyHD9
22,630
LlamaTokenizerFast Fix (.., from_slow=True).
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "WIll add the tests in a followup PR ", "So `test_save_slow_from_fast_and_reload_fast` tests this, but it was skipped. It should never be skipped. We should test with a minimal config exactly for the reason we just saw" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? `AutoTokenizer.from_pretrained(..., from_slow=True)` and ` tokenizer.save_pretrained("./tmp")` wasn't working without those. @ArthurZucker wants to add some tests for those. I'm surprised by a few of these necessities since they seems quite standard defaults. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
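A short sketch of the round trip this PR fixes (essentially what the skipped `test_save_slow_from_fast_and_reload_fast` test mentioned above would exercise). The checkpoint name is a placeholder; any repo that ships a slow sentencepiece LLaMA tokenizer works:

```python
from transformers import AutoTokenizer

# Build the fast tokenizer by converting the slow (sentencepiece) one ...
tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", from_slow=True)
tok.save_pretrained("./tmp")

# ... and check that it survives a save/reload round trip.
reloaded = AutoTokenizer.from_pretrained("./tmp")
assert tok("Hello world").input_ids == reloaded("Hello world").input_ids
```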
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22630/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22630", "html_url": "https://github.com/huggingface/transformers/pull/22630", "diff_url": "https://github.com/huggingface/transformers/pull/22630.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22630.patch", "merged_at": 1680799980000 }
https://api.github.com/repos/huggingface/transformers/issues/22629
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22629/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22629/comments
https://api.github.com/repos/huggingface/transformers/issues/22629/events
https://github.com/huggingface/transformers/pull/22629
1,657,690,008
PR_kwDOCUB6oc5NyF0A
22,629
YaLM Implementation
{ "login": "BlackSamorez", "id": 16901341, "node_id": "MDQ6VXNlcjE2OTAxMzQx", "avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BlackSamorez", "html_url": "https://github.com/BlackSamorez", "followers_url": "https://api.github.com/users/BlackSamorez/followers", "following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}", "gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}", "starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions", "organizations_url": "https://api.github.com/users/BlackSamorez/orgs", "repos_url": "https://api.github.com/users/BlackSamorez/repos", "events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}", "received_events_url": "https://api.github.com/users/BlackSamorez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22629). All of your documentation changes will be reflected on that endpoint.", "Hey! How do you feel about maybe adding this to the hub rather than on transformers? Should be easier to do following this [tutorial](https://huggingface.co/docs/transformers/custom_models). WDYT? ", "Wow, didn't know this was an option. I'll definitely look into this since it's unlikely we'll be getting more models based on this architecture anyway.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
CONTRIBUTOR
null
# What does this PR do? Implementation of YaLM model (https://github.com/yandex/YaLM-100B). Model weights are available [here](https://huggingface.co/yandex/yalm-100b). Weight conversion will be included. ### Sources The code is based on the [original model code](https://github.com/yandex/YaLM-100B) which is an old and heavily modified [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) fork. I also borrowed some code from [gpt_neox transformers implementation](https://github.com/huggingface/transformers/tree/main/src/transformers/models/gpt_neox). ### Licences The model weights [were published under the Apache 2.0 license](https://github.com/yandex/YaLM-100B/blob/main/LICENSE). Megatron-LM is licensed under the [Megatron-LM license](https://github.com/yandex/YaLM-100B/blob/main/megatron_lm/LICENSE). Not sure what the latter is. ### Correctness The model works but I can't verify it's correctness since I don't have access to 200GB of VRAM. A smaller model of same architecture should be created for testing purposes. Tokenizer and conversion script are not done yet. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22629/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22629", "html_url": "https://github.com/huggingface/transformers/pull/22629", "diff_url": "https://github.com/huggingface/transformers/pull/22629.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22629.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22628
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22628/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22628/comments
https://api.github.com/repos/huggingface/transformers/issues/22628/events
https://github.com/huggingface/transformers/pull/22628
1,657,674,619
PR_kwDOCUB6oc5NyCra
22,628
[`bnb`] 8bit models should not be converted to `DDP`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Fixes issues that users can encounter on multi-GPU setups, such as: https://github.com/huggingface/peft/issues/269#issuecomment-1498776567 In fact, 8bit models should not be converted to DDP. cc @sgugger @pacman100
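A rough sketch of the kind of guard this PR describes (illustrative, not the exact Trainer code), assuming the `is_loaded_in_8bit` flag that `from_pretrained(..., load_in_8bit=True)` sets on the model:

```python
import torch

def maybe_wrap_ddp(model, local_rank):
    # int8 weights cannot be synchronized by DistributedDataParallel,
    # so quantized models are returned unwrapped.
    if getattr(model, "is_loaded_in_8bit", False):
        return model
    return torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```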
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22628/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22628", "html_url": "https://github.com/huggingface/transformers/pull/22628", "diff_url": "https://github.com/huggingface/transformers/pull/22628.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22628.patch", "merged_at": 1680797364000 }
https://api.github.com/repos/huggingface/transformers/issues/22627
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22627/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22627/comments
https://api.github.com/repos/huggingface/transformers/issues/22627/events
https://github.com/huggingface/transformers/pull/22627
1,657,670,282
PR_kwDOCUB6oc5NyBzT
22,627
Make FlaxPreTrainedModel a Flax Module
{ "login": "cgarciae", "id": 5862228, "node_id": "MDQ6VXNlcjU4NjIyMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cgarciae", "html_url": "https://github.com/cgarciae", "followers_url": "https://api.github.com/users/cgarciae/followers", "following_url": "https://api.github.com/users/cgarciae/following{/other_user}", "gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}", "starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions", "organizations_url": "https://api.github.com/users/cgarciae/orgs", "repos_url": "https://api.github.com/users/cgarciae/repos", "events_url": "https://api.github.com/users/cgarciae/events{/privacy}", "received_events_url": "https://api.github.com/users/cgarciae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sanchit-gandhi's previous comment:\r\n\r\n\"\"\"\r\n\r\n\r\nThis looks super clean @cgarciae - really like this way of making the FlaxPreTrainedModel into an nn.Module through dataclasses.\r\n\r\nHad some questsions about the PR (mainly for my understanding), but think the general design philosophy looks good here.\r\n\r\nFrom my testing it all seems to work as expected - think though we should add some very visible warning messages though when a user passes _do_init=False to advise them to use the returned model as a Flax nn.Module (rather than falling-back on the __call__ method of FlaxPreTrainedModel). This is the only breaking change I see from this PR, but one that is unavoidable (since it's the exact thing we're trying to change).\r\n\r\n=> perhaps as a start we first make a PR that triggers this warning (advising users that the functinality is going to change in N months / releases time), and then have this PR as a follow-up that makes the changes?\r\n\r\nFor Flax BERT though this approach gives equivalence between the FlaxPreTrainedModel and the nn.Module -> for models that do extra pre-processing we can just modify the __call__ to do all the data pre-processing?\r\n\r\n\"\"\"", "Thanks for the review @sanchit-gandhi !\r\n\r\n> => perhaps as a start we first make a PR that triggers this warning (advising users that the functinality is going to change in N months / releases time), and then have this PR as a follow-up that makes the changes?\r\n\r\nI was planning on entirely deleting the `params` argument from `__call__` 😅. If we want to make the change a bit more gradual then maybe we could do something like this:\r\n\r\n```diff\r\n- if self._do_init:\r\n+ if self.scope is None:\r\n```\r\nThis would condition on whether the module being called inside `apply` or not.", "@sanchit-gandhi cleaned the PR a little, these are the minimal changes required.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22627). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Leaving closed in favour of #22866", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,687
1,687
NONE
null
# What does this PR do? WIP. Makes `FlaxPreTrainedModel` an `nn.Module` so Flax users can easily integrate it into other Flax networks or systems that expect Flax Modules. See #22499 for some discussion of the approach.
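As an illustration of what this change would enable (hedged: the head module and names below are made up), a pretrained model that is itself an `nn.Module` could be dropped into a larger Flax network like any other submodule:

```python
import flax.linen as nn

class ClassificationHead(nn.Module):
    backbone: nn.Module  # e.g. a FlaxBertModel instance, once it is a Module
    num_labels: int

    @nn.compact
    def __call__(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids, attention_mask).last_hidden_state
        # classify from the first ([CLS]) position
        return nn.Dense(self.num_labels)(hidden[:, 0])
```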
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22627/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22627", "html_url": "https://github.com/huggingface/transformers/pull/22627", "diff_url": "https://github.com/huggingface/transformers/pull/22627.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22627.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22626
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22626/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22626/comments
https://api.github.com/repos/huggingface/transformers/issues/22626/events
https://github.com/huggingface/transformers/pull/22626
1,657,643,935
PR_kwDOCUB6oc5Nx8g5
22,626
WIP Umt5
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Many thanks @agemagician ! I could import the flax import for umT5 (and made some suggestions here) and will try to get the conversion script running (I think it only needs adjustments for this relative position embeddings thing).", "cc @ArthurZucker ", "So far the Pytorch version correctly uses \"relative_attention_bias\" across all layers. However, the flax version doesn't.\r\nAny idea what could be the issue?\r\n\r\nYou can see this in line 224 on the flax file. I have even removed the if statement to force adding the \"relative_attention_bias\" layer but it doesn't \r\n\r\n@stefan-it @sgugger @ArthurZucker", "> So far the Pytorch version correctly uses \"relative_attention_bias\" across all layers. However, the flax version doesn't. Any idea what could be the issue?\r\n> \r\n> You can see this in line 224 on the flax file. I have even removed the if statement to force adding the \"relative_attention_bias\" layer but it doesn't\r\n> \r\n> @stefan-it @sgugger @ArthurZucker\r\n\r\nTo clarify, if I created a new flax model for UMT5, and printed the keys for the zero layer then:\r\n```\r\ntest_flax_model.params[\"encoder\"][\"block\"][\"0\"][\"layer\"][\"0\"][\"SelfAttention\"].keys()\r\ndict_keys(['q', 'k', 'v', 'o', 'relative_attention_bias'])\r\n```\r\nbut if I printed it for any follow layers:\r\n```\r\ntest_flax_model.params[\"encoder\"][\"block\"][\"1\"][\"layer\"][\"0\"][\"SelfAttention\"].keys()\r\ndict_keys(['q', 'k', 'v', 'o'])\r\n```\r\n\r\nHowever, in pytorch it shows that all layers have a separate relative_attention_bias:\r\n```\r\nMT5ForConditionalGeneration(\r\n (shared): Embedding(256384, 512)\r\n (encoder): UMT5Stack(\r\n (embed_tokens): Embedding(256384, 512)\r\n (block): ModuleList(\r\n (0-7): 8 x UMT5Block(\r\n (layer): ModuleList(\r\n (0): UMT5LayerSelfAttention(\r\n (SelfAttention): UMT5Attention(\r\n (q): Linear(in_features=512, out_features=384, bias=False)\r\n (k): Linear(in_features=512, out_features=384, bias=False)\r\n (v): Linear(in_features=512, out_features=384, bias=False)\r\n (o): Linear(in_features=384, out_features=512, bias=False)\r\n (relative_attention_bias): Embedding(32, 6)\r\n )\r\n (layer_norm): UMT5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (1): UMT5LayerFF(\r\n (DenseReluDense): UMT5DenseGatedActDense(\r\n (wi_0): Linear(in_features=512, out_features=1024, bias=False)\r\n (wi_1): Linear(in_features=512, out_features=1024, bias=False)\r\n (wo): Linear(in_features=1024, out_features=512, bias=False)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n (act): NewGELUActivation()\r\n )\r\n (layer_norm): UMT5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n )\r\n (final_layer_norm): UMT5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (decoder): UMT5Stack(\r\n (embed_tokens): Embedding(256384, 512)\r\n (block): ModuleList(\r\n (0-7): 8 x UMT5Block(\r\n (layer): ModuleList(\r\n (0): UMT5LayerSelfAttention(\r\n (SelfAttention): UMT5Attention(\r\n (q): Linear(in_features=512, out_features=384, bias=False)\r\n (k): Linear(in_features=512, out_features=384, bias=False)\r\n (v): Linear(in_features=512, out_features=384, bias=False)\r\n (o): Linear(in_features=384, out_features=512, bias=False)\r\n (relative_attention_bias): Embedding(32, 6)\r\n )\r\n (layer_norm): UMT5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (1): UMT5LayerCrossAttention(\r\n (EncDecAttention): UMT5Attention(\r\n (q): Linear(in_features=512, out_features=384, bias=False)\r\n (k): 
Linear(in_features=512, out_features=384, bias=False)\r\n (v): Linear(in_features=512, out_features=384, bias=False)\r\n (o): Linear(in_features=384, out_features=512, bias=False)\r\n (relative_attention_bias): Embedding(32, 6)\r\n )\r\n (layer_norm): UMT5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (2): UMT5LayerFF(\r\n (DenseReluDense): UMT5DenseGatedActDense(\r\n (wi_0): Linear(in_features=512, out_features=1024, bias=False)\r\n (wi_1): Linear(in_features=512, out_features=1024, bias=False)\r\n (wo): Linear(in_features=1024, out_features=512, bias=False)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n (act): NewGELUActivation()\r\n )\r\n (layer_norm): UMT5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n )\r\n (final_layer_norm): UMT5LayerNorm()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (lm_head): Linear(in_features=512, out_features=256384, bias=False)\r\n)\r\n```", "I have added a script to convert the original t5x jax model directly to pytorch.\r\nHowever, the results of the pytorch output are still garbage.\r\n\r\nPytoch automatically check for any missing keys + the shape. So, I think now the problem might be the merging of the KV into a single matrix.\r\n\r\nI have also disabled relative bias for cross attention as it is not needed there.", "@adarob could you please help us with the `joined_kv` \"fusing\" operation. \r\n\r\nT5 uses `joined_kv` for `num_heads` and `d_kv`. But scaled T5 has these two variables, so we need to \"fuse\" them manually. Can we just use a `.reshape(config.d_model, config.num_heads * config.d_kv)`, e.g. shape of (512, 6, 64) will be converted to (512, 384) :thinking: ", "Not sure it follow, in transformers, T5 has \r\n```python \r\nself.key_value_proj_dim = config.d_kv\r\nself.n_heads = config.num_heads\r\nself.inner_dim = self.n_heads * self.key_value_proj_dim\r\n```\r\nare you looking for something like this? ", "> @adarob could you please help us with the `joined_kv` \"fusing\" operation.\r\n> \r\n> T5 uses `joined_kv` for `num_heads` and `d_kv`. But scaled T5 has these two variables, so we need to \"fuse\" them manually. Can we just use a `.reshape(config.d_model, config.num_heads * config.d_kv)`, e.g. shape of (512, 6, 64) will be converted to (512, 384) 🤔\r\n\r\n@@adarob, we will highly appreciate your feedback as this is the current bottleneck to finalize the model integration.", "> Not sure it follow, in transformers, T5 has\r\n> \r\n> ```python\r\n> self.key_value_proj_dim = config.d_kv\r\n> self.n_heads = config.num_heads\r\n> self.inner_dim = self.n_heads * self.key_value_proj_dim\r\n> ```\r\n> \r\n> are you looking for something like this?\r\n\r\nThe current issue is related to the weights, in T5 the attention k,v,q,v are fused into 2d matrix, while the new umt5x version has a 3d matrix.\r\nPlease check the difference in \"t5x_attention_lookup\" function in both T5 and UMT5. We need to use reshape to fix it, but it seems it doesn't provide the correct results.", "Also pinging the main author of umT5 paper @hwchung27 for help :hugs: ", "Maybe we have more luck with pinging @cpgaffney1 as t5x contributor here :hugs: ", "adarob and hwchung27 no longer have this project as their main focus. I am also not a T5X owner. Tagging @gauravmishra ", "When are you planning to merge this MR? I would like to test this new model", "> When are you planning to merge this MR? 
I would like to test this new model\r\n\r\nUnfortunately, we are stuck in the conversion process and we haven't gotten help yet from either the authors or the Hugging Face team.", "@agemagician Could you please point out where you were asking a Hugging Face team member for help? I'm sorry we missed that message, but looking at the conversation I only see pings directed at persons out of the team.\r\n\r\ncc @younesbelkada since Arthur is on vacation.", "Thanks for the PR !\r\n@agemagician , could you point me to the exact issue you are facing, and in which conversion process?\r\nIs there something that is reproducible so that I can try to have a look locally?", "> @agemagician Could you please point out where you were asking a Hugging Face team member for help? I'm sorry we missed that message, but looking at the conversation I only see pings directed at persons out of the team.\r\n> \r\n> cc @younesbelkada since Arthur is on vacation.\r\n\r\nHi @sgugger, no worries. It was mainly when we created a new issue for integrating the model and then we created this PR ourselves.\r\nhttps://github.com/huggingface/transformers/issues/22573\r\n\r\nIt would be great if the HF team could help us finalize the integration of this model, since it is almost finished.", "> Thanks for the PR ! @agemagician , could you point me to the exact issue you are facing, and in which conversion process? Is there something that is reproducible so that I can try to have a look locally?\r\n\r\nHi @younesbelkada ,\r\n\r\nThanks for offering your help.\r\n\r\nHere is a summary of the current state:\r\nThe UMT5 model is almost identical to the T5 model except for the following:\r\n1. The original T5X checkpoint does not merge kv for k, o, q, and v values.\r\n2. They use separate relative attention for each layer.\r\n3. They use byte fallback in case of OOV for the tokenizer.\r\n\r\nWhat we have done:\r\n1. We created the script for converting the original t5x to pytorch and jax.\r\n2. We replicated the T5 model and separated the relative attention for each layer.\r\n3. We converted the tokenizer.\r\n\r\nWhat does not work:\r\n1. The results of the output model are rubbish.\r\n\r\nWhere we think the problem is:\r\n1. In how we join the separated kv values.\r\nhttps://github.com/agemagician/transformers/blob/UMT5/src/transformers/models/umt5/convert_umt5x_checkpoint_to_pytorch.py#L51\r\n\r\nHow you can replicate the process:\r\n1. You can use the pytorch conversion script here:\r\nhttps://github.com/agemagician/transformers/blob/UMT5/src/transformers/models/umt5/convert_umt5x_checkpoint_to_pytorch.py \r\n2. Then you can use the pytorch model from here:\r\nhttps://huggingface.co/agemagician/umt5-small\r\n\r\n@stefan-it please, let me know if I missed something here.\r\n\r\n@younesbelkada Please, let me know if you need any additional information :)", "Thanks for the detailed pointers! \r\nCan you point me to the t5x checkpoint for umt5-small so that I can try to convert the weights myself? ", "> Thanks for the detailed pointers! Can you point me to the t5x checkpoint for umt5-small so that I can try to convert the weights myself?\r\n\r\nSure, here is the link:\r\nhttps://github.com/google-research/t5x/blob/main/docs/models.md#umt5-checkpoints", "I appreciate everyone's hard work in this thread. 
I really hope that umT5 gets merged to HF, since the last time we got an LM that supports ~100 languages was in 2020 with mT5.", "it seems they moved the checkpoints to another place; this should be the correct path:\r\n```bash\r\ngcloud storage cp -r gs://t5-data/pretrained_models/t5x/umt5_small/checkpoint_1000000 ./\r\n```\r\n", "Thank you for all the work in this thread. Are you aware of [mLongT5](https://arxiv.org/pdf/2305.11129.pdf)? \r\nDo you know when this is going to be merged?", "Any update on the status of integrating umT5 and mLongT5?\r\n", "Hi, I am ramping up on taking over the PR and will update on the progress whenever possible. Thanks", "I am currently working on the tokenizer, we'll be linking a new PR for this model addition in the coming days! ", "Hi @ArthurZucker ,\r\n\r\nThe tokenizer in my repo should be working:\r\nhttps://huggingface.co/agemagician/umt5-small/tree/main\r\n\r\n", "Mmm I meant the fast tokenizer, I am mostly working on adding byte fallback support for fast tokenizers, which doesn’t exist yet 👌" ]
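For anyone picking this up, a minimal numeric sketch of the `joined_kv` fusing question raised above, using the small config's shapes (d_model=512, num_heads=6, d_kv=64); whether this plain reshape/transpose is the right mapping is exactly what remains unresolved in the thread:

```python
import numpy as np

d_model, num_heads, d_kv = 512, 6, 64

# t5x stores per-head projection kernels as (d_model, num_heads, d_kv)
kernel_3d = np.random.randn(d_model, num_heads, d_kv)

# candidate fuse: (512, 6, 64) -> (512, 384)
kernel_2d = kernel_3d.reshape(d_model, num_heads * d_kv)

# torch.nn.Linear weights are (out_features, in_features), so the q/k/v
# projections would additionally need a transpose: (384, 512)
torch_weight = kernel_2d.T
print(torch_weight.shape)  # (384, 512)
```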
1,680
1,688
1,688
CONTRIBUTOR
null
# What does this PR do? It supports umt5 models, which need separate relative attention biases for each layer. The current code will have backward compatibility with previous T5 and MT5 checkpoints. Fixes # (issue) https://github.com/huggingface/transformers/issues/22573 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Models: - text models: @ArthurZucker and @younesbelkada - @stefan-it
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22626/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22626", "html_url": "https://github.com/huggingface/transformers/pull/22626", "diff_url": "https://github.com/huggingface/transformers/pull/22626.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22626.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22625
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22625/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22625/comments
https://api.github.com/repos/huggingface/transformers/issues/22625/events
https://github.com/huggingface/transformers/issues/22625
1,657,639,257
I_kwDOCUB6oc5izZFZ
22,625
BLIP coco base default num_attention_heads should be 12
{ "login": "DianeBouchacourt", "id": 13796686, "node_id": "MDQ6VXNlcjEzNzk2Njg2", "avatar_url": "https://avatars.githubusercontent.com/u/13796686?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DianeBouchacourt", "html_url": "https://github.com/DianeBouchacourt", "followers_url": "https://api.github.com/users/DianeBouchacourt/followers", "following_url": "https://api.github.com/users/DianeBouchacourt/following{/other_user}", "gists_url": "https://api.github.com/users/DianeBouchacourt/gists{/gist_id}", "starred_url": "https://api.github.com/users/DianeBouchacourt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DianeBouchacourt/subscriptions", "organizations_url": "https://api.github.com/users/DianeBouchacourt/orgs", "repos_url": "https://api.github.com/users/DianeBouchacourt/repos", "events_url": "https://api.github.com/users/DianeBouchacourt/events{/privacy}", "received_events_url": "https://api.github.com/users/DianeBouchacourt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts and @younesbelkada ", "This seems to be the right fix, I will check the original repo again and get back to you\r\n", "The attention head is indeed seems to be 12 instead of 8: https://github.com/salesforce/BLIP/blob/3a29b7410476bf5f2ba0955827390eb6ea1f4f9d/configs/bert_config.json#L14 / https://github.com/salesforce/BLIP/blob/3a29b7410476bf5f2ba0955827390eb6ea1f4f9d/configs/med_config.json#L14 \r\nWe'll need to modify the `num_attention_heads` of all blip models as they all use the same base text model. \r\nDoctests and slow tests might need to be adapted accordingly. ", "Thanks a lot @DianeBouchacourt for the great catch\r\nEverything should have been updated correctly, now if you load again your model, you should see the correct results!" ]
1,680
1,680
1,680
NONE
null
### System Info

I was trying to replicate BLIP results on the VG Relation dataset as in https://github.com/mertyg/vision-language-models-are-bows. I was using `BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")` following this example: https://huggingface.co/docs/transformers/model_doc/blip#transformers.BlipForImageTextRetrieval

The results were much poorer, and after a lot of digging it turned out that this is because `num_attention_heads` in the current HuggingFace implementation is 8 instead of 12 as in https://github.com/mertyg/vision-language-models-are-bows (and as is common in BLIP). I manually changed it via

```
from transformers import AutoProcessor, BlipForImageTextRetrieval

model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
tmp_config = model.config
tmp_config.text_config.num_attention_heads = 12
model = BlipForImageTextRetrieval.from_pretrained(
    "Salesforce/blip-itm-base-coco", config=tmp_config
)
```

and matched the results on VGR. For example, if you run the doc example (with a slightly changed caption):

```
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, BlipForImageTextRetrieval

model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "two cats on a couch"

inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model(**inputs)
print(torch.softmax(outputs["itm_score"], dim=-1))
```

![image](https://user-images.githubusercontent.com/13796686/230425156-008d7ae2-d947-4f63-bfa1-8688f1e870c4.png)

**With 8 heads you get scores of `[[0.9489, 0.0511]]` for the above image, i.e. the caption `"two cats on a couch"` is judged not relevant, whereas with 12 heads you get scores of `[[0.1727, 0.8273]]`, i.e. the caption is judged very relevant (which is the correct answer).**

Proposed fix: **I suggest we switch it to 12 by default.**
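A quick way to confirm whether a given checkpoint on the Hub has picked up the corrected value (a small sketch, assuming the config was fixed in place on the Hub as the comments describe):

```python
from transformers import BlipConfig

config = BlipConfig.from_pretrained("Salesforce/blip-itm-base-coco")
# After the Hub-side fix mentioned in the comments, this should print 12.
print(config.text_config.num_attention_heads)
```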
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22625/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22625/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22624
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22624/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22624/comments
https://api.github.com/repos/huggingface/transformers/issues/22624/events
https://github.com/huggingface/transformers/issues/22624
1,657,584,344
I_kwDOCUB6oc5izLrY
22,624
Decoding with language model issue
{ "login": "ngawang88", "id": 62231990, "node_id": "MDQ6VXNlcjYyMjMxOTkw", "avatar_url": "https://avatars.githubusercontent.com/u/62231990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ngawang88", "html_url": "https://github.com/ngawang88", "followers_url": "https://api.github.com/users/ngawang88/followers", "following_url": "https://api.github.com/users/ngawang88/following{/other_user}", "gists_url": "https://api.github.com/users/ngawang88/gists{/gist_id}", "starred_url": "https://api.github.com/users/ngawang88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ngawang88/subscriptions", "organizations_url": "https://api.github.com/users/ngawang88/orgs", "repos_url": "https://api.github.com/users/ngawang88/repos", "events_url": "https://api.github.com/users/ngawang88/events{/privacy}", "received_events_url": "https://api.github.com/users/ngawang88/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi ", "Hey @ngawang88, sorry for the late reply, but unfortunately the code snippet is currently un-reproducible (it leverages a local model which I cannot load). Without the full stack trace I can't see precisely where the code is failing. If you're able to share a reproducible code snippet (e.g. one that loads a model from the HF Hub) and share the full stack trace, I'd be able to have a deeper look. Thanks", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
### System Info

- `transformers` version: 4.26.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.0
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.11.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```python
from datasets import load_dataset, Dataset, Audio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ProcessorWithLM, AutoProcessor
import torch

def process_audio(filepath, model_id):
    print(filepath)
    model = Wav2Vec2ForCTC.from_pretrained("media/model/transformer/")
    processor = Wav2Vec2Processor.from_pretrained("media/model/transformer/")
    audio_file_path = [filepath]
    audio_data = Dataset.from_dict({"audio": audio_file_path}).cast_column("audio", Audio())
    audio_data = audio_data.cast_column("audio", Audio(sampling_rate=16_000))

    def prepare_dataset(batch):
        audio = batch["audio"]
        batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
        batch["input_length"] = len(batch["input_values"])
        return batch

    test_dataset = audio_data.map(prepare_dataset)
    input_dict = processor(test_dataset[0]["input_values"], return_tensors="pt", padding=True)
    logits = model(input_dict.input_values).logits
    if int(model_id) == 1:
        pred_ids = torch.argmax(logits, dim=-1)[0]
        return processor.decode(pred_ids)
    elif int(model_id) == 2:
        processorLM = AutoProcessor.from_pretrained("media/model/transformer/")
        vocab_dict = processorLM.tokenizer.get_vocab()
        sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}
        decoder = build_ctcdecoder(labels=list(sorted_vocab_dict.keys()), kenlm_model_path="media/model/transformer/5gram_correct.arpa")
        processor_with_lm = Wav2Vec2ProcessorWithLM(feature_extractor=processor.feature_extractor, tokenizer=processorLM.tokenizer, decoder=decoder)
        return processor_with_lm.batch_decode(logits.detach().numpy()).text
```

### Expected behavior

I have been using the code above to decode my wav2vec2 automatic speech recognition model, and it works fine on Google Colab. But when I move to Django to deploy it on a website, the `build_ctcdecoder` path (the `processorLM = AutoProcessor.from_pretrained(...)` through `processor_with_lm.batch_decode(...)` block above) gives `UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 174: character maps to <undefined>`.

Note: I am using the Dzongkha language, which is UTF-8 encoded.
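A plausible explanation for the `'charmap' codec` error above: on Windows, Python frequently defaults to the cp1252 codec for text files, so a UTF-8 ARPA file read somewhere along the decoder-building path can fail in exactly this way. Two hedged things to try, neither confirmed as a pyctcdecode fix:

```python
# 1) Force UTF-8 mode for the whole interpreter (PEP 540) before launching
#    Django, e.g. `set PYTHONUTF8=1` in cmd, or via the service environment.
# 2) Verify the ARPA file really is valid UTF-8:
with open("media/model/transformer/5gram_correct.arpa", encoding="utf-8") as f:
    f.read()  # raises UnicodeDecodeError here if the file isn't clean UTF-8
```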
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22624/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22623
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22623/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22623/comments
https://api.github.com/repos/huggingface/transformers/issues/22623/events
https://github.com/huggingface/transformers/pull/22623
1,657,573,710
PR_kwDOCUB6oc5Nxt5S
22,623
docs: Fix broken link to generation strategies
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,682
1,680
CONTRIBUTOR
null
# What does this PR do? Addresses fixing the broken link from clicking [here](https://huggingface.co/docs/transformers/main_classes/text_generation#:~:text=generate%E2%80%99.%20To%20learn%20more%20about%20decoding%20strategies%20refer%20to%20the-,text%20generation%20strategies%20guide,-.) Link should direct to `https://huggingface.co/docs/transformers/generation_strategies` instead of `https://huggingface.co/docs/transformers/main_classes/generation_strategies` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> (no issue filed) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22623/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22623", "html_url": "https://github.com/huggingface/transformers/pull/22623", "diff_url": "https://github.com/huggingface/transformers/pull/22623.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22623.patch", "merged_at": 1680796131000 }
https://api.github.com/repos/huggingface/transformers/issues/22622
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22622/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22622/comments
https://api.github.com/repos/huggingface/transformers/issues/22622/events
https://github.com/huggingface/transformers/issues/22622
1,657,538,633
I_kwDOCUB6oc5izAhJ
22,622
No TPU found colab
{ "login": "sr5434", "id": 118690585, "node_id": "U_kgDOBxMTGQ", "avatar_url": "https://avatars.githubusercontent.com/u/118690585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sr5434", "html_url": "https://github.com/sr5434", "followers_url": "https://api.github.com/users/sr5434/followers", "following_url": "https://api.github.com/users/sr5434/following{/other_user}", "gists_url": "https://api.github.com/users/sr5434/gists{/gist_id}", "starred_url": "https://api.github.com/users/sr5434/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sr5434/subscriptions", "organizations_url": "https://api.github.com/users/sr5434/orgs", "repos_url": "https://api.github.com/users/sr5434/repos", "events_url": "https://api.github.com/users/sr5434/events{/privacy}", "received_events_url": "https://api.github.com/users/sr5434/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Flax didn't find the TPU, but the actual error is a connection error when downloading the datasets.", "How do I fix it?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
### System Info

Transformers installed from source.

### Who can help?

_No response_

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

Steps to reproduce:

1. Use xla_spawn to run the run_clm.py script.

Colab link: https://colab.research.google.com/drive/1U1kcI4gXPhEeAZkichOejw0ceNlX4E6z?usp=share_link

### Expected behavior

It should detect and run on the Colab TPU, but no TPU is found. I am using a custom text dataset (created in the Colab notebook). When I test whether torch XLA detects the TPU, it does; I am able to create tensors on the TPU, yet the training script still reports that no TPU is found.
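A minimal torch_xla sanity check along the lines the reporter describes (device naming varies across torch_xla versions, so treat the printed value as indicative):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()            # fails or falls back if no XLA device is attached
t = torch.ones(2, 2, device=device)
print(device, t.device)             # e.g. "xla:1 xla:1" on a Colab TPU runtime
```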
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22622/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22621/comments
https://api.github.com/repos/huggingface/transformers/issues/22621/events
https://github.com/huggingface/transformers/issues/22621
1,657,495,237
I_kwDOCUB6oc5iy17F
22,621
TypeError: create_repo() got an unexpected keyword argument 'organization'
{ "login": "SzaboGergo01", "id": 79022886, "node_id": "MDQ6VXNlcjc5MDIyODg2", "avatar_url": "https://avatars.githubusercontent.com/u/79022886?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SzaboGergo01", "html_url": "https://github.com/SzaboGergo01", "followers_url": "https://api.github.com/users/SzaboGergo01/followers", "following_url": "https://api.github.com/users/SzaboGergo01/following{/other_user}", "gists_url": "https://api.github.com/users/SzaboGergo01/gists{/gist_id}", "starred_url": "https://api.github.com/users/SzaboGergo01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SzaboGergo01/subscriptions", "organizations_url": "https://api.github.com/users/SzaboGergo01/orgs", "repos_url": "https://api.github.com/users/SzaboGergo01/repos", "events_url": "https://api.github.com/users/SzaboGergo01/events{/privacy}", "received_events_url": "https://api.github.com/users/SzaboGergo01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You need to update your installation of `huggingface_hub`: `pip install --upgrade huggingface_hub`.", "I tried to update but got the same error message.\r\nAnyway, I'm checking now to see if it's a new update, I mean it came out yesterday, but no, it doesn't work either way.", "Oh I'm sorry, that's a deprecated argument. You need to pass the organization with the model name now:\r\n`model.save_to_hub(\"gszabo/sent_bert\", ...)`", "Unfortunately, it's not good either...\r\n\r\n```python\r\nimport huggingface_hub\r\nhuggingface_hub.__version__\r\n```\r\nAnd its version: 0.13.4\r\n", "What is the error message you are getting?", "I got the same one\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[<ipython-input-17-9445b27a9463>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 model.save_to_hub(\"gszabo/sent_bert\", \r\n 2 organization=\"gszabo\",\r\n 3 train_datasets=[\"gszabo/sentence-compression\"],\r\n 4 exist_ok=True,\r\n 5 )\r\n\r\n1 frames\r\n[/usr/local/lib/python3.9/dist-packages/sentence_transformers/SentenceTransformer.py](https://localhost:8080/#) in save_to_hub(self, repo_name, organization, private, commit_message, local_model_path, exist_ok, replace_model_card, train_datasets)\r\n 465 \r\n 466 endpoint = \"https://huggingface.co/\"\r\n--> 467 repo_url = HfApi(endpoint=endpoint).create_repo(\r\n 468 token,\r\n 469 repo_name,\r\n\r\n[/usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs)\r\n 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n 119 \r\n--> 120 return fn(*args, **kwargs)\r\n 121 \r\n 122 return _inner_fn # type: ignore\r\n\r\nTypeError: create_repo() got an unexpected keyword argument 'organization'\r\n```", "I am getting the same error for the same environment", "Yes, you need to remove the `organization` argument, it is not accepted anymore. The organization should be part of your repo ID now.\r\n\r\n```py\r\nmodel.save_to_hub(\"gszabo/sent_bert\", train_datasets=[\"gszabo/sentence-compression\"], exist_ok=True)\r\n```", "Also please open issues related to sentence-transformers in that repo, there is only so much I can do to help since I don't know it at all and don't maintain it. Closing it here, if you have further issues please go [there](https://github.com/UKPLab/sentence-transformers) :-)", "I think the problem here is a version mismatch between `transformers` and `huggigface_hub`. \r\n\r\nFor eg, a `transformers` version 4.17.0, will not work with a `huggingface_hub` version of 0.14.1. \r\n\r\nSpecifically, [L2954](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/file_utils.py#L2954) has the offending call. \r\n\r\nSo we have to check when this tag was released, for 4.17.0, the date is March 3, 2022. The appropriate `huggingface_hub` version is probably `0.5.0`, but then you would have to downgrade `datasets`, which depending on the version, might have a wrong filter implementation ([this issue](https://github.com/huggingface/datasets/pull/2947)).\r\n\r\nIn summary, if you are not updated to the latest transformers + datasets + huggingface_hub, and using all of them, you might break your code." ]
1,680
1,685
1,681
NONE
null
### System Info

My environment:

```
- `transformers` version: 4.27.4
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyTorch version (GPU?): 2.0.0+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.8 (gpu)
- Jax version: 0.4.7
- JaxLib version: 0.4.7
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```

### Who can help?

_No response_

### Information

- [X] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

```python
model.save_to_hub("sent_bert",
    organization="gszabo",
    train_datasets=["gszabo/sentence-compression"],
    exist_ok=True,
)
```

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
[<ipython-input-13-cbde1c79e18f>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model.save_to_hub("sent_bert",
      2     organization="gszabo",
      3     train_datasets=["gszabo/sentence-compression"],
      4     exist_ok=True,
      5 )

1 frames
[/usr/local/lib/python3.9/dist-packages/sentence_transformers/SentenceTransformer.py](https://localhost:8080/#) in save_to_hub(self, repo_name, organization, private, commit_message, local_model_path, exist_ok, replace_model_card, train_datasets)
    465
    466         endpoint = "https://huggingface.co/"
--> 467         repo_url = HfApi(endpoint=endpoint).create_repo(
    468             token,
    469             repo_name,

[/usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs)
    118             kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
    119
--> 120             return fn(*args, **kwargs)
    121
    122     return _inner_fn  # type: ignore

TypeError: create_repo() got an unexpected keyword argument 'organization'
```

### Expected behavior

I used Google Colab. I wanted to fine-tune a sentence-BERT model (`sentence_transformers`) with my own data and then push it to the HF Hub. I created a public repo on the Hub in my account under the name `sent_bert`, and before that I also logged in to Colab with `notebook_login()`. I then created the model and called the `save_to_hub()` function, which raised the error above. I tried to use the `push_to_hub()` function as well, but it doesn't support sentence-transformers models. Has anyone encountered something similar, or do you know a solution to this? For reference, I pretty much followed [these steps](https://huggingface.co/blog/how-to-train-sentence-transformers):
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22621/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22620
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22620/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22620/comments
https://api.github.com/repos/huggingface/transformers/issues/22620/events
https://github.com/huggingface/transformers/pull/22620
1,657,467,699
PR_kwDOCUB6oc5NxYR-
22,620
Add TensorFlow implementation of EfficientFormer
{ "login": "D-Roberts", "id": 4791217, "node_id": "MDQ6VXNlcjQ3OTEyMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4791217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/D-Roberts", "html_url": "https://github.com/D-Roberts", "followers_url": "https://api.github.com/users/D-Roberts/followers", "following_url": "https://api.github.com/users/D-Roberts/following{/other_user}", "gists_url": "https://api.github.com/users/D-Roberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/D-Roberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/D-Roberts/subscriptions", "organizations_url": "https://api.github.com/users/D-Roberts/orgs", "repos_url": "https://api.github.com/users/D-Roberts/repos", "events_url": "https://api.github.com/users/D-Roberts/events{/privacy}", "received_events_url": "https://api.github.com/users/D-Roberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22620). All of your documentation changes will be reflected on that endpoint.", "cc @Rocketknight1 ", "Hi @D-Roberts, just letting you know the TF team at Hugging Face is aware of this and definitely interested in the port! Please ping me or @gante whenever it's ready for review, or if you run into any issues while porting.", "@Rocketknight1 @gante This PR is now ready for review.", "cc @amyeroberts for core maintainer review as well", "@Rocketknight1 @amyeroberts I addressed your comments and also submitted two PRs for the l1 and l3 weights (and tagged Rocketknight1). Let me know what's next!", "@D-Roberts - that's great! \r\n\r\nFor the CI - it seems there is an issue with your CircleCI permissions, as the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Once all the tests are green, we'll be ready for final reviews :) ", "@amyeroberts Thanks for pointing out the circle ci fix. It appears that one doc test which (rightly) can't find tf weights is failing for now. I added back the `from_pt` in the model tests for the sake of ci tests until the tf weights get merged.", "@D-Roberts Just to let you know, we've reached out to the team at Snap to ask them to merge your PRs on the EfficientFormer checkpoints. Sorry for the delay!", "@D-Roberts the checkpoint PRs should be merged now. Thank you to @alanspike for the quick response!", "@amyeroberts @Rocketknight1 All local tests pass with the new tf weights. The CI gets this documentation tests failing; the pt version also predicts 281 which maps to label_281 in config.", "@D-roberts I think it's fine to swap those tests for just checking the actual argmax index rather than the `id2label` string value. Obviously the repository config doesn't actually have the `id2label` values set, so fixing that would require another PR to the repos.", "@Rocketknight1 @amyeroberts Alright. :)", "LGTM now - I'm happy to merge as soon as you and @amyeroberts are!", "Hi @Rocketknight1 , I've addressed the last comments from @amyeroberts and had all tests pass. I am ready for merge whenever you are. I've just rebased to upstream and there are some unrelated ci tests failing, though last night everything was green.", "@Rocketknight1 All green again. :)\r\n", "@sgugger @amyeroberts @Rocketknight1 I was wondering - when do you plan a transformers release that includes this code? ", "@D-Roberts We release [roughly once a month](https://github.com/huggingface/transformers/releases) and are planning on releasing 4.30 later this week. If you need it right now, it's possible to [install from source](https://huggingface.co/docs/transformers/installation#install-from-source) to have the `main` version too. " ]
1,680
1,686
1,685
CONTRIBUTOR
null
# What does this PR do?

* Adds the TensorFlow port of the EfficientFormer computer vision model (not an LLM port).
* Fixes some minor typos and a couple of differences in the PyTorch model code:
  1) The non-dict / tuple return was not returning the last hidden state but the state before the last stage. The dict and tuple returns of the encoder should be equivalent, as seen in other [models](https://github.com/huggingface/transformers/blob/main/src/transformers/models/poolformer/modeling_poolformer.py#L258).
  2) Two layernorms were not using the config eps (assuming that the config is the ground truth). Let me know what you think about this.

Ran tests (CPU-only, all pass) with:

`NVIDIA_TF32_OVERRIDE=1 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 py.test -vv -rA tests/models/efficientformer/test_modeling_tf_efficientformer.py`

Double-checked the PT and TF architecture code against the "[EfficientFormer: Vision Transformers at MobileNet Speed](https://proceedings.neurips.cc/paper_files/paper/2022/hash/5452ad8ee6ea6e7dc41db1cbd31ba0b8-Abstract-Conference.html)" paper.

Verified on an example image the shapes and diffs of the hidden states:

```
import numpy as np
import torch
from PIL import Image

from transformers import EfficientFormerImageProcessor
from src.transformers.models.efficientformer.modeling_tf_efficientformer import TFEfficientFormerModel
from src.transformers.models.efficientformer.modeling_efficientformer import EfficientFormerModel

model_tf = TFEfficientFormerModel.from_pretrained("snap-research/efficientformer-l1-300", from_pt=True)
model_pt = EfficientFormerModel.from_pretrained("snap-research/efficientformer-l1-300")

image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
proc = EfficientFormerImageProcessor.from_pretrained("snap-research/efficientformer-l1-300")
inputstf = proc(images=image, return_tensors="tf")
inputspt = proc(images=image, return_tensors="pt")

outtf = model_tf(**inputstf, output_hidden_states=True, training=False)
with torch.no_grad():
    outpt = model_pt(**inputspt, output_hidden_states=True)

max_diff = np.amax(np.abs(outtf[0].numpy() - outpt[0].numpy()))
print(f"last hidden diff shape: {outtf[0].shape}, last hidden diff: {max_diff}, last hidden <= 1e-4, {max_diff <= 1e-4}")

for i in range(7):
    max_diff = np.amax(np.abs(outtf[1][i].numpy() - outpt[1][i].numpy()))
    print(f"hidden state {i} shape: {outtf[1][i].shape}, diff: {max_diff}, max_diff <= 1e-4: {max_diff <= 1e-4}")
```

which gives:

```
last hidden diff shape: (1, 49, 448), last hidden diff: 2.1457672119140625e-05, last hidden <= 1e-4, True
hidden state 0 shape: (1, 48, 56, 56), diff: 7.271766662597656e-06, max_diff <= 1e-4: True
hidden state 1 shape: (1, 48, 56, 56), diff: 5.054473876953125e-05, max_diff <= 1e-4: True
hidden state 2 shape: (1, 96, 28, 28), diff: 2.9087066650390625e-05, max_diff <= 1e-4: True
hidden state 3 shape: (1, 96, 28, 28), diff: 2.3603439331054688e-05, max_diff <= 1e-4: True
hidden state 4 shape: (1, 224, 14, 14), diff: 1.6689300537109375e-05, max_diff <= 1e-4: True
hidden state 5 shape: (1, 224, 14, 14), diff: 4.1961669921875e-05, max_diff <= 1e-4: True
hidden state 6 shape: (1, 448, 7, 7), diff: 1.9550323486328125e-05, max_diff <= 1e-4: True
```

<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. 
List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22620/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22620/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22620", "html_url": "https://github.com/huggingface/transformers/pull/22620", "diff_url": "https://github.com/huggingface/transformers/pull/22620.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22620.patch", "merged_at": 1685526193000 }
https://api.github.com/repos/huggingface/transformers/issues/22619
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22619/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22619/comments
https://api.github.com/repos/huggingface/transformers/issues/22619/events
https://github.com/huggingface/transformers/pull/22619
1,657,415,061
PR_kwDOCUB6oc5NxNpl
22,619
Add TimmBackbone model
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh Merging now. I'll address any comments or requests for changes in a follow up PR." ]
1,680
1,686
1,686
COLLABORATOR
null
# What does this PR do?

Adds a new model, `TimmBackbone`, for loading timm weights through the AutoBackbone API.

Example usage:

```
from transformers import AutoBackbone

# Loads a transformers model
backbone = AutoBackbone.from_pretrained("microsoft/resnet-18")

# Loads a timm checkpoint
backbone = AutoBackbone.from_pretrained("resnet18", use_timm_backbone=True)
```

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
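To illustrate what the loaded backbone returns at inference time, a brief usage sketch (the dummy input is ours; `feature_maps` follows the library's `BackboneOutput`):

```python
import torch
from transformers import AutoBackbone

backbone = AutoBackbone.from_pretrained("microsoft/resnet-18")
pixel_values = torch.rand(1, 3, 224, 224)  # dummy image batch

outputs = backbone(pixel_values)
# feature_maps is a tuple with one feature tensor per returned stage,
# ready to feed into a detection or segmentation head.
for feature_map in outputs.feature_maps:
    print(feature_map.shape)
```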
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22619/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22619", "html_url": "https://github.com/huggingface/transformers/pull/22619", "diff_url": "https://github.com/huggingface/transformers/pull/22619.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22619.patch", "merged_at": 1686067891000 }
https://api.github.com/repos/huggingface/transformers/issues/22618
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22618/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22618/comments
https://api.github.com/repos/huggingface/transformers/issues/22618/events
https://github.com/huggingface/transformers/pull/22618
1,657,403,862
PR_kwDOCUB6oc5NxLaw
22,618
Fix docstrings for TF BLIP
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Rocketknight1 Thanks for adding tf-blip. \r\nPS. in case you're interested to contribute https://github.com/keras-team/keras-nlp/issues/941", "@Rocketknight1 I tried to re-run the CI, but it still fails. Could you push an empty commit to trigger it maybe?", "_The documentation is not available anymore as the PR was closed or merged._", "@ydshieh tests are passing now! Can you approve the PR?", "@Rocketknight1 When I run the doctest\r\n\r\n```python\r\npython3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/blip/modeling_tf_blip.py::transformers.models.blip.modeling_tf_blip.TFBlipForConditionalGeneration.generate -sv --doctest-continue-on-failure --doctest-glob=\"*.mdx\"\r\n```\r\nI got\r\n```\r\nExpected:\r\n two cats are laying on a couch\r\nGot:\r\n two cats sleeping on a couch\r\n```", "Also, the changes are not just docstrings. After looking at the changes, I am wondering why we don't have CI failures (i.e. the usual testing).\r\n\r\nIt turns out that we do have some failures\r\nhttps://github.com/huggingface/transformers/actions/runs/4653835201/jobs/8235101265\r\n\r\n@Rocketknight1 Do you want to verify/fix those in this same PR? (probably already fixed as CircleCI is green?)", "Oh, that's very odd - I wonder why it's not visible in the CI? I'll take a look!", "Thank you @Rocketknight1 . I am also confused why CircleCI is green but failed on daily CI.", "@ydshieh tests should pass now! The cause was some expected values in the tests being wrong. I copied the right ones from the torch tests and now everything is passing locally, so hopefully the CI will agree.", "The doctests are weird, though - I think some of them were broken in PyTorch too. Working on it!", "Thanks @Rocketknight1 I will double check. But do you figure out (some of) tests pass on CircleCI but fails on daily CI - the pt<->tf equivalence tests also run on CircleCI. We should see they fail on it (as they fail on daily CI).", "For the doctest, we still get \r\n\r\n```python\r\nExpected:\r\n two cats are laying on a couch\r\nGot:\r\n two cats sleeping on a couch\r\n```\r\nfor `TFBlipForConditionalGeneration.generate`.\r\n\r\nRegarding the modeling tests - all tests pass now 🥳 ", "@ydshieh should be resolved now!" ]
1,680
1,681
1,681
MEMBER
null
Some of the docstrings were still a bit PyTorchy, this is fixed now! (cc @ydshieh)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22618/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22618/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22618", "html_url": "https://github.com/huggingface/transformers/pull/22618", "diff_url": "https://github.com/huggingface/transformers/pull/22618.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22618.patch", "merged_at": 1681318002000 }
https://api.github.com/repos/huggingface/transformers/issues/22617
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22617/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22617/comments
https://api.github.com/repos/huggingface/transformers/issues/22617/events
https://github.com/huggingface/transformers/issues/22617
1,657,368,723
I_kwDOCUB6oc5iyXCT
22,617
Error while installing dev dependencies for Apple Silicon
{ "login": "xssChauhan", "id": 9297805, "node_id": "MDQ6VXNlcjkyOTc4MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/9297805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xssChauhan", "html_url": "https://github.com/xssChauhan", "followers_url": "https://api.github.com/users/xssChauhan/followers", "following_url": "https://api.github.com/users/xssChauhan/following{/other_user}", "gists_url": "https://api.github.com/users/xssChauhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/xssChauhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xssChauhan/subscriptions", "organizations_url": "https://api.github.com/users/xssChauhan/orgs", "repos_url": "https://api.github.com/users/xssChauhan/repos", "events_url": "https://api.github.com/users/xssChauhan/events{/privacy}", "received_events_url": "https://api.github.com/users/xssChauhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts and @nateraw ", "Hmm maybe its the Python version? Some issue mentioning that here: https://github.com/dmlc/gluon-cv/issues/1539", "The issue is mainly because of Apple Silicon. `decord` does not provide any built wheels for apple silicon, and hence cannot be found using pip. I had to build it from source and then install the python bindings.\r\n\r\nSimilar issue arises for `tensorflow-text` since it also does not provide any built wheels for apple silicon, and has to be built from scratch. I used a community built wheel from [here](https://github.com/sun1638650145/Libraries-and-Extensions-for-TensorFlow-for-Apple-Silicon/releases).\r\n\r\nI think the docs should be updated to account for these issues.", "I see. we should add a note then that in some cases you may need to install `decord` from source, and link to any related issues.\r\n\r\nOr, perhaps we migrate fully to `pyav` at this point, given we started to do that here: #21572 (since decord is no longer being actively maintained and these issues will never go away).\r\n\r\nWDYT?", "My dev setup on apple silicon failed with \r\n```\r\nERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11\r\nERROR: Could not find a version that satisfies the requirement jaxlib<=0.3.6,>=0.1.65; extra == \"dev\" (from transformers[dev]) (from versions: 0.3.24, 0.3.25, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.6, 0.4.7)\r\nERROR: No matching distribution found for jaxlib<=0.3.6,>=0.1.65; extra == \"dev\"\r\n```\r\n\r\nAfter a bit of hunting found success by installing jaxlib through conda : https://github.com/google/jax/issues/5501#issuecomment-1032891169 \r\n\r\nMaybe it helps someone \r\n\r\n\r\n", "@nateraw Migrating fully to `pyav` is indeed the correct thing to do since the migration has already begun.\r\n\r\nThere are still other issues with setting up dev env on apple silicon, and setting it up correctly should be part of docs. It took me some time to correctly install the entire `dev` env. Following is the list of issues and solutions that worked for me for `python 3.9`:\r\n\r\n- `decord`\r\n - `Problem`: No prebuilt wheels for apple silicon.\r\n - `Solution`: Building locally, and installing python bindings. \r\n - `Action`: Complete migration to `pyav`.\r\n- `tensorflow`, `tensorflow-*`\r\n - `Problem`: `tensorflow`, `tensorflow-*` are not directly installable for macOS. \r\n - `Solution`: Need to install `tensorflow-deps` from conda apple channel. This has already been highlighted in a previous issue #18355. Install `tensorflow-macos`( instead of `tensorflow`).\r\n - `Action`: \r\n - `setup.py`should be able to detect if the dev env is being setup on apple silicon, and install `tensorflow-macos`instead of `tensorflow`. 
\r\n - Docs should account for setting up the dev env on apple silicon.\r\n- `pip`\r\n - `Problem`: `pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 20000`\r\n - Not sure if others have faced this issue, but this could also just be my machine.\r\n - `pip` was unable to resolve the dev dependencies in about 6 hours, and failed with the above error.\r\n - This could be because some package versions are not aligned for apple silicon correctly.\r\n - `Solution`: \r\n - Follow the steps mentioned in issue #18355.\r\n - Install and build `decord`.\r\n - Install `tensorflow-text` from [here](https://github.com/sun1638650145/Libraries-and-Extensions-for-TensorFlow-for-Apple-Silicon/releases)\r\n - Run `pip install -e \".[dev]\" --use-deprecated=legacy-resolver`\r\n - Run `pip3 install lxml`\r\n - Some package versions were still not correctly resolved, and tests were failing, along with the `make fixup`, etc. commands. So, I had to install specific versions of the following packages from pip: `jax==0.4.7`, `numpy==1.23`.\r\n - `onnx` had to be installed using the instructions [here](https://github.com/onnx/onnx/issues/3621#issuecomment-890351498).\r\n - `Action`:\r\n - I need a sanity check on the pip errors. Am I the only one who faced this, or is this reproducible for other `M2` users?\r\n - Possible update of `setup.py` to account for apple silicon, and guides in the docs.\r\n\r\n\r\nAfter these steps, the dev environment is finally working for me, along with the tests and other commands. But this took way too long to get working. I also tried setting up a `vscode devcontainer` for the dev dependencies, but jaxlib still does not provide `manylinux aarch64` wheels yet.\r\n\r\nHow can we proceed here? I want to actively contribute towards solving these issues :)\r\n", "@mayankagarwals Did you face the issues highlighted in the above comment? [Link](https://github.com/huggingface/transformers/issues/22617#issuecomment-1501078224)", "For most contributions, you only need to run `pip install -e .[\"quality\"]`, but we do need TensorFlow and Jax for the complete quality checks (as we have many models in both those frameworks). But if you make contributions that do not require them (e.g. you're not touching a TensorFlow or Flax model) you will be fine.", "@nateraw I could already start working on completing the migration from `decord` to `pyav`.\r\n\r\nWhat do you think about the other set of problems I pointed out?", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @sgugger, I tried installing transformers using `pip install -e '.[quality]'` and it failed to build the wheel for `safetensors`.\r\n\r\nI am facing this on M1 Pro, Python 3.10\r\n\r\n<img width=\"1507\" alt=\"Screenshot 2023-08-12 at 7 45 40 PM\" src=\"https://github.com/huggingface/transformers/assets/26519539/08e5ddb8-af1d-423c-9acf-dd2b3a44bb95\">\r\n", "cc @Narsil ", "@tanaymeh \r\n\r\nCan you try doing `pip install -U safetensors` to confirm the bug occurs there ?\r\nVersion `0.3.2` should be precompiled for Python 3.10 macos 13 on ARM (m1)...\r\nhttps://pypi.org/project/safetensors/0.3.2/#files\r\n\r\nIf the command does work, you should just use that.\r\nIf that doesn't work, I'll investigate why your environment is not picking it up.", "As of [`transformers==4.36.2`](https://github.com/huggingface/transformers/tree/v4.36.2), on macOS 13.5.2 on a MacBook Pro with an M1 chip with Python 3.11.7, I was able to:\r\n\r\n```bash\r\npip install -e .[quality,testing,docs_specific,sentencepiece,torch]\r\n```\r\n\r\nThis is enough to run _some_ of the `pytest` cases without `ImportError`s\r\n\r\nI am still unable to install `tensorflow-text` as of its latest `v2.15.0`. https://github.com/tensorflow/text/issues/823 links to some instructions on how to manually build, for those who need it" ]
1,680
1,705
1,684
CONTRIBUTOR
null
### System Info - `transformers` version: 4.24.0 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.10.10 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Follow contribution guidelines as outlined [here](https://huggingface.co/docs/transformers/contributing#create-a-pull-request), at the `pip install -e ".[dev]"` step and results in the following output and error: ``` $ pip install -e ".[dev]" Obtaining file:///Users/eipizero/Documents/Code/transformers Installing build dependencies ... done Checking if build backend supports build_editable ... done Getting requirements to build editable ... done Installing backend dependencies ... done Preparing editable metadata (pyproject.toml) ... done Collecting tqdm>=4.27 Using cached tqdm-4.65.0-py3-none-any.whl (77 kB) Collecting packaging>=20.0 Using cached packaging-23.0-py3-none-any.whl (42 kB) Collecting pyyaml>=5.1 Using cached PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl (173 kB) Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 Using cached tokenizers-0.13.3-cp310-cp310-macosx_12_0_arm64.whl (3.9 MB) Collecting huggingface-hub<1.0,>=0.11.0 Using cached huggingface_hub-0.13.3-py3-none-any.whl (199 kB) Collecting requests Using cached requests-2.28.2-py3-none-any.whl (62 kB) Collecting filelock Using cached filelock-3.10.7-py3-none-any.whl (10 kB) Collecting numpy>=1.17 Using cached numpy-1.24.2-cp310-cp310-macosx_11_0_arm64.whl (13.9 MB) Collecting regex!=2019.12.17 Using cached regex-2023.3.23-cp310-cp310-macosx_11_0_arm64.whl (288 kB) Collecting optax>=0.0.8 Using cached optax-0.1.4-py3-none-any.whl (154 kB) Collecting pyctcdecode>=0.4.0 Using cached pyctcdecode-0.5.0-py2.py3-none-any.whl (39 kB) Collecting nltk Using cached nltk-3.8.1-py3-none-any.whl (1.5 MB) Collecting torch!=1.12.0,>=1.9 Using cached torch-2.0.0-cp310-none-macosx_11_0_arm64.whl (55.8 MB) Collecting jax!=0.3.2,<=0.3.6,>=0.2.8 Using cached jax-0.3.6.tar.gz (936 kB) Preparing metadata (setup.py) ... done Collecting pytest Using cached pytest-7.2.2-py3-none-any.whl (317 kB) Collecting kenlm Using cached kenlm-0.1.tar.gz (424 kB) Preparing metadata (setup.py) ... done Collecting GitPython<3.1.19 Using cached GitPython-3.1.18-py3-none-any.whl (170 kB) Collecting sudachipy>=0.6.6 Using cached SudachiPy-0.6.7-cp310-cp310-macosx_10_12_universal2.whl (2.4 MB) ERROR: Could not find a version that satisfies the requirement decord==0.6.0; extra == "dev" (from transformers[dev]) (from versions: none) ERROR: No matching distribution found for decord==0.6.0; extra == "dev" ``` ### Expected behavior Development dependencies should be installed without error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22617/timeline
completed
null
null
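A quick environment check can save hours when debugging installs like the one in the record above. The snippet below is a minimal sanity-check sketch using only the Python standard library; the packages it inspects (`jax`, `numpy`, `tensorflow-text`, `onnx`) are the ones the reporter had to pin by hand, not an official requirements list.

```python
# Minimal sketch: report installed versions of the packages that caused
# trouble in the Apple Silicon setup discussed above.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("jax", "numpy", "tensorflow-text", "onnx"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```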
https://api.github.com/repos/huggingface/transformers/issues/22616
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22616/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22616/comments
https://api.github.com/repos/huggingface/transformers/issues/22616/events
https://github.com/huggingface/transformers/issues/22616
1,657,336,336
I_kwDOCUB6oc5iyPIQ
22,616
transformers Python module “tokenizers” version does not match the FastChat project's “tokenizers”
{ "login": "SullivanJia", "id": 26838155, "node_id": "MDQ6VXNlcjI2ODM4MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/26838155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SullivanJia", "html_url": "https://github.com/SullivanJia", "followers_url": "https://api.github.com/users/SullivanJia/followers", "following_url": "https://api.github.com/users/SullivanJia/following{/other_user}", "gists_url": "https://api.github.com/users/SullivanJia/gists{/gist_id}", "starred_url": "https://api.github.com/users/SullivanJia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SullivanJia/subscriptions", "organizations_url": "https://api.github.com/users/SullivanJia/orgs", "repos_url": "https://api.github.com/users/SullivanJia/repos", "events_url": "https://api.github.com/users/SullivanJia/events{/privacy}", "received_events_url": "https://api.github.com/users/SullivanJia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm not sure why you are opening the issue here. It's a problem in the dependencies of FastChat.", "Because this makes me very confused about which dependency configuration to refer to when the project is updating so quickly", "This issue can be resolved by installing a specific transformers version (from source):\r\n```\r\npip uninstall transformers\r\npip install git+https://github.com/huggingface/transformers@cae78c46d\r\n```", "After applying the solution, this issue pops up:\r\nFailed to import transformers.models.llama.tokenization_llama_fast because of the following error (look up to see its traceback):\r\nNo module named 'transformers.models.llama.tokenization_llama_fast'\r\n", "`pip install tokenizers==0.13.3` -- this should solve your problem" ]
1,680
1,689
1,680
NONE
null
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.28.0.dev0 - Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. When I installed FastChat, which needs the latest main branch of huggingface/transformers, I found that the tokenizers requirement of https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/tokenization_llama_fast.py ( require_version("tokenizers>=0.13.3") ) does not match the tokenizers version pinned by the latest main branch of FastChat: ``` https://github.com/lm-sys/FastChat/blob/main/pyproject.toml dependencies = [ "accelerate", "fastapi", "gradio==3.23", "markdown2[all]", "numpy", "requests", "sentencepiece", **"tokenizers==0.12.1",** "torch", "uvicorn", "wandb", "transformers @ git+https://github.com/huggingface/transformers.git" ] ``` 2. So, which transformers version and tokenizers version does the current FastChat version (0.1.4) need to match? Errors: ``` Loading base model Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████| 41/41 [00:26<00:00, 1.54it/s] Loading delta Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:19<00:00, 6.47s/it] Traceback (most recent call last): File "/home/jiagy/transformers/src/transformers/utils/import_utils.py", line 1125, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/root/miniconda3/envs/fast-chat/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/home/jiagy/transformers/src/transformers/models/llama/tokenization_llama_fast.py", line 19, in <module> require_version("tokenizers>=0.13.3") File "/home/jiagy/transformers/src/transformers/utils/versions.py", line 117, in require_version _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) File "/home/jiagy/transformers/src/transformers/utils/versions.py", line 50, in _compare_versions raise ImportError( ImportError: tokenizers>=0.13.3 is required for a normal functioning of this module, but found tokenizers==0.12.1. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/root/miniconda3/envs/fast-chat/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/root/miniconda3/envs/fast-chat/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/jiagy/FastChat/fastchat/model/apply_delta.py", line 49, in <module> apply_delta(args.base_model_path, args.target_model_path, args.delta_path) File "/home/jiagy/FastChat/fastchat/model/apply_delta.py", line 19, in apply_delta delta_tokenizer = AutoTokenizer.from_pretrained(delta_path) File "/home/jiagy/transformers/src/transformers/models/auto/tokenization_auto.py", line 691, in from_pretrained tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) File "/home/jiagy/transformers/src/transformers/models/auto/tokenization_auto.py", line 392, in tokenizer_class_from_name return getattr(module, class_name) File "/home/jiagy/transformers/src/transformers/utils/import_utils.py", line 1115, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/home/jiagy/transformers/src/transformers/utils/import_utils.py", line 1127, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.llama.tokenization_llama_fast because of the following error (look up to see its traceback): tokenizers>=0.13.3 is required for a normal functioning of this module, but found tokenizers==0.12.1. ``` ### Expected behavior The transformers tokenizers version matches the FastChat tokenizers version.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22616/timeline
completed
null
null
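The traceback in the record above shows that `transformers.utils.versions.require_version` raises an `ImportError` on a mismatch. A minimal pre-flight check reusing that same helper might look like the sketch below; the pin `tokenizers>=0.13.3` comes straight from the traceback:

```python
# Sketch: fail fast before launching FastChat if the installed tokenizers
# version clashes with transformers' LLaMA fast-tokenizer requirement.
from transformers.utils.versions import require_version

try:
    require_version("tokenizers>=0.13.3")
except ImportError as err:
    print(f"Incompatible environment: {err}")  # e.g. found tokenizers==0.12.1
```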
https://api.github.com/repos/huggingface/transformers/issues/22615
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22615/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22615/comments
https://api.github.com/repos/huggingface/transformers/issues/22615/events
https://github.com/huggingface/transformers/pull/22615
1,657,306,383
PR_kwDOCUB6oc5Nw3i8
22,615
Translated title of fast_tokenizer to test PR
{ "login": "kihoon71", "id": 75935546, "node_id": "MDQ6VXNlcjc1OTM1NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/75935546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kihoon71", "html_url": "https://github.com/kihoon71", "followers_url": "https://api.github.com/users/kihoon71/followers", "following_url": "https://api.github.com/users/kihoon71/following{/other_user}", "gists_url": "https://api.github.com/users/kihoon71/gists{/gist_id}", "starred_url": "https://api.github.com/users/kihoon71/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kihoon71/subscriptions", "organizations_url": "https://api.github.com/users/kihoon71/orgs", "repos_url": "https://api.github.com/users/kihoon71/repos", "events_url": "https://api.github.com/users/kihoon71/events{/privacy}", "received_events_url": "https://api.github.com/users/kihoon71/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22615). All of your documentation changes will be reflected on that endpoint.", "Dear @KIHOON71,\r\n1. Please remove \"Fixes # (issue)\" from the description.\r\n2. Please add [WIP] to the title or change the PR to draft status.\r\n Then reviewers can see that this PR is in progress. :-)\r\nBRs." ]
1,680
1,682
1,682
CONTRIBUTOR
null
# What does this PR do? Firstly, sorry for the late PR; from this week I can handle the rest of the task, so this kind of thing will not happen again. I translated the title of fast_tokenizers.mdx. Part of https://github.com/huggingface/transformers/issues/20179 Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Team PseudoLab, may you please review this PR? @0525hhgus, @wonhyeongseo, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22615/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22615", "html_url": "https://github.com/huggingface/transformers/pull/22615", "diff_url": "https://github.com/huggingface/transformers/pull/22615.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22615.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22614
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22614/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22614/comments
https://api.github.com/repos/huggingface/transformers/issues/22614/events
https://github.com/huggingface/transformers/pull/22614
1,657,290,563
PR_kwDOCUB6oc5Nw0PO
22,614
Add DistilBERTForCausalLM
{ "login": "leonjovanovic", "id": 45070620, "node_id": "MDQ6VXNlcjQ1MDcwNjIw", "avatar_url": "https://avatars.githubusercontent.com/u/45070620?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leonjovanovic", "html_url": "https://github.com/leonjovanovic", "followers_url": "https://api.github.com/users/leonjovanovic/followers", "following_url": "https://api.github.com/users/leonjovanovic/following{/other_user}", "gists_url": "https://api.github.com/users/leonjovanovic/gists{/gist_id}", "starred_url": "https://api.github.com/users/leonjovanovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leonjovanovic/subscriptions", "organizations_url": "https://api.github.com/users/leonjovanovic/orgs", "repos_url": "https://api.github.com/users/leonjovanovic/repos", "events_url": "https://api.github.com/users/leonjovanovic/events{/privacy}", "received_events_url": "https://api.github.com/users/leonjovanovic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22614). All of your documentation changes will be reflected on that endpoint.", "There is no checkpoint available for this task and DistilBERT, which is an encoder model. What is your use case for adding this?", "> There is no checkpoint available for this task and DistilBERT, which is an encoder model. What is your use case for adding this?\r\n\r\n@sgugger To fine-tune DistilBERT models for text generation with the EncoderDecoder class.", "How is it different from using BERT?", "@sgugger It is not possible to create a Transformer model with EncoderDecoderModel using a DistilBERT checkpoint (e.g. BertConfig is supported, but DistilBertConfig is not).\r\nIf I try to create an EncoderDecoder model with DistilBERT checkpoints like this:\r\n`\r\nmodel_name = 'distilbert-base-multilingual-cased'\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(model_name, model_name)\r\n`\r\n\r\nan error is raised: \r\n> ValueError: Unrecognized configuration class for this kind of AutoModel: AutoModelForCausalLM.\r\n> Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig...", "@sgugger \r\n@patrickvonplaten\r\n@ArthurZucker\r\n@younesbelkada\r\n\r\nAnyone looking at this?", "Those are not changes we want everyone to have in DistilBERT: it makes the model code too unreadable just so that you can use it in the EncoderDecoder framework. We can leave the fork open if you want to share it with others, and you can also push it to any repo on the Hub using the dynamic code feature.", "I followed the same code structure as in BERT. It's not only for EncoderDecoder; the current version doesn't allow using DistilBERT for text generation, which can be useful.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
# What does this PR do? Similar to `BertLMHeadModel`, this PR adds a `DistilBertForCausalLM` model in modeling_distilbert.py. Fixes https://github.com/huggingface/transformers/issues/7397 Replaces https://github.com/huggingface/transformers/pull/8387, https://github.com/huggingface/transformers/pull/11085 ## Who can review? @patrickvonplaten @ArthurZucker @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22614/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22614/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22614", "html_url": "https://github.com/huggingface/transformers/pull/22614", "diff_url": "https://github.com/huggingface/transformers/pull/22614.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22614.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22613
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22613/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22613/comments
https://api.github.com/repos/huggingface/transformers/issues/22613/events
https://github.com/huggingface/transformers/pull/22613
1,657,245,537
PR_kwDOCUB6oc5Nwq8_
22,613
allow separate relative attention bias
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks for your PR. Transformers is not a modular toolbox so we never add functionality to existing models like this. If you need this parameter for a new model, you should create a copy of T5 with the add-new-model-like command and just adapt the modeling code.\r\n> \r\n> Or you can just host your slightly modified T5 model on the Hub with the [code on the Hub API](https://huggingface.co/docs/transformers/custom_models).\r\n\r\nThanks for your feedback. No problem, I will close this PR and add another one that has a separate model." ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? It adds support for umT5 models, which need a separate relative attention bias for each layer. The change keeps backward compatibility with previous T5 and MT5 checkpoints. Fixes https://github.com/huggingface/transformers/issues/22573 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Models: - text models: @ArthurZucker and @younesbelkada - @stefan-it
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22613/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22613", "html_url": "https://github.com/huggingface/transformers/pull/22613", "diff_url": "https://github.com/huggingface/transformers/pull/22613.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22613.patch", "merged_at": null }
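For readers unfamiliar with the change requested in the record above: T5/mT5 compute one relative attention bias in the first layer and share it across all layers, while umT5 needs an independent bias per layer. The toy sketch below illustrates only that structural difference; the naive clamping bucketing is an assumption made for brevity, not T5's actual logarithmic bucket scheme.

```python
import torch
import torch.nn as nn

class RelPosBias(nn.Module):
    """Toy T5-style relative-position bias table."""

    def __init__(self, num_buckets: int = 32, num_heads: int = 8):
        super().__init__()
        self.table = nn.Embedding(num_buckets, num_heads)

    def forward(self, qlen: int, klen: int) -> torch.Tensor:
        # Naive bucketing: clamp relative distances into [0, num_buckets).
        rel = torch.arange(klen)[None, :] - torch.arange(qlen)[:, None]
        buckets = rel.clamp(0, self.table.num_embeddings - 1)
        return self.table(buckets).permute(2, 0, 1)  # (heads, qlen, klen)

# T5/mT5 style: one bias, computed once and reused by every layer.
shared_bias = RelPosBias()
# umT5 style (what this PR asked for): an independent bias per layer.
per_layer_bias = nn.ModuleList(RelPosBias() for _ in range(4))
print(shared_bias(3, 3).shape)  # torch.Size([8, 3, 3])
```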
https://api.github.com/repos/huggingface/transformers/issues/22612
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22612/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22612/comments
https://api.github.com/repos/huggingface/transformers/issues/22612/events
https://github.com/huggingface/transformers/issues/22612
1,657,227,373
I_kwDOCUB6oc5ix0ht
22,612
Add `output_hidden_state` and `output_scores` to Flax generate
{ "login": "hannan72", "id": 8229163, "node_id": "MDQ6VXNlcjgyMjkxNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hannan72", "html_url": "https://github.com/hannan72", "followers_url": "https://api.github.com/users/hannan72/followers", "following_url": "https://api.github.com/users/hannan72/following{/other_user}", "gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}", "starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hannan72/subscriptions", "organizations_url": "https://api.github.com/users/hannan72/orgs", "repos_url": "https://api.github.com/users/hannan72/repos", "events_url": "https://api.github.com/users/hannan72/events{/privacy}", "received_events_url": "https://api.github.com/users/hannan72/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 2934977194, "node_id": "MDU6TGFiZWwyOTM0OTc3MTk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flax", "name": "Flax", "color": "4862AD", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @sanchit-gandhi ", "I found that the Flax model, when set to use beam search, calculates the scores value:\r\nhttps://github.com/huggingface/transformers/blob/12d51db243a00726a548a43cc333390ebae731e3/src/transformers/generation/flax_utils.py#L83-L96\r\n\r\nand in the _beam_search method it is calculated and returned:\r\nhttps://github.com/huggingface/transformers/blob/12d51db243a00726a548a43cc333390ebae731e3/src/transformers/generation/flax_utils.py#L998-L1004\r\n\r\nbut it doesn't return scores when greedy search is done:\r\nhttps://github.com/huggingface/transformers/blob/12d51db243a00726a548a43cc333390ebae731e3/src/transformers/generation/flax_utils.py#L55-L65", "I ran the Flax Whisper model in beam-search mode by setting `generation_config.num_beams` to a value larger than 1.\r\nIt returns `scores` in the output, but it is totally different from the `scores` returned by the PyTorch model.\r\nThe Flax scores output is just a scalar value, whereas the PyTorch scores output is a list of n elements (n = number of output tokens), in which each element is a torch tensor of shape (1, vocab size). In other words, the PyTorch scores give, for each output token, the probability (score) of every vocab token.\r\n\r\nSo the Flax output scores are something totally different", "I found the Flax logits in flax_utils.py as follows:\r\nhttps://github.com/huggingface/transformers/blob/ed67286465c5e9e3d3005de3e21bc3c679d93072/src/transformers/generation/flax_utils.py#L610-L618\r\n\r\nWe just need to extract these logits out of the greedy_search function and return them", "I've added support for `output_scores` to the flax_utils.py code in the following fork:\r\nhttps://github.com/hannan72/transformers/commit/116d8f38722359ca5d2dad918975348359cc2ac1\r\n\r\nAnd also added support for the following parameters to the Flax Whisper model:\r\nhttps://github.com/hannan72/transformers/commit/accdcb2d66496c5ee8547739bf833c95e189344c\r\n\r\n@sanchit-gandhi \r\nCould you review the changes and do a PR to support the scores value for the Flax model?", "I have made a PR about this feature:\r\nhttps://github.com/huggingface/transformers/pull/22700", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "https://github.com/huggingface/transformers/pull/22700 is still open and active 🤗", "https://github.com/huggingface/transformers/pull/22700 is still open and active", "Hey everyone! @hannan72 has done a great job working on the PR for this feature. The Flax generation code is more or less complete, but there are a few extra integration tests we want to add to make sure the code gives the expected results: https://github.com/huggingface/transformers/pull/22700#discussion_r1288921417\r\n\r\nIf anyone would like to finish this PR, contributions are more than welcome! Feel free to have a look through the pull request and familiarise yourself with the generation code changes. The last pending point is the integration test mentioned above, which should be quite straightforward to add by comparing the Flax outputs to the PyTorch ones.\r\n\r\ncc @teddius" ]
1,680
1,696
null
NONE
null
I need Whisper's output_scores and output_hidden_states as the result of the generate() method. With the PyTorch model, I can easily get output_scores and output_hidden_states by setting these parameters in the generate() method as follows: ``` whisper_output = model.generate(inputs=input_features, max_new_tokens=180, output_scores=True, output_hidden_states=True, return_dict_in_generate=True) ``` and the resulting `whisper_output` contains 'scores' and 'output_hidden_states' as keys alongside 'sequences'. Now I want to do the same for the Flax Whisper model, but setting these parameters as the model's static_argnames has no effect on getting output_scores. Is there any solution for getting output_scores or logits from the Flax Whisper model?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22612/timeline
reopened
null
null
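For reference, the PyTorch behaviour the issue above compares against can be reproduced with any causal LM: `generate()` returns one `(batch, vocab_size)` score tensor per generated token. The checkpoint below (`gpt2`) is just a small example model, not tied to the Whisper use case:

```python
# PyTorch reference: one score tensor of shape (batch, vocab_size) per
# generated token when output_scores=True and return_dict_in_generate=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Hello", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5,
                     output_scores=True, return_dict_in_generate=True)
print(len(out.scores))      # 5 -> one entry per newly generated token
print(out.scores[0].shape)  # torch.Size([1, 50257]) for GPT-2's vocab
```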
https://api.github.com/repos/huggingface/transformers/issues/22611
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22611/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22611/comments
https://api.github.com/repos/huggingface/transformers/issues/22611/events
https://github.com/huggingface/transformers/pull/22611
1,657,186,200
PR_kwDOCUB6oc5NweuK
22,611
[doc] Try a few ≠ ways of linking to Papers, users, and org profiles
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "For some reason I can't see the model pages in the PR's generated doc", "I can't preview the doc for some reason (getting an error on the [distilbert preview page](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22611/en/model_doc/distilbert)). Also there seems to be an issue with your CircleCI permissions, the tests won't run.\r\nCould you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?\r\n\r\nOtherwise the changes look good to me in preview. Would just love to preview the badge for papers!", "Weird, the doc build ran successfully and uploaded a zip file, but it does not contain the modeling files. Will see what's up in a bit.", "It's correctly displayed on the docs now and on the link you shared @sgugger; there was an error with the backend sync.", "OK, I kind of like it. WDYT?\r\n\r\n<img width=\"1160\" alt=\"image\" src=\"https://user-images.githubusercontent.com/326577/230628465-f3850483-8338-4956-a969-f4f9ffe6b3ea.png\">\r\n\r\nLink: https://moon-ci-docs.huggingface.co/docs/transformers/pr_22611/en/model_doc/t5" ]
1,680
1,683
1,683
MEMBER
null
Would love to hear others' thoughts.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22611/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22611", "html_url": "https://github.com/huggingface/transformers/pull/22611", "diff_url": "https://github.com/huggingface/transformers/pull/22611.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22611.patch", "merged_at": 1683130990000 }
https://api.github.com/repos/huggingface/transformers/issues/22610
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22610/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22610/comments
https://api.github.com/repos/huggingface/transformers/issues/22610/events
https://github.com/huggingface/transformers/issues/22610
1,657,143,402
I_kwDOCUB6oc5ixgBq
22,610
ASTModel Signature doesn't work
{ "login": "conradg", "id": 4610193, "node_id": "MDQ6VXNlcjQ2MTAxOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4610193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/conradg", "html_url": "https://github.com/conradg", "followers_url": "https://api.github.com/users/conradg/followers", "following_url": "https://api.github.com/users/conradg/following{/other_user}", "gists_url": "https://api.github.com/users/conradg/gists{/gist_id}", "starred_url": "https://api.github.com/users/conradg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conradg/subscriptions", "organizations_url": "https://api.github.com/users/conradg/orgs", "repos_url": "https://api.github.com/users/conradg/repos", "events_url": "https://api.github.com/users/conradg/events{/privacy}", "received_events_url": "https://api.github.com/users/conradg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @amyeroberts ", "Hi @conradg, thanks for reporting this. \r\n\r\nI believe this is an issue with the documentation for `input_values` having incorrect info. Looking at a [code example](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/audio-spectrogram-transformer#transformers.ASTModel.forward.example), the shape of the input array to the model is `(batch_size, max_length, num_mel_bins)`. Testing on `main`, the example runs successfully. I'll open a quick PR to update." ]
1,680
1,681
1,681
NONE
null
### System Info - `transformers` version: 4.27.2 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.10.7 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I run the following code, I expect it to do a forward pass successfully. I'm using random numbers to test the model. ``` import torch import numpy as np from transformers import ASTModel, ASTConfig from torch.utils.data import DataLoader configuration = ASTConfig() model = ASTModel(configuration) # (batch_size, channels, height, width) dataset = torch.tensor(np.random.normal(size = (100,1,256,256))) dataLoader = DataLoader(dataset, batch_size=25, pin_memory=True) for data in dataLoader: model(torch.tensor(data).float()) ``` The shape of `input_values` is pulled from the [official docs](https://huggingface.co/docs/transformers/v4.27.2/en/model_doc/audio-spectrogram-transformer#transformers.ASTModel.forward.input_values). > `input_values (torch.FloatTensor of shape (batch_size, num_channels, height, width))` What I see is the following error being raised ``` Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [25, 1, 256, 1, 256] ``` The cause looks to me like [these lines](https://github.com/huggingface/transformers/blame/1670be4bdec19d5a8893f943bf78a8d9b3dc8911/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py#L110-L113) in the forward pass of ASTPatchEmbeddings ``` def forward(self, input_values: torch.Tensor) -> torch.Tensor: input_values = input_values.unsqueeze(1) input_values = input_values.transpose(2, 3) embeddings = self.projection(input_values).flatten(2).transpose(1, 2) return embeddings ``` When I step through with the debugger, I see that the unsqueeze and transpose commands are what is changing the shape of the tensor. ### Expected behavior I expect the model to silently complete a forward pass.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22610/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/22610/timeline
completed
null
null
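A minimal sketch of the working input layout the maintainer points to above — `(batch_size, max_length, num_mel_bins)` rather than `(batch, channels, height, width)`. Rather than hard-coding the docs' values (`max_length=1024`, `num_mel_bins=128`), the sketch reads them off `ASTConfig` at runtime, on the assumption that those attributes carry the expected dimensions:

```python
# Sketch: feed ASTModel a (batch_size, max_length, num_mel_bins) tensor.
import torch
from transformers import ASTConfig, ASTModel

config = ASTConfig()
model = ASTModel(config)
x = torch.randn(2, config.max_length, config.num_mel_bins)
with torch.no_grad():
    out = model(x)
print(out.last_hidden_state.shape)
```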
https://api.github.com/repos/huggingface/transformers/issues/22609
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22609/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22609/comments
https://api.github.com/repos/huggingface/transformers/issues/22609/events
https://github.com/huggingface/transformers/pull/22609
1,657,113,952
PR_kwDOCUB6oc5NwPSH
22,609
Revert error back into warning for byte fallback conversion.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Handles https://github.com/huggingface/transformers/pull/22264#issuecomment-1498681408 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22609/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22609", "html_url": "https://github.com/huggingface/transformers/pull/22609", "diff_url": "https://github.com/huggingface/transformers/pull/22609.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22609.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22608
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22608/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22608/comments
https://api.github.com/repos/huggingface/transformers/issues/22608/events
https://github.com/huggingface/transformers/pull/22608
1,657,103,051
PR_kwDOCUB6oc5NwNDc
22,608
[DO NOT MERGE] Add Crop Transformation
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22608). All of your documentation changes will be reflected on that endpoint." ]
1,680
1,688
null
COLLABORATOR
null
# What does this PR do? Abstracts the cropping logic into a more generic `crop` function which other, more specific cropping functions, e.g. `center_crop`, can call. Motivation: * The output of the CLIP feature extractor changed after #17628. This was due to a difference in how the `top` and `left` coordinates were calculated, resulting in some values being off by one. * The original CLIP feature extractor matched the original implementation * Having a more generic `crop` method enables each image processor to have its own center_crop logic with minimal code replication. [BEFORE MERGING]: Verify this doesn't have a large impact on any popular CLIP-dependent pipelines Fixes #22505 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22608/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22608", "html_url": "https://github.com/huggingface/transformers/pull/22608", "diff_url": "https://github.com/huggingface/transformers/pull/22608.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22608.patch", "merged_at": null }
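The "off by one" mentioned in the record above comes down to how the crop origin is rounded. A small illustration, not the library's actual code:

```python
# Why top/left rounding matters for center_crop: floor division vs.
# rounding disagree by one pixel whenever the margin is odd.
def center_crop_origin(size: int, crop: int, mode: str = "floor") -> int:
    margin = size - crop
    return margin // 2 if mode == "floor" else round(margin / 2)

print(center_crop_origin(5, 2, "floor"))  # 1
print(center_crop_origin(5, 2, "round"))  # 2 -> off by one vs. floor
```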
https://api.github.com/repos/huggingface/transformers/issues/22607
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22607/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22607/comments
https://api.github.com/repos/huggingface/transformers/issues/22607/events
https://github.com/huggingface/transformers/pull/22607
1,657,051,463
PR_kwDOCUB6oc5NwCBq
22,607
Revert error back into warning for byte fallback conversion.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Handles https://github.com/huggingface/transformers/pull/22264#issuecomment-1498681408 Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22607/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22607", "html_url": "https://github.com/huggingface/transformers/pull/22607", "diff_url": "https://github.com/huggingface/transformers/pull/22607.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22607.patch", "merged_at": 1680782430000 }
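A hypothetical sketch of the error-to-warning demotion this PR performs during slow-to-fast tokenizer conversion. The names `convert_slow` and `has_byte_fallback` are illustrative only, not the real transformers API:

```python
import warnings

def convert_slow(tokenizer, strict: bool = False):
    # Byte fallback may not be fully reproduced by the converted fast
    # tokenizer, so flag it: a warning by default, an error only if strict.
    if getattr(tokenizer, "has_byte_fallback", False):
        msg = ("The sentencepiece tokenizer relies on byte fallback, which "
               "the converted fast tokenizer may not reproduce exactly.")
        if strict:
            raise RuntimeError(msg)
        warnings.warn(msg)
    return tokenizer
```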
https://api.github.com/repos/huggingface/transformers/issues/22606
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22606/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22606/comments
https://api.github.com/repos/huggingface/transformers/issues/22606/events
https://github.com/huggingface/transformers/pull/22606
1,656,974,135
PR_kwDOCUB6oc5Nvxx8
22,606
update_pip_test_mapping
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,680
1,680
1,680
COLLABORATOR
null
# What does this PR do? #22180 added a new script to add/update the attribute `pipeline_model_mapping` (for pipeline testing) in a systematic way. This PR uses that script to update this attribute for new and existing model test files. It turns out that `translation` was missing when I first added this attribute in #21516. Fortunately, I am persistent about continuously improving things, and found and fixed this problem as a consequence 🚀 🐛.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22606/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22606/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22606", "html_url": "https://github.com/huggingface/transformers/pull/22606", "diff_url": "https://github.com/huggingface/transformers/pull/22606.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22606.patch", "merged_at": 1680796567000 }
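Roughly what a `pipeline_model_mapping` attribute looks like in a model test class: a dict from pipeline task names to model classes. The sketch below is modeled on T5 and includes the `translation` entry the PR description says was missing from the first pass; the exact dicts in the repo are generated by the script from #22180, so treat this only as an illustration:

```python
from transformers import T5ForConditionalGeneration, T5Model, is_torch_available

# Sketch of a test-class attribute mapping pipeline tasks to model classes.
pipeline_model_mapping = (
    {
        "feature-extraction": T5Model,
        "summarization": T5ForConditionalGeneration,
        "text2text-generation": T5ForConditionalGeneration,
        "translation": T5ForConditionalGeneration,
    }
    if is_torch_available()
    else {}
)
```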
https://api.github.com/repos/huggingface/transformers/issues/22605
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22605/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22605/comments
https://api.github.com/repos/huggingface/transformers/issues/22605/events
https://github.com/huggingface/transformers/issues/22605
1,656,951,218
I_kwDOCUB6oc5iwxGy
22,605
UnboundLocalError: local variable 'params_docstring' referenced before assignment
{ "login": "xingyueye", "id": 40205112, "node_id": "MDQ6VXNlcjQwMjA1MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/40205112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xingyueye", "html_url": "https://github.com/xingyueye", "followers_url": "https://api.github.com/users/xingyueye/followers", "following_url": "https://api.github.com/users/xingyueye/following{/other_user}", "gists_url": "https://api.github.com/users/xingyueye/gists{/gist_id}", "starred_url": "https://api.github.com/users/xingyueye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xingyueye/subscriptions", "organizations_url": "https://api.github.com/users/xingyueye/orgs", "repos_url": "https://api.github.com/users/xingyueye/repos", "events_url": "https://api.github.com/users/xingyueye/events{/privacy}", "received_events_url": "https://api.github.com/users/xingyueye/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For Reproduction:\r\nIt occurs when we define a new subClass with no definition", "No one will be able to help without a clear reproducer.", "For example, we define a subclass of BaseModelOutputWithPoolingAndCrossAttentions, but with no args explanations. \r\n```\r\nclass NewBaseModelOutputWithPoolingAndCrossAttentions(BaseModelOutputWithPoolingAndCrossAttentions):\r\n final_text_self_embedding: Optional[torch.FloatTensor] = None\r\n final_text_visual_embedding: Optional[torch.FloatTensor] = None\r\n text_visual_states: Optional[Tuple[torch.FloatTensor]] = None\r\n```\r\n`lines = output_docstring.split(\"\\n\")` would return a null result, then `if i < len(lines):` would not be executed.", "Why would you use our internal tools for the documentation if you are not documenting the class?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
### System Info https://github.com/huggingface/transformers/blob/v4.27.4/src/transformers/utils/doc.py#L130 A bug involving 'params_docstring' is reported ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction It occurs when the condition `if i < len(lines):` is never satisfied ### Expected behavior Fix the bug or add an assertion
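A hedged, self-contained sketch of the failure mode reported above together with a defensive fix; the real logic lives in `transformers/utils/doc.py` and differs in detail, so the helper name and the `Args:` marker below are illustrative assumptions, not library code:

```python
# Illustrative sketch only: the actual transformers/utils/doc.py code differs.
def extract_params_docstring(output_docstring: str) -> str:
    lines = output_docstring.split("\n")
    params_docstring = None  # bind the name up front so it always exists
    for i, line in enumerate(lines):
        if line.strip() == "Args:":
            params_docstring = "\n".join(lines[i + 1 :])
            break
    if params_docstring is None:
        # an undocumented subclass lands here instead of UnboundLocalError
        raise ValueError("Docstring has no `Args:` section; document the new fields.")
    return params_docstring

print(extract_params_docstring("Summary.\nArgs:\n    x: an input"))
```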
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22605/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22604
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22604/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22604/comments
https://api.github.com/repos/huggingface/transformers/issues/22604/events
https://github.com/huggingface/transformers/pull/22604
1,656,907,459
PR_kwDOCUB6oc5NvkY8
22,604
[WIP] Add PoNet
{ "login": "lxchtan", "id": 30597959, "node_id": "MDQ6VXNlcjMwNTk3OTU5", "avatar_url": "https://avatars.githubusercontent.com/u/30597959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lxchtan", "html_url": "https://github.com/lxchtan", "followers_url": "https://api.github.com/users/lxchtan/followers", "following_url": "https://api.github.com/users/lxchtan/following{/other_user}", "gists_url": "https://api.github.com/users/lxchtan/gists{/gist_id}", "starred_url": "https://api.github.com/users/lxchtan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lxchtan/subscriptions", "organizations_url": "https://api.github.com/users/lxchtan/orgs", "repos_url": "https://api.github.com/users/lxchtan/repos", "events_url": "https://api.github.com/users/lxchtan/events{/privacy}", "received_events_url": "https://api.github.com/users/lxchtan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ArthurZucker and @younesbelkada ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22604). All of your documentation changes will be reflected on that endpoint.", "Hey! great work🔥 \r\n\r\nWould you be open to put this model on the hub following [this tutorial](https://huggingface.co/docs/transformers/custom_models)! This model seems very similar to a Bert model, so it makes more sense! Especially for all the additional ressources that you want to add", "Thanks for your advice! I've followed the tutorial and put the codes to the hub.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,686
1,686
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Implementation of PoNet model (https://arxiv.org/abs/2110.02442). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada Library: - pipelines: @Narsil - tokenizers: @ArthurZucker Documentation: @sgugger, @stevhliu and @MKhalusova Maintained examples (not research project or legacy): - PyTorch: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22604/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22604", "html_url": "https://github.com/huggingface/transformers/pull/22604", "diff_url": "https://github.com/huggingface/transformers/pull/22604.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22604.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22603
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22603/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22603/comments
https://api.github.com/repos/huggingface/transformers/issues/22603/events
https://github.com/huggingface/transformers/pull/22603
1,656,889,725
PR_kwDOCUB6oc5Nvg3L
22,603
move preprocess_logits_for_metrics before _nested_gather in trainer.e…
{ "login": "ChenyangLiu", "id": 6317575, "node_id": "MDQ6VXNlcjYzMTc1NzU=", "avatar_url": "https://avatars.githubusercontent.com/u/6317575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChenyangLiu", "html_url": "https://github.com/ChenyangLiu", "followers_url": "https://api.github.com/users/ChenyangLiu/followers", "following_url": "https://api.github.com/users/ChenyangLiu/following{/other_user}", "gists_url": "https://api.github.com/users/ChenyangLiu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ChenyangLiu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenyangLiu/subscriptions", "organizations_url": "https://api.github.com/users/ChenyangLiu/orgs", "repos_url": "https://api.github.com/users/ChenyangLiu/repos", "events_url": "https://api.github.com/users/ChenyangLiu/events{/privacy}", "received_events_url": "https://api.github.com/users/ChenyangLiu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry. In my training loop, the `preprocess_logits_for_metrics` do not use `labels`. I ignore the `labels` is gathered before. In the new commit, the code is\r\n```\r\n# Update containers on host\r\nif loss is not None:\r\n losses = self._nested_gather(loss.repeat(batch_size))\r\n losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0)\r\nif labels is not None:\r\n labels = self._pad_across_processes(labels) \r\nif inputs_decode is not None:\r\n inputs_decode = self._pad_across_processes(inputs_decode)\r\n inputs_decode = self._nested_gather(inputs_decode)\r\n inputs_host = (\r\n inputs_decode\r\n if inputs_host is None\r\n else nested_concat(inputs_host, inputs_decode, padding_index=-100)\r\n )\r\nif logits is not None:\r\n logits = self._pad_across_processes(logits)\r\nif self.preprocess_logits_for_metrics is not None and logits is not None:\r\n logits = self.preprocess_logits_for_metrics(logits, labels)\r\nif labels is not None:\r\n labels = self._nested_gather(labels)\r\n labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100)\r\nif logits is not None:\r\n logits = self._nested_gather(logits)\r\n preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)\r\n```\r\nlabels and logits should be padded first, then be preprocessed before gathering. In my use case, I trained BLOOM with 32 batch size, the gathered logits size is (32, 1024, 250000+), which takes 15G+ gpu memory and cause to OOM during evaluating. ", "@sgugger Done. Rewrite the code with your suggestion. Thanks." ]
1,680
1,681
1,681
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #22602 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22603/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22603", "html_url": "https://github.com/huggingface/transformers/pull/22603", "diff_url": "https://github.com/huggingface/transformers/pull/22603.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22603.patch", "merged_at": 1681908828000 }
https://api.github.com/repos/huggingface/transformers/issues/22602
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22602/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22602/comments
https://api.github.com/repos/huggingface/transformers/issues/22602/events
https://github.com/huggingface/transformers/issues/22602
1,656,848,849
I_kwDOCUB6oc5iwYHR
22,602
Preprocess/transform logits before gathering them for computing metrics.
{ "login": "ChenyangLiu", "id": 6317575, "node_id": "MDQ6VXNlcjYzMTc1NzU=", "avatar_url": "https://avatars.githubusercontent.com/u/6317575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChenyangLiu", "html_url": "https://github.com/ChenyangLiu", "followers_url": "https://api.github.com/users/ChenyangLiu/followers", "following_url": "https://api.github.com/users/ChenyangLiu/following{/other_user}", "gists_url": "https://api.github.com/users/ChenyangLiu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ChenyangLiu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenyangLiu/subscriptions", "organizations_url": "https://api.github.com/users/ChenyangLiu/orgs", "repos_url": "https://api.github.com/users/ChenyangLiu/repos", "events_url": "https://api.github.com/users/ChenyangLiu/events{/privacy}", "received_events_url": "https://api.github.com/users/ChenyangLiu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As replied on the PR, this is incorrect. You should probably use a custom training loop powered by Accelerate to be able to move the logits when you want." ]
1,680
1,681
1,681
CONTRIBUTOR
null
### Feature request In `trainer.evaluation_loop`, `preprocess_logits_for_metrics` should be executed before `_nested_gather` to avoid GPU OOM. ### Motivation `preprocess_logits_for_metrics` currently processes logits after gathering them in distributed training. When training with a large batch_size, token_length or vocab_size, gathering all logits onto one node will run out of GPU memory. This preprocessing should be executed before `_nested_gather`. ### Your contribution The main modification would be this in `trainer.evaluation_loop`: ``` if logits is not None: logits = self._pad_across_processes(logits) if self.preprocess_logits_for_metrics is not None: logits = self.preprocess_logits_for_metrics(logits, labels) logits = self._nested_gather(logits) preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100) ```
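For context, a hedged example of the kind of `preprocess_logits_for_metrics` hook this request is about (the argument name is the real `Trainer` parameter; the body is an illustrative choice, not taken from the issue):

```python
import torch

def preprocess_logits_for_metrics(logits, labels):
    # Some models return a tuple such as (logits, past_key_values, ...).
    if isinstance(logits, tuple):
        logits = logits[0]
    # Shrink (batch, seq, vocab) to (batch, seq) before anything is gathered.
    return logits.argmax(dim=-1)
```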
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22602/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22601
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22601/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22601/comments
https://api.github.com/repos/huggingface/transformers/issues/22601/events
https://github.com/huggingface/transformers/issues/22601
1,656,817,432
I_kwDOCUB6oc5iwQcY
22,601
Incorrect question answering initialization
{ "login": "imbalu007", "id": 629329, "node_id": "MDQ6VXNlcjYyOTMyOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/629329?v=4", "gravatar_id": "", "url": "https://api.github.com/users/imbalu007", "html_url": "https://github.com/imbalu007", "followers_url": "https://api.github.com/users/imbalu007/followers", "following_url": "https://api.github.com/users/imbalu007/following{/other_user}", "gists_url": "https://api.github.com/users/imbalu007/gists{/gist_id}", "starred_url": "https://api.github.com/users/imbalu007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/imbalu007/subscriptions", "organizations_url": "https://api.github.com/users/imbalu007/orgs", "repos_url": "https://api.github.com/users/imbalu007/repos", "events_url": "https://api.github.com/users/imbalu007/events{/privacy}", "received_events_url": "https://api.github.com/users/imbalu007/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is a bug indeed. Do you want to make a quick PR with your fix (labels should indeed be hardcoded at 2 for question answering)?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.15.0-1031-azure-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Select a deberta-mnli model and perform finetuning for question answering on any dataset 2. Errors out with this message ` File "XXXXXX/lib/python3.8/site-packages/transformers/models/deberta/modeling_deberta.py", line 1416, in forward start_logits, end_logits = logits.split(1, dim=-1) ValueError: too many values to unpack (expected 2) ` Models finetuned on mnli have 3 classes by default in their config file (because the mnli dataset has 3 classes). When these models are repurposed for the question answering task, the classification head is initialized from the *config file* [here](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/deberta/modeling_deberta.py#L1362 ) But for the question answering task, 2 outputs per token are expected [here](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/deberta/modeling_deberta.py#L1416). So there is a mismatch: the model head is initialized with 3 labels, while 2 outputs are expected. This is likely causing the issue with deberta-mnli when used for question answering. It might cause a similar issue for any model trained on a dataset other than mnli with a number of labels != 2 ### Expected behavior For the question answering task, should num_labels be hardcoded to 2?
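A hedged sketch of the fix the maintainer endorses in the comments (hard-coding 2 labels for the QA head); the class below is illustrative, not the actual `modeling_deberta.py` code:

```python
import torch.nn as nn

class QAHeadSketch(nn.Module):
    """Illustrative QA head: always 2 outputs (start/end), ignoring config.num_labels."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)  # hard-coded: start and end logits

    def forward(self, sequence_output):
        logits = self.qa_outputs(sequence_output)
        start_logits, end_logits = logits.split(1, dim=-1)  # now always unpacks into 2
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```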
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22601/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22600
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22600/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22600/comments
https://api.github.com/repos/huggingface/transformers/issues/22600/events
https://github.com/huggingface/transformers/issues/22600
1,656,778,110
I_kwDOCUB6oc5iwG1-
22,600
Add support for Ascend NPU
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,680
1,684
1,684
CONTRIBUTOR
null
### Feature request It would be nice if the Transformers suite could be used directly on the Ascend NPU without modifying the source code. ### Motivation In China, the Ascend NPU is the second choice after Nvidia GPUs and has been adopted by many companies, such as Alibaba, ByteDance, Meituan, etc. Huawei officially released an adapter called [`torch_npu`](https://github.com/Ascend/pytorch/blob/master/README.en.md) to adapt PyTorch to the Ascend NPU. `torch_npu` is friendly to developers, so we can still enjoy the same PyTorch experience that we are accustomed to today. The native Transformers suite requires only minor modifications to run on the Ascend NPU, so it is reasonable for the Ascend NPU to become a supported member of the Transformers community. ### Your contribution I can assist in adding support if you want, see this PR (#22644 )
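A hedged sketch of the `torch_npu` usage pattern the request describes, based on the adapter's public README; the `.npu()` helper and device availability are assumptions that may vary by version and hardware:

```python
import torch
import torch_npu  # registers the "npu" device type with PyTorch (assumed installed)

x = torch.randn(2, 2).npu()  # move tensors to the Ascend NPU
y = torch.randn(2, 2).npu()
z = x + y                    # computed on the NPU
print(z.cpu())               # bring the result back to the host
```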
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22600/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22600/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22599
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22599/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22599/comments
https://api.github.com/repos/huggingface/transformers/issues/22599/events
https://github.com/huggingface/transformers/issues/22599
1,656,741,709
I_kwDOCUB6oc5iv99N
22,599
No module named 'transformers' after installing from source
{ "login": "fishfree", "id": 1741341, "node_id": "MDQ6VXNlcjE3NDEzNDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1741341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fishfree", "html_url": "https://github.com/fishfree", "followers_url": "https://api.github.com/users/fishfree/followers", "following_url": "https://api.github.com/users/fishfree/following{/other_user}", "gists_url": "https://api.github.com/users/fishfree/gists{/gist_id}", "starred_url": "https://api.github.com/users/fishfree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fishfree/subscriptions", "organizations_url": "https://api.github.com/users/fishfree/orgs", "repos_url": "https://api.github.com/users/fishfree/repos", "events_url": "https://api.github.com/users/fishfree/events{/privacy}", "received_events_url": "https://api.github.com/users/fishfree/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "same error\r\n", "Having the same issue as well.", "Me too. Not sure what is going on, but it looks like in site-packages, the transformers-4.28.0.dev0.dist-info directory is created, but not the transformers directory itself!", "... and confirmed, if I roll back using\r\n`git checkout 2194943a3443b924e4cd09f37402230b771008f0`\r\nthen everything installs fine. Something seems to have broken in the past 3-4 commits.", "same", "Steps to reproduce (after uninstalling any version of transformers that you might have):\r\n1. `git clone https://github.com/huggingface/transformers.git`\r\n2. `cd transformers`\r\n3. `pip install .`\r\n4. `python3 -c \"from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))\"`\r\nResulting error\r\n```\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'transformers'\r\n```\r\n\r\nIt looks like the change that broke things is https://github.com/huggingface/transformers/pull/22539. If I roll back to the previous change to setup.py, the install works.\r\ngit checkout 80d1319e1b9dde71b8af641ad1427113058a0af7 --> pip3 install . --> WORKS\r\ngit checkout 4169dc84bf0072a26f10096a187907d661dcc383 --> pip3 install . --> FAILS\r\n\r\nMaybe there is a new installation method?\r\n", "Thanks for letting us know. I guess that's what happens when you try to clean up to follow the official PEP rules... We'll revert the PR!", "I cannot reproduce this in a virtual environment. Maybe you are using the system `python` and `pip` on Ubuntu, which are installed in `dist-packages` rather than `site-packages`. There is a similar issue oobabooga/text-generation-webui#753. Upgrade your `pip` and `setuptools`, or use a virtual environment will resolve this.\r\n\r\nIn the documentation [Installation](https://huggingface.co/docs/transformers/installation#installation):\r\n\r\n> # Install with pip\r\n> \r\n> You **should** install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you’re unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies.\r\n> \r\n> Start by creating a virtual environment in your project directory:\r\n> \r\n> ```bash\r\n> python -m venv .env\r\n> \r\n> ```\r\n\r\nThe issue here is the users do not follow the installation guide for using a virtual environment. 
We may need to add `pip3 install --upgrade pip setuptools` in the [Install from source](https://huggingface.co/docs/transformers/installation#install-from-source) documentation.\r\n\r\n> # Install from source\r\n> \r\n> Install 🤗 Transformers from source with the following command:\r\n> \r\n> ```bash\r\n> pip install git+https://github.com/huggingface/transformers\r\n> ```\r\n\r\nto\r\n\r\n```bash\r\npip install --upgrade pip setuptools # reinstall pip and do not use the apt packages\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\n------\r\n\r\nSolution 1: upgrade (reinstall) `pip` and `setuptools` when using the system apt package.\r\n\r\n```bash\r\ndocker run -it --rm -h ubuntu --pull always ubuntu:22.04\r\napt update && apt install git python3-dev python3-pip -y\r\npython3 -m pip install --upgrade pip setuptools\r\npython3 -m pip install git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383\r\n```\r\n\r\n<details>\r\n<summary>Outputs: upgrade (reinstall) `pip` and `setuptools`</summary>\r\n\r\n```console\r\n$ docker run -it --rm -h ubuntu --pull always ubuntu:22.04\r\n22.04: Pulling from library/ubuntu\r\nDigest: sha256:67211c14fa74f070d27cc59d69a7fa9aeff8e28ea118ef3babc295a0428a6d21\r\nStatus: Image is up to date for ubuntu:22.04\r\nroot@ubuntu:/# apt update && apt install git python3-dev python3-pip -y\r\n\r\nroot@ubuntu:/# which -a python3\r\n/usr/bin/python3\r\n/bin/python3\r\nroot@ubuntu:/# which -a pip3\r\n/usr/bin/pip3\r\n/bin/pip3\r\nroot@ubuntu:/# python3 -m pip install --upgrade pip setuptools\r\nRequirement already satisfied: pip in /usr/lib/python3/dist-packages (22.0.2)\r\nCollecting pip\r\n Downloading pip-23.0.1-py3-none-any.whl (2.1 MB)\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 817.6 kB/s eta 0:00:00\r\nRequirement already satisfied: setuptools in /usr/lib/python3/dist-packages (59.6.0)\r\nCollecting setuptools\r\n Downloading setuptools-67.6.1-py3-none-any.whl (1.1 MB)\r\n ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 540.5 kB/s eta 0:00:00\r\nInstalling collected packages: setuptools, pip\r\n Attempting uninstall: setuptools\r\n Found existing installation: setuptools 59.6.0\r\n Not uninstalling setuptools at /usr/lib/python3/dist-packages, outside environment /usr\r\n Can't uninstall 'setuptools'. No files were found to uninstall.\r\n Attempting uninstall: pip\r\n Found existing installation: pip 22.0.2\r\n Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr\r\n Can't uninstall 'pip'. No files were found to uninstall.\r\nSuccessfully installed pip-23.0.1 setuptools-67.6.1\r\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\r\nroot@ubuntu:/# which -a pip3\r\n/usr/local/bin/pip3\r\n/usr/bin/pip3\r\n/bin/pip3\r\nroot@ubuntu:/# python3 -m pip install git+https://github.com/huggingface/transformers\r\nCollecting git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383\r\n Cloning https://github.com/huggingface/transformers (to revision 4169dc84bf0072a26f10096a187907d661dcc383) to /tmp/pip-req-build-w_o1neea\r\n Running command git clone --filter=blob:none --quiet https://github.com/huggingface/transformers /tmp/pip-req-build-w_o1neea\r\n Running command git rev-parse -q --verify 'sha^4169dc84bf0072a26f10096a187907d661dcc383'\r\n Running command git fetch -q https://github.com/huggingface/transformers 4169dc84bf0072a26f10096a187907d661dcc383\r\n Running command git checkout -q 4169dc84bf0072a26f10096a187907d661dcc383\r\n Resolved https://github.com/huggingface/transformers to commit 4169dc84bf0072a26f10096a187907d661dcc383\r\n Installing build dependencies ... done\r\n Getting requirements to build wheel ... done\r\n Installing backend dependencies ... done\r\n Preparing metadata (pyproject.toml) ... done\r\nRequirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (2.28.2)\r\nRequirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (0.13.3)\r\nRequirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (3.11.0)\r\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (4.65.0)\r\nRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (1.24.2)\r\nRequirement already satisfied: huggingface-hub<1.0,>=0.11.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (0.13.4)\r\nRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (23.0)\r\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (2023.3.23)\r\nRequirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.0.dev0) (6.0)\r\nRequirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.11.0->transformers==4.28.0.dev0) (4.5.0)\r\nRequirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.28.0.dev0) (3.1.0)\r\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.28.0.dev0) (2022.12.7)\r\nRequirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.28.0.dev0) (3.4)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.28.0.dev0) (1.26.15)\r\nBuilding wheels for collected packages: transformers\r\n Building wheel for transformers (pyproject.toml) ... 
done\r\n Created wheel for transformers: filename=transformers-4.28.0.dev0-py3-none-any.whl size=6862948 sha256=b8dbe24b1d39a4ae836e24e0b4b7ab27b4e024408b7129a4b1c4aad4a41fc4d7\r\n Stored in directory: /root/.cache/pip/wheels/98/63/05/ec5c37d387d2db776a20dac49e1b830aca7fbc2394956367ad\r\nSuccessfully built transformers\r\nInstalling collected packages: transformers\r\nSuccessfully installed transformers-4.28.0.dev0\r\nWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\r\nroot@ubuntu:/# python3 -c 'import transformers; print(transformers.__version__)'\r\nNone of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\n4.28.0.dev0\r\n```\r\n\r\n</details>\r\n\r\n------\r\n\r\nSolution 2: use a virtual environment (already there in the documentation).\r\n\r\n```bash\r\ndocker run -it --rm -h ubuntu --pull always ubuntu:22.04\r\napt update && apt install git python3-dev python3-venv -y\r\npython3 -m venv venv\r\nsource venv/bin/activate\r\npython3 -m pip install git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383\r\n```\r\n\r\n<details>\r\n<summary>Outputs: use virtual environment</summary>\r\n\r\n```console\r\n$ docker run -it --rm -h ubuntu --pull always ubuntu:22.04\r\n22.04: Pulling from library/ubuntu\r\nDigest: sha256:67211c14fa74f070d27cc59d69a7fa9aeff8e28ea118ef3babc295a0428a6d21\r\nStatus: Image is up to date for ubuntu:22.04\r\nroot@ubuntu:/# apt update && apt install git python3-dev python3-venv -y\r\n\r\nroot@ubuntu:/# which -a python3\r\n/usr/bin/python3\r\n/bin/python3\r\nroot@ubuntu:/# which -a pip3\r\n/usr/bin/pip3\r\n/bin/pip3\r\nroot@ubuntu:/# python3 -m venv venv\r\nroot@ubuntu:/# source venv/bin/activate\r\n(venv) root@ubuntu:/# which -a python3\r\n/venv/bin/python3\r\n/usr/bin/python3\r\n/bin/python3\r\n(venv) root@ubuntu:/# which -a pip3 \r\n/venv/bin/pip3\r\n/usr/bin/pip3\r\n/bin/pip3\r\n(venv) root@ubuntu:/# python3 -m pip install git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383\r\nCollecting git+https://github.com/huggingface/transformers@4169dc84bf0072a26f10096a187907d661dcc383\r\n Cloning https://github.com/huggingface/transformers (to revision 4169dc84bf0072a26f10096a187907d661dcc383) to /tmp/pip-req-build-u7lmhx_v\r\n Running command git clone --filter=blob:none --quiet https://github.com/huggingface/transformers /tmp/pip-req-build-u7lmhx_v\r\n Running command git rev-parse -q --verify 'sha^4169dc84bf0072a26f10096a187907d661dcc383'\r\n Running command git fetch -q https://github.com/huggingface/transformers 4169dc84bf0072a26f10096a187907d661dcc383\r\n Running command git checkout -q 4169dc84bf0072a26f10096a187907d661dcc383\r\n Resolved https://github.com/huggingface/transformers to commit 4169dc84bf0072a26f10096a187907d661dcc383\r\n Installing build dependencies ... done\r\n Getting requirements to build wheel ... done\r\n Installing backend dependencies ... done\r\n Preparing metadata (pyproject.toml) ... 
done\r\nCollecting numpy>=1.17\r\n Using cached numpy-1.24.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)\r\nCollecting regex!=2019.12.17\r\n Using cached regex-2023.3.23-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (769 kB)\r\nCollecting tokenizers!=0.11.3,<0.14,>=0.11.1\r\n Using cached tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)\r\nCollecting requests\r\n Using cached requests-2.28.2-py3-none-any.whl (62 kB)\r\nCollecting packaging>=20.0\r\n Using cached packaging-23.0-py3-none-any.whl (42 kB)\r\nCollecting pyyaml>=5.1\r\n Using cached PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (682 kB)\r\nCollecting huggingface-hub<1.0,>=0.11.0\r\n Using cached huggingface_hub-0.13.4-py3-none-any.whl (200 kB)\r\nCollecting filelock\r\n Using cached filelock-3.11.0-py3-none-any.whl (10.0 kB)\r\nCollecting tqdm>=4.27\r\n Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)\r\nCollecting typing-extensions>=3.7.4.3\r\n Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)\r\nCollecting idna<4,>=2.5\r\n Using cached idna-3.4-py3-none-any.whl (61 kB)\r\nCollecting certifi>=2017.4.17\r\n Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)\r\nCollecting charset-normalizer<4,>=2\r\n Using cached charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (199 kB)\r\nCollecting urllib3<1.27,>=1.21.1\r\n Using cached urllib3-1.26.15-py2.py3-none-any.whl (140 kB)\r\nBuilding wheels for collected packages: transformers\r\n Building wheel for transformers (pyproject.toml) ... done\r\n Created wheel for transformers: filename=transformers-4.28.0.dev0-py3-none-any.whl size=6862948 sha256=24db4f2655cd212b0097dd4fd88f2bcad3e1236a0bf700988eefdaad9583d0e9\r\n Stored in directory: /root/.cache/pip/wheels/98/63/05/ec5c37d387d2db776a20dac49e1b830aca7fbc2394956367ad\r\nSuccessfully built transformers\r\nInstalling collected packages: tokenizers, urllib3, typing-extensions, tqdm, regex, pyyaml, packaging, numpy, idna, filelock, charset-normalizer, certifi, requests, huggingface-hub, transformers\r\nSuccessfully installed certifi-2022.12.7 charset-normalizer-3.1.0 filelock-3.11.0 huggingface-hub-0.13.4 idna-3.4 numpy-1.24.2 packaging-23.0 pyyaml-6.0 regex-2023.3.23 requests-2.28.2 tokenizers-0.13.3 tqdm-4.65.0 transformers-4.28.0.dev0 typing-extensions-4.5.0 urllib3-1.26.15\r\n(venv) root@ubuntu:/# python3 -c 'import transformers; print(transformers.__version__)'\r\nNone of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\n4.28.0.dev0\r\n```\r\n\r\n</details>\r\n" ]
1,680
1,680
1,680
NONE
null
### System Info Ubuntu 22.04 in Windows WSL 2. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I just followed the doc [here](https://huggingface.co/docs/transformers/installation#install-from-source). However, an error occurred as below: ``` wu@DESKTOP-COM:~/llama.cpp/transformers$ python Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'transformers' ``` ### Expected behavior No error occurs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22599/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22599/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22598
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22598/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22598/comments
https://api.github.com/repos/huggingface/transformers/issues/22598/events
https://github.com/huggingface/transformers/issues/22598
1,656,530,407
I_kwDOCUB6oc5ivKXn
22,598
BertTokenizerFast.from_pretrained() reproducibly freezing during download
{ "login": "lstein", "id": 111189, "node_id": "MDQ6VXNlcjExMTE4OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/111189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lstein", "html_url": "https://github.com/lstein", "followers_url": "https://api.github.com/users/lstein/followers", "following_url": "https://api.github.com/users/lstein/following{/other_user}", "gists_url": "https://api.github.com/users/lstein/gists{/gist_id}", "starred_url": "https://api.github.com/users/lstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lstein/subscriptions", "organizations_url": "https://api.github.com/users/lstein/orgs", "repos_url": "https://api.github.com/users/lstein/repos", "events_url": "https://api.github.com/users/lstein/events{/privacy}", "received_events_url": "https://api.github.com/users/lstein/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you upgrade `huggingface_hub` and possibly `transformers` too to the last version? There were some bugs on Windows recently fixed.", "Problem persists with `transformers` 4.27.4 and `huggingface-hub` 0.13.3. The code is freezing in file `huggingface_hub/file_download.py` at line 1296, where it tries to obtain a file lock on the path:\r\n```\r\n.cache\\huggingface\\hub\\models--bert-base-uncased\\blobs\\W/\"fb140275c155a9c7c5a3b3e0e77a9e839594a938.lock \r\n```\r\nThe lock file is never created on the file system as far as I can tell. The filelock module is working on my system, but apparently FileLock() does not like filenames that start with the quotation mark. If I try to lock a file that starts with the double quote, I get the same freeze experienced with `from_pretrained()`. By any chance did the format of the blob hashes change recently?\r\n\r\nAlso, at least one other model has the same problem. I confirmed this with CLIPTokenizer.", "same problem in Windows 10 latest version when I use \"from_pretrained(\"openai/whisper-tiny\").\".\r\nThe same code worked 2~3 days ago.\r\n", "Not sure if its similar. First time ever using it with the help of GPT4. Install Uninstall several times. I tried it in Pycharm and Jupyter NB. Windows 11.\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM, TrainingArguments, Trainer\r\nfrom datasets import load_dataset\r\n\r\ntxt_file = \"path/to/your/text/file.txt\"\r\n\r\ndataset = load_dataset(\"text\", data_files={\"train\": txt_file})\r\n\r\nmodel_checkpoint = \"distilbert-base-uncased\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\r\nmodel = AutoModelForMaskedLM.from_pretrained(model_checkpoint)\r\n\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True, padding=\"max_length\", max_length=128)\r\n\r\ntokenized_dataset = dataset.map(tokenize_function, batched=True)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"output\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=3,\r\n per_device_train_batch_size=8,\r\n save_steps=10_000,\r\n save_total_limit=2,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=tokenized_dataset[\"train\"],\r\n)\r\n\r\ntrainer.train()\r\n\r\nmodel.save_pretrained(\"fine_tuned_model\")\r\ntokenizer.save_pretrained(\"fine_tuned_model\")\r\n\r\n\r\nIt seems the script runs indefinitely and nothing happens. Tried many examples too from the Huggingface page. Hopefully there is a fix to it. \r\n\r\nOli\r\n\r\n", "> same problem in Windows 10 latest version when I use \"from_pretrained(\"openai/whisper-tiny\").\". The same code worked 2~3 days ago.\r\n\r\nNow my code is running properly. I haven't do any changes to my code. I think the problem is solved internally.", "Yes, there was an internal change in the Hub that made those downloads stop working. That change was reverted so now it should work again if I understand correctly. cc @Wauplin ", "Yes exactly. Sorry for the inconvenience, it should be back to normal now. See related issue in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/issues/1423.\r\n\r\n@sgugger you can close this one as well now ", "Let us know if the problem persist and I'll reopen!", "Fixed. Thanks!", "fixed. Merci" ]
1,680
1,680
1,680
NONE
null
### System Info - `transformers` version: 4.26.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.10 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction On Windows systems, the following script hangs forever and never downloads the model to cache: ``` from transformers import BertTokenizerFast model = BertTokenizerFast.from_pretrained('bert-base-uncased') ``` The same script runs to completion on Linux and Macintosh using the same version of transformers. Multiple users of the InvokeAI application are having similar problems. ### Expected behavior I expect the second statement to run to completion, download and cache the BERT model, and return the instantiated model object.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22598/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/22597
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22597/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22597/comments
https://api.github.com/repos/huggingface/transformers/issues/22597/events
https://github.com/huggingface/transformers/pull/22597
1,656,441,253
PR_kwDOCUB6oc5NuDRn
22,597
[WIP] ONNX Multinomial operator supports only one input. As temporary solut…
{ "login": "SatyaJandhyalaAtMS", "id": 55203776, "node_id": "MDQ6VXNlcjU1MjAzNzc2", "avatar_url": "https://avatars.githubusercontent.com/u/55203776?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SatyaJandhyalaAtMS", "html_url": "https://github.com/SatyaJandhyalaAtMS", "followers_url": "https://api.github.com/users/SatyaJandhyalaAtMS/followers", "following_url": "https://api.github.com/users/SatyaJandhyalaAtMS/following{/other_user}", "gists_url": "https://api.github.com/users/SatyaJandhyalaAtMS/gists{/gist_id}", "starred_url": "https://api.github.com/users/SatyaJandhyalaAtMS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SatyaJandhyalaAtMS/subscriptions", "organizations_url": "https://api.github.com/users/SatyaJandhyalaAtMS/orgs", "repos_url": "https://api.github.com/users/SatyaJandhyalaAtMS/repos", "events_url": "https://api.github.com/users/SatyaJandhyalaAtMS/events{/privacy}", "received_events_url": "https://api.github.com/users/SatyaJandhyalaAtMS/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "You're free to do this on your own to have the model work with ONNX, but this is not the kind of fix we can merge into Transformers as it will hurt every user on other hardware.", "I am not intending to merge this PR. I agree that this is a hack not a fix. The reason to create this PR is only to share that change(s) with the other teams I am working with." ]
1,680
1,680
1,680
CONTRIBUTOR
null
…ion, comment out the multinomial call. # What does this PR do? **This temporary change is not intended to be merged. The purpose of this PR is to share the change with the other teams I am working with.** Fixes # (issue) The ONNX [Multinomial](https://github.com/onnx/onnx/blob/main/docs/Operators.md#multinomial) operator only supports one input; `sample_size` is only an attribute, so it must be known when creating/exporting the ONNX model, and `torch.onnx.export` fails with an error otherwise. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
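A hedged, minimal sketch of the constraint being described (not code from this PR): ONNX's Multinomial takes `sample_size` as a static attribute, so `num_samples` must be a compile-time constant for export to work; whether a given PyTorch version exports `aten::multinomial` at all is an assumption that may vary:

```python
import torch

class Sampler(torch.nn.Module):
    def forward(self, probs: torch.Tensor) -> torch.Tensor:
        # num_samples is a hard-coded constant: the only form ONNX Multinomial
        # can represent, since sample_size is an attribute rather than an input.
        return torch.multinomial(probs, num_samples=1)

probs = torch.softmax(torch.randn(2, 5), dim=-1)
torch.onnx.export(Sampler(), (probs,), "sampler.onnx", opset_version=13)
```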
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22597/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22597", "html_url": "https://github.com/huggingface/transformers/pull/22597", "diff_url": "https://github.com/huggingface/transformers/pull/22597.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22597.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/22596
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22596/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22596/comments
https://api.github.com/repos/huggingface/transformers/issues/22596/events
https://github.com/huggingface/transformers/pull/22596
1,656,396,056
PR_kwDOCUB6oc5Nt5cl
22,596
Move labels to the same device as logits for LlamaForSequenceClassification and Blip2
{ "login": "xssChauhan", "id": 9297805, "node_id": "MDQ6VXNlcjkyOTc4MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/9297805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xssChauhan", "html_url": "https://github.com/xssChauhan", "followers_url": "https://api.github.com/users/xssChauhan/followers", "following_url": "https://api.github.com/users/xssChauhan/following{/other_user}", "gists_url": "https://api.github.com/users/xssChauhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/xssChauhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xssChauhan/subscriptions", "organizations_url": "https://api.github.com/users/xssChauhan/orgs", "repos_url": "https://api.github.com/users/xssChauhan/repos", "events_url": "https://api.github.com/users/xssChauhan/events{/privacy}", "received_events_url": "https://api.github.com/users/xssChauhan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger I have refreshed my permissions, but I still do not see the option of rerunning the pipeline on CircleCI. Is it possible that I cant do that?", "You can just push an empty commit: `git commit -m \"Trigger CI\" --allow-empty`", "@sgugger Is such a long waiting time for CircleCI report expected?", "Could you try again? Tests still aren't run.", "@sgugger I also added code for Blip2, and the tests now pass.", "Perfect, thanks!" ]
1,680
1,680
1,680
CONTRIBUTOR
null
# What does this PR do? Fixes issue #22561 by moving the labels to the same device as the logits they are compared against in `LlamaForSequenceClassification` and `Blip2`. @sgugger Could you review this?
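A minimal, self-contained sketch of the pattern this PR applies (the tensor shapes and device selection are illustrative; the actual change lives inside the models' loss computation):

```python
import torch

num_labels = 3
# With device_map, the classification head (and thus the logits) can
# land on a later GPU than the inputs; fall back to CPU if only one
# device is available so the sketch still runs.
device = "cuda:1" if torch.cuda.device_count() > 1 else "cpu"
logits = torch.randn(4, num_labels, device=device)
labels = torch.tensor([0, 2, 1, 0])  # typically still on the input device

# Moving the labels to the logits' device avoids the device-mismatch
# error raised by the loss function.
labels = labels.to(logits.device)
loss = torch.nn.CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))
```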
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22596/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22596", "html_url": "https://github.com/huggingface/transformers/pull/22596", "diff_url": "https://github.com/huggingface/transformers/pull/22596.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22596.patch", "merged_at": 1680870236000 }
https://api.github.com/repos/huggingface/transformers/issues/22595
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22595/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22595/comments
https://api.github.com/repos/huggingface/transformers/issues/22595/events
https://github.com/huggingface/transformers/issues/22595
1,656,035,327
I_kwDOCUB6oc5itRf_
22,595
`device_map="auto"` doesn't use all available GPUs when `load_in_8bit=True`
{ "login": "yukw777", "id": 2057325, "node_id": "MDQ6VXNlcjIwNTczMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/2057325?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yukw777", "html_url": "https://github.com/yukw777", "followers_url": "https://api.github.com/users/yukw777/followers", "following_url": "https://api.github.com/users/yukw777/following{/other_user}", "gists_url": "https://api.github.com/users/yukw777/gists{/gist_id}", "starred_url": "https://api.github.com/users/yukw777/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yukw777/subscriptions", "organizations_url": "https://api.github.com/users/yukw777/orgs", "repos_url": "https://api.github.com/users/yukw777/repos", "events_url": "https://api.github.com/users/yukw777/events{/privacy}", "received_events_url": "https://api.github.com/users/yukw777/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "If I specify `max_memory`, the parameters do get distributed according to it.\r\n\r\n```python\r\nmodel = transformers.LlamaForCausalLM.from_pretrained(\r\n \"path/to/converted/llama-65B\",\r\n load_in_8bit=True,\r\n device_map=\"auto\",\r\n max_memory={0: \"10GB\", 1: \"10GB\", 2: \"48GB\", 3: \"48GB\"}\r\n)\r\n```", "cc @younesbelkada ", "You can also specify `device_map=\"balanced\"` to get the parameters evenly dispatched.", "Hmm maybe this is unrelated to `load_in_8bit`, can you try without that and let us know?\r\non the other hand I second what @Xmaster6y said, you can use `balanced` in this case", "balanced and auto are the same thing FYI.", "Yup, I knew `auto` and `balanced` were the same, but tried `balanced` for good measure. Same behavior :/.\r\n\r\nI just verified that without `load_in_8bit`, the parameters are distributed evenly among the GPUs as expected.", "So this is specifically for `load_in_8bit`. I think @younesbelkada made a fix after the last patch. Could you try an install from source?", "I already installed from source.\r\n\r\n```\r\n$ pip freeze | grep transformers\r\ntransformers @ git+https://github.com/huggingface/transformers.git@15641892985b1d77acc74c9065c332cd7c3f7d7f\r\n```", "I can't dig too deeply into this until later, and I don't have more than 2 GPUs to test, but I can say that the actual size calculations and dispatch are all done in [accelerate](https://github.com/huggingface/accelerate), and the calculation changed as little as 3 weeks ago, so make sure you have the latest installed.\r\n\r\nIf that doesn't fix it, and you want to dig into it, I'd recommend just sprinkling some `print`s around the relevant `accelerate` and `transformers` functions, just to get some visibility into what it thinks it's calculating.\r\n\r\nIt'll be somewhere in\r\n`transformers/modeling_utils.py` which calls\r\n`get_balanced_memory` and `infer_auto_device_map` in `accelerate/utils/modeling.py`\r\n", "Thanks @kooshi! It seems like something was fixed since accelerate 0.8. Installing accelerate from source resolved this issue.", "I'm still running into this", "If you want all GPUs to be used I think you should probably use `device_map = \"sequential\"`" ]
1,680
1,701
1,680
NONE
null
### System Info - `transformers` version: 4.28.0.dev0 - Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - Huggingface_hub version: 0.13.3 - PyTorch version (GPU?): 2.0.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @sgugger @kooshi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction On a machine with more than 2 GPUs (I have 4*A40s) ```python model = transformers.LlamaForCausalLM.from_pretrained( "path/to/converted/llama-65B", load_in_8bit=True, device_map="auto" ) ``` You'll see that only the first two GPUs are filled up. Possibly related to https://github.com/huggingface/transformers/pull/22377. ### Expected behavior All 4 GPUs should get parameters.
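One way to confirm how the parameters were actually dispatched in a report like this (a sketch: the checkpoint path is the placeholder from the reproduction, and `load_in_8bit` assumes `bitsandbytes` is installed): models loaded with a `device_map` expose an `hf_device_map` attribute mapping submodule names to devices.

```python
import collections
import transformers

model = transformers.LlamaForCausalLM.from_pretrained(
    "path/to/converted/llama-65B",  # placeholder path from the report
    load_in_8bit=True,
    device_map="auto",
)

# hf_device_map maps each dispatched module to the device it landed on;
# counting entries per device shows whether all GPUs received layers.
print(collections.Counter(model.hf_device_map.values()))
```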
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22595/timeline
completed
null
null