| column | dtype | stats |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 distinct values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | string | 4 distinct values |
| active_lock_reason | string | 2 distinct values |
| body | string | lengths 0-234k |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 distinct values |
| draft | bool | 2 classes |
| pull_request | dict | |
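The records below follow this schema, one field per line in column order. Assuming the dump corresponds to a dataset hosted on the Hugging Face Hub (the repository id below is a placeholder, not taken from this card), it could be loaded and inspected along these lines:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub dataset name.
issues = load_dataset("some-user/transformers-github-issues", split="train")

print(issues.features["state"])   # the string column with 2 distinct values
print(issues[0]["title"])         # e.g. "RHO loss"
print(issues[0]["pull_request"])  # None for plain issues, a dict for PRs
```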
https://api.github.com/repos/huggingface/transformers/issues/17774
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17774/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17774/comments
https://api.github.com/repos/huggingface/transformers/issues/17774/events
https://github.com/huggingface/transformers/issues/17774
1,275,993,209
I_kwDOCUB6oc5MDhx5
17,774
RHO loss
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,658
1,658
CONTRIBUTOR
null
### Feature request https://github.com/oatml/rho-loss Abstract > Training on web-scale data can take months. But most computation and time is wasted on redundant and noisy points that are already learnt or not learnable. To accelerate training, we introduce Reducible Holdout Loss Selection (RHO-LOSS), a simple but principled technique which selects approximately those points for training that most reduce the model's generalization loss. As a result, RHO-LOSS mitigates the weaknesses of existing data selection methods: techniques from the optimization literature typically select 'hard' (e.g. high loss) points, but such points are often noisy (not learnable) or less task-relevant. Conversely, curriculum learning prioritizes 'easy' points, but such points need not be trained on once learned. In contrast, RHO-LOSS selects points that are learnable, worth learning, and not yet learnt. RHO-LOSS trains in far fewer steps than prior art, improves accuracy, and speeds up training on a wide range of datasets, hyperparameters, and architectures (MLPs, CNNs, and BERT). On the large web-scraped image dataset Clothing-1M, RHO-LOSS trains in 18x fewer steps and reaches 2% higher final accuracy than uniform data shuffling. ### Motivation It's always nice to speed up training :) ### Your contribution I am not sure if I am able to add this to transformers by myself, but I would be happy to give it a try with advice on how to design the API
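The selection rule described in the abstract is simple enough to sketch. Below is a minimal, hypothetical PyTorch version (not the `oatml/rho-loss` API): it assumes `model` and `irreducible_model` both map a batch of inputs directly to logits, where the latter is a small model trained on a holdout split, and it keeps the `top_k` points with the highest reducible holdout loss.

```python
import torch
import torch.nn.functional as F

def rho_loss_select(model, irreducible_model, inputs, labels, top_k):
    """Pick the top_k points that are learnable, worth learning, and not yet learnt."""
    with torch.no_grad():
        # Current training loss per point.
        train_loss = F.cross_entropy(model(inputs), labels, reduction="none")
        # Irreducible holdout loss per point, from a model trained on held-out data.
        irreducible_loss = F.cross_entropy(
            irreducible_model(inputs), labels, reduction="none"
        )
    # Reducible holdout loss: high for points the model can still improve on.
    reducible_loss = train_loss - irreducible_loss
    return torch.topk(reducible_loss, k=top_k).indices

# A training step would then backpropagate only on the selected points:
# selected = rho_loss_select(model, irreducible_model, inputs, labels, top_k=32)
# loss = F.cross_entropy(model(inputs[selected]), labels[selected])
```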
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17774/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17774/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17773
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17773/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17773/comments
https://api.github.com/repos/huggingface/transformers/issues/17773/events
https://github.com/huggingface/transformers/issues/17773
1,275,981,010
I_kwDOCUB6oc5MDezS
17,773
Converting a tensor to a Python boolean might cause the trace to be incorrect when converting gpt2 to onnx format
{ "login": "HaoboGu", "id": 8640918, "node_id": "MDQ6VXNlcjg2NDA5MTg=", "avatar_url": "https://avatars.githubusercontent.com/u/8640918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HaoboGu", "html_url": "https://github.com/HaoboGu", "followers_url": "https://api.github.com/users/HaoboGu/followers", "following_url": "https://api.github.com/users/HaoboGu/following{/other_user}", "gists_url": "https://api.github.com/users/HaoboGu/gists{/gist_id}", "starred_url": "https://api.github.com/users/HaoboGu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HaoboGu/subscriptions", "organizations_url": "https://api.github.com/users/HaoboGu/orgs", "repos_url": "https://api.github.com/users/HaoboGu/repos", "events_url": "https://api.github.com/users/HaoboGu/events{/privacy}", "received_events_url": "https://api.github.com/users/HaoboGu/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "@lewtun for ONNX here\r\n\r\n@HaoboGu before we can answer the issue could you please add a complete reproducible code snippet here?", "@patrickvonplaten \r\n\r\nSure\r\n\r\n```python\r\nfrom transformers import GPT2Config\r\nfrom transformers.models.gpt2.modeling_gpt2 import GPT2LMHeadModel\r\nimport torch\r\n\r\n\r\ndef convert_gpt2_model_to_onnx() -> str:\r\n \"\"\"\r\n convert pytorch model to onnx format\r\n \"\"\"\r\n # Load model and set the model to eval mode\r\n model: GPT2LMHeadModel = GPT2LMHeadModel.from_pretrained('sshleifer/tiny-gpt2')\r\n model.eval()\r\n\r\n # batch_size, input_ids_length and past_sequence_length are dynamic axes\r\n # We have to initialize a random input(the value doesn't matter) for the model, because the converting requires execution of the model\r\n # See: https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html\r\n batch_size = 3\r\n input_ids_length = 2\r\n past_sequence_length = 1\r\n config: GPT2Config = model.config\r\n num_attention_heads = config.n_head\r\n hidden_size = config.n_embd\r\n num_layer = config.n_layer\r\n vocab_size = config.vocab_size\r\n config.is_decoder = True\r\n\r\n # past(`past_key_values` in model) is a list, its length is num_layer.\r\n # each element in the list is a tuple(key, value), and key/value's shape is past_shape,\r\n # aka [batch_size, n_heads, past_sequence_length, embd_size_each_head]\r\n past_shape = [batch_size, num_attention_heads,\r\n past_sequence_length, int(hidden_size/num_attention_heads)]\r\n past = [(torch.rand(past_shape, dtype=torch.float32, device='cpu'), torch.rand(past_shape, dtype=torch.float32, device='cpu'))\r\n for _ in range(num_layer)]\r\n\r\n # input_ids is a [batch_length, input_ids_length] tensor\r\n input_ids = torch.randint(\r\n low=0,\r\n high=vocab_size - 1,\r\n size=(batch_size, input_ids_length),\r\n dtype=torch.long,\r\n device='cpu',\r\n )\r\n\r\n # attention_mask is a 0/1 tensor of [batch_size, past_sequence_length + input_ids_length]\r\n attention_mask = torch.ones(\r\n [batch_size, past_sequence_length + input_ids_length]).to(torch.long)\r\n\r\n # token_type_ids is not needed in our case, its size is [batch_size, input_ids_length]\r\n token_type_ids = torch.zeros([batch_size, input_ids_length]).to(torch.long)\r\n\r\n # position_ids, size is [batch_size, input_ids_length]\r\n position_ids = attention_mask.long().cumsum(-1) - 1\r\n position_ids.masked_fill_(position_ids < 0, 0)\r\n position_ids = position_ids[:, past_sequence_length:].to(torch.long)\r\n\r\n # Run the model and get output from the model\r\n output = model(input_ids, past_key_values=past, attention_mask=attention_mask,\r\n position_ids=position_ids, return_dict=True, use_cache=True)\r\n\r\n # Set output names\r\n output_names = ['logits']\r\n for i in range(num_layer):\r\n output_names.append(\"present_key\" + str(i))\r\n output_names.append(\"present_value\" + str(i))\r\n\r\n # Set input_names\r\n input_names = ['input_ids']\r\n for i in range(num_layer):\r\n input_names.append(\"past_key\" + str(i))\r\n input_names.append(\"past_value\" + str(i))\r\n input_names += ['attention_mask', 'position_ids']\r\n\r\n # Set dynamic axes\r\n dynamic_axes = {}\r\n dynamic_axes['input_ids'] = {0: 'batch_size', 1: 'input_ids_length'}\r\n dynamic_axes['attention_mask'] = {0: 'batch_size', 1: 'total_length'}\r\n dynamic_axes['position_ids'] = {0: 'batch_size', 1: 'input_ids_length'}\r\n dynamic_axes['logits'] = {0: 'batch_size', 1: 'input_ids_length'}\r\n for i in range(num_layer):\r\n 
dynamic_axes['past_key' +\r\n str(i)] = {0: 'batch_size', 2: 'past_sequence_length'}\r\n dynamic_axes['past_value' +\r\n str(i)] = {0: 'batch_size', 2: 'past_sequence_length'}\r\n dynamic_axes['present_key' +\r\n str(i)] = {0: 'batch_size', 2: 'total_length'}\r\n dynamic_axes['present_value' +\r\n str(i)] = {0: 'batch_size', 2: 'total_length'}\r\n\r\n # The first input is required, and other inputs can be passed to torch.onnx.export using dict\r\n inputs = (input_ids, {\r\n 'attention_mask': attention_mask,\r\n 'position_ids': position_ids,\r\n 'past_key_values': past,\r\n })\r\n\r\n # Do export using torch.onnx.export\r\n exported_model = \"converted_model.onnx\"\r\n torch.onnx.export(\r\n model,\r\n args=inputs,\r\n f=exported_model,\r\n export_params=True,\r\n verbose=False,\r\n input_names=input_names,\r\n output_names=output_names,\r\n dynamic_axes=dynamic_axes,\r\n opset_version=11,\r\n )\r\n return exported_model\r\n\r\nconvert_gpt2_model_to_onnx()\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@patrickvonplaten any ideas?", "Hi @HaoboGu thanks for sharing the code snippet! \r\n\r\nJust so I understand a bit better - is there a specific reason why you're trying to avoid the warnings? ", "@lewtun I just worry about the warning may lead to unpredictable results when I use the model.", "Hey @HaoboGu, I don't think the warning would lead to unpredictable results (or can you clarify a bit more) :-) Can we maybe just leave it? ", "@patrickvonplaten yeah, no problem if it's expected" ]
1,655
1,658
1,658
NONE
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Darwin-21.5.0-x86_64-i386-64bit - Python version: 3.7.9 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @patil-suraj @patrickvonplaten @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to convert the transformers GPT2LMHeadModel to onnx format using `torch.onnx.export`, and I got the following warning: ``` /Users/Project/.venv/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py:797: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if batch_size <= 0: ``` I tried to print `batch_size` here, and I got `tensor(3)` (which should be `3` here?) ### Expected behavior ```shell No warnings when converting ```
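For context, the warning is generic tracer behaviour rather than anything GPT-2 specific: during tracing, a tensor-to-bool conversion is evaluated once on the example input and the chosen branch is frozen into the graph. A minimal sketch with a hypothetical function (not from the issue):

```python
import torch

def f(x):
    # During tracing, (x.sum() > 0) is a tensor; calling bool() on it emits the
    # TracerWarning and bakes the branch taken for the example input into the trace.
    if x.sum() > 0:
        return x * 2
    return x

traced = torch.jit.trace(f, torch.ones(3))  # warns, records the True branch
print(traced(-torch.ones(3)))               # still multiplied by 2: the trace
                                            # did not generalize to this input
```

For a shape sanity check like `if batch_size <= 0:`, the frozen branch is harmless, which is why the maintainers suggest leaving the warning as is.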
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17773/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17772
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17772/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17772/comments
https://api.github.com/repos/huggingface/transformers/issues/17772/events
https://github.com/huggingface/transformers/pull/17772
1,275,961,706
PR_kwDOCUB6oc455pnc
17,772
[WIP] Adding Omnivore Model to HF
{ "login": "AnugunjNaman", "id": 42839570, "node_id": "MDQ6VXNlcjQyODM5NTcw", "avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnugunjNaman", "html_url": "https://github.com/AnugunjNaman", "followers_url": "https://api.github.com/users/AnugunjNaman/followers", "following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}", "gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions", "organizations_url": "https://api.github.com/users/AnugunjNaman/orgs", "repos_url": "https://api.github.com/users/AnugunjNaman/repos", "events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}", "received_events_url": "https://api.github.com/users/AnugunjNaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@AnugunjNaman thanks for working on this. Couple of pointers from me:\r\n\r\n* I will work on a separate PR for the TF port for a cleaner separation. \r\n* > The uploaded weights at my hub are correct fot SwinT only. After final changes I will port rest of them.\r\n\r\n Shouldn't this be added to the Facebook organization by one of the HF team members? \r\n\r\n* > OmnivoreForVisionClassification\r\n \r\n I think it'd be fair to have a clear separation here for RGB images, RGB-D images, and videos even if the backbone is the same. We could likely add detailed comments / documentation to let the users know that the same backbone is being used but for API consistency we've developed separate classes. WDYT @NielsRogge? ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17772). All of your documentation changes will be reflected on that endpoint.", "> Shouldn't this be added to the Facebook organization by one of the HF team members?\r\n\r\nThis will be done once everything works. I wanted to test so I uploaded a smaller model on my hub.\r\n\r\n> * I will work on a separate PR for the TF port for a cleaner separation.\r\n\r\nI agree it would be good to have different PR.\r\n\r\n> I think it'd be fair to have a clear separation here for RGB images, RGB-D images, and videos even if the backbone is the same. We could likely add detailed comments / documentation to let the users know that the same backbone is being used but for API consistency we've developed separate classes. WDYT @NielsRogge?\r\n\r\nIf we do this not sure how the training part goes, but yeah implementation will be smooth it that case :) \r\n\r\n", "> If we do this not sure how the training part goes, but yeah implementation will be smooth it that case :)\r\n\r\nGood point. In that case, it might make sense to expose the class `OmnivoreForVisionClassification`. There's a trade-off here between API consistency and confusion. Let's see what Niels has to say. ", "Some first comments:\r\n* I like the name `OmnivoreForVisionClassification`.\r\n* Pinging @sgugger and @LysandreJik regarding the id2label question. So for context, Omnivore is a single model that is trained on 3 modalities at the same time (images, video and single-view 3D images). The model just takes tensors of shape `(batch_size, time, num_channels, height, width)` as input and returns logits of shape `(batch_size, num_labels)`. However, the model has 3 different classification heads for the 3 types of data, and one needs to indicate which modality is provided to the model during a forward pass in order for it to know which head to use. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@AnugunjNaman @NielsRogge how is the PR going? I am developing a video classification fine-tuning framework, would love to use this model if it gets merged into main!", "I’m not sure when I will be able to finish it. My current job offer was rescinded so I’m looking for a new one. Probably when that gets sorted out so maybe a month or so. Sorry mate!", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,661
1,661
CONTRIBUTOR
null
This PR adds the `Omnivore Model` to HF. The model uses a single `SwinTransformer` backbone adapted for 3D and classifies `image`, `video` and `RGBD scene` inputs using the same backbone weights with a different head for each classification task. **TODO and Problems**: - [ ] Solve the docstring test for the final classification model. Currently, we have only a single `id2label` and `label2id`. Here we need three pairs of them, one each for image, video and rgbd. I have done this using `<input_type>_id2label` and `<input_type>_label2id`, where `input_type` is one of image, video and rgbd. This breaks the docstring test. - [ ] The above also creates another problem: during final prediction, say for image, `model.config.image_id2label[pred_id]` fails when `pred_id` is an integer; we need to convert it to a string, since when loading pretrained weights the config doesn't convert these mappings into `<int, string>` pairs. - [ ] I'm unsure how to build the feature extractor for this model: how to load the video (I couldn't find something like PIL for this, and several types of transformations are used) and the rgbd part. @NielsRogge Can you please do this part 🙏. Sorry for the trouble. - [ ] Finally, I want a review of the naming of the final classification model. It's not exactly an image classification model only; I have named it OmnivoreForVisionClassification here (it supports three vision modalities). We might need to create a separate task for it, since even the pipeline test is broken if it is kept in the image classification task. Apart from these, @NielsRogge please review and suggest any other changes. 🙂 @sayakpaul, you can start on the TF model from here, although I believe once the above queries and TODOs are done, adding the TF model should be more straightforward. The uploaded weights on my hub are correct for SwinT only. After the final changes I will port the rest of them.
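To illustrate the design being discussed: one shared backbone with a per-modality classification head, selected by an input-type flag at forward time. This is a toy sketch with made-up names (`input_type`, the head dictionary keys), not the PR's actual API:

```python
import torch
from torch import nn

class MultiHeadVisionClassifier(nn.Module):
    """Toy stand-in for a single-backbone, multi-head Omnivore-style model."""

    def __init__(self, backbone: nn.Module, hidden_size: int, num_labels: dict):
        super().__init__()
        self.backbone = backbone
        # One classification head per modality, e.g. image / video / rgbd.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_size, n) for name, n in num_labels.items()}
        )

    def forward(self, pixel_values: torch.Tensor, input_type: str) -> torch.Tensor:
        features = self.backbone(pixel_values)    # (batch, hidden_size)
        return self.heads[input_type](features)   # logits for that modality

backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(64))
model = MultiHeadVisionClassifier(backbone, 64, {"image": 10, "video": 5, "rgbd": 3})
logits = model(torch.randn(2, 3, 8, 8), input_type="image")  # shape (2, 10)
```

Each head would then carry its own `<input_type>_id2label` mapping, which is the config question raised above.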
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17772/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17772", "html_url": "https://github.com/huggingface/transformers/pull/17772", "diff_url": "https://github.com/huggingface/transformers/pull/17772.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17772.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17771
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17771/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17771/comments
https://api.github.com/repos/huggingface/transformers/issues/17771/events
https://github.com/huggingface/transformers/pull/17771
1,275,956,641
PR_kwDOCUB6oc455oqW
17,771
Added OPT to models exportable with ONNX
{ "login": "0xrushi", "id": 6279035, "node_id": "MDQ6VXNlcjYyNzkwMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/6279035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0xrushi", "html_url": "https://github.com/0xrushi", "followers_url": "https://api.github.com/users/0xrushi/followers", "following_url": "https://api.github.com/users/0xrushi/following{/other_user}", "gists_url": "https://api.github.com/users/0xrushi/gists{/gist_id}", "starred_url": "https://api.github.com/users/0xrushi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0xrushi/subscriptions", "organizations_url": "https://api.github.com/users/0xrushi/orgs", "repos_url": "https://api.github.com/users/0xrushi/repos", "events_url": "https://api.github.com/users/0xrushi/events{/privacy}", "received_events_url": "https://api.github.com/users/0xrushi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17771). All of your documentation changes will be reflected on that endpoint.", "Great work!\r\n\r\nIt works for most use cases, but I did discover that `causal-lm-with-past` isn't working.\r\n\r\n`python -m transformers.onnx --model=facebook/opt-350m --feature=causal-lm-with-past onnx/opt-350m/`\r\n\r\nyields\r\n\r\n```\r\n2022-07-04 07:44:09.810891: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\r\nUsing framework PyTorch: 1.11.0+cu113\r\nOverriding 1 configuration item(s)\r\n\t- use_cache -> True\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py\", line 107, in <module>\r\n main()\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py\", line 94, in main\r\n args.output,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py\", line 335, in export\r\n return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py\", line 142, in export_pytorch\r\n model_inputs = config.generate_dummy_inputs(preprocessor, framework=TensorType.PYTORCH)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/opt/configuration_opt.py\", line 213, in generate_dummy_inputs\r\n self.num_attention_heads,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/opt/configuration_opt.py\", line 184, in num_attention_heads\r\n return self._config.n_head\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py\", line 253, in __getattribute__\r\n return super().__getattribute__(key)\r\nAttributeError: 'OPTConfig' object has no attribute 'n_head'\r\n```\r\n\r\nThat's because the naming is a bit different for the OPT config, but it can be fixed simply by replacing:\r\n\r\n```\r\n @property\r\n def num_layers(self) -> int:\r\n return self._config.n_layer\r\n\r\n @property\r\n def num_attention_heads(self) -> int:\r\n return self._config.n_head\r\n```\r\n\r\nwith\r\n\r\n```\r\n @property\r\n def num_layers(self) -> int:\r\n return self._config.num_hidden_layers\r\n\r\n @property\r\n def num_attention_heads(self) -> int:\r\n return self._config.num_attention_heads\r\n```\r\n\r\nin the configuration_opt.py file.\r\n\r\nBut when I tried that, I ran into another problem:\r\n\r\n```\r\npython -m transformers.onnx --model=facebook/opt-350m --feature=causal-lm-with-past onnx/opt-350m/\r\n\r\n\r\n2022-06-17 20:59:09.712912: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\r\nError using standard tokenizer settings, testing specifying use_fast=False\r\nUsing framework PyTorch: 1.11.0+cu113\r\nOverriding 1 configuration item(s)\r\n\t- use_cache -> True\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:513: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if input_shape[-1] > 1:\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:64: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.\r\n mask = torch.full((tgt_len, tgt_len), torch.tensor(float(\"-inf\")))\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:69: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if past_key_values_length > 0:\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:203: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:210: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attention_mask.size() != (bsz, 1, tgt_len, src_len):\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py:242: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py\", line 112, in <module>\r\n main()\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py\", line 99, in main\r\n args.output,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py\", line 335, in export\r\n return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py\", line 198, in export_pytorch\r\n opset_version=opset,\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/onnx/__init__.py\", line 309, in export\r\n export_modules_as_functions)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py\", line 122, in export\r\n custom_opsets=custom_opsets, export_modules_as_functions=export_modules_as_functions)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py\", line 724, in _export\r\n dynamic_axes=dynamic_axes)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py\", line 507, in _model_to_graph\r\n module=module)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py\", line 230, in _optimize_graph\r\n torch._C._jit_pass_onnx_set_dynamic_input_shape(graph, dynamic_axes, input_names)\r\nRuntimeError: Dynamic shape axis should be no more than the shape dimension for past_sequence + sequence\r\n```\r\n\r\nAnd that's where I got stuck, and I don't really know how to solve it...\r\n\r\n\r\nAnother problem, that hopefully will be fixed soon, is that the fast tokenizer doesn't works, and local models tries to use the fast tokenizer when exporting to onnx:\r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\nmodel_name=\"facebook/opt-125m\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)\r\npt_model = AutoModelForCausalLM.from_pretrained(model_name)\r\n\r\ntokenizer.save_pretrained(\"local-pt-checkpoint\")\r\npt_model.save_pretrained(\"local-pt-checkpoint\")\r\n```\r\nthen\r\n`\r\npython -m transformers.onnx --model=local-pt-checkpoint --preprocessor=tokenizer onnx/opt-350m/\r\n`\r\nResults in:\r\n\r\n```\r\n2022-07-04 08:02:21.613420: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py\", line 107, in <module>\r\n main()\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py\", line 64, in main\r\n preprocessor = AutoTokenizer.from_pretrained(args.model)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py\", line 580, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File 
\"/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py\", line 1810, in from_pretrained\r\n **kwargs,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py\", line 1948, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/tokenization_gpt2_fast.py\", line 152, in __init__\r\n \"Currenty GPT2's fast tokenizer does NOT support adding a BOS token.\"\r\nValueError: Currenty GPT2's fast tokenizer does NOT support adding a BOS token.Instead you should use GPT2's slow tokenizer class `GPT2Tokenizer` as follows: \r\n`GPT2Tokenizer.from_pretrained('local-pt-checkpoint')`\r\nor\r\n`AutoTokenizer.from_pretrained('local-pt-checkpoint', use_fast=False)`\r\nThis issue will be fixed soon, see: https://github.com/huggingface/tokenizers/pull/1005. so that the fast tokenizer works correctly.\r\n```\r\n\r\nJust to try it out, I added this to the src/transformers/onnx__main__.py file (row 63):\r\n\r\n```\r\n elif args.preprocessor == \"tokenizer\":\r\n try:\r\n preprocessor = AutoTokenizer.from_pretrained(args.model)\r\n except:\r\n logger.info(f\"Error using standard tokenizer settings, testing specifying use_fast=False\")\r\n preprocessor = AutoTokenizer.from_pretrained(args.model, use_fast=False)\r\n```\r\n\r\nWhich fixed the issue, but it's not something we want there and, I just wanted to see if there was any other issues as well.\r\n\r\n", "@lewtun @rushic24 \r\n\r\nSeems like the problem has to do with OPT handling past_key_values differently overall, I can't even make past key values work with the normal pytorch model.\r\n\r\nI made a notebook to demonstrate, that passing past_key_values works for gpt2, but when doing the exact same thing with opt, it raises an error.\r\n\r\nHere is the notebook:\r\n\r\nhttps://colab.research.google.com/drive/14A-hm-aFzW64ZIxghJDVJaU6a4ZLGCBi?usp=sharing\r\n\r\nHere is the error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n[<ipython-input-17-d5f8e80cb463>](https://localhost:8080/#) in <module>()\r\n 9 print()\r\n 10 print(gpt_inputs[\"input_ids\"])\r\n---> 11 gpt_outputs = model(gpt_inputs.input_ids,return_dict=True,past_key_values=gpt_outputs.past_key_values)\r\n 12 \r\n 13 print(gpt_outputs.logits.shape)\r\n\r\n4 frames\r\n[/usr/local/lib/python3.7/dist-packages/transformers/models/opt/modeling_opt.py](https://localhost:8080/#) in _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length)\r\n 527 expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])\r\n 528 combined_attention_mask = (\r\n--> 529 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask\r\n 530 )\r\n 531 \r\n\r\nRuntimeError: The size of tensor a (5) must match the size of tensor b (10) at non-singleton dimension 3\r\n\r\n```\r\nIf anyone has any ideas on how to fix it, that would be appreciated. 
I think I will be able to look into this more deeply next week.", "I had a deeper look into the past_key_values error, but am really struggling to understand code in the modeling_opt.py file.\r\n\r\nThe function that raises the error looks like this:\r\n\r\n```\r\n # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask\r\n def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):\r\n # create causal mask\r\n # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]\r\n combined_attention_mask = None\r\n if input_shape[-1] > 1:\r\n combined_attention_mask = _make_causal_mask(\r\n input_shape, inputs_embeds.dtype, past_key_values_length=past_key_values_length\r\n ).to(inputs_embeds.device)\r\n\r\n if attention_mask is not None:\r\n # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]\r\n expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])\r\n combined_attention_mask = (\r\n expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask\r\n )\r\n\r\n return combined_attention_mask\r\n```\r\n\r\nAn what causes the error is this line:\r\n```\r\n combined_attention_mask = (\r\n expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask\r\n )\r\n```\r\n\r\nSince expanded_attn_mask and combined_attention_mask are different shapes, and can't be added together.\r\n\r\nI tried printing both out before the error is rasied, and expanded_attn_mask looks like this:\r\n\r\n```\r\ntensor([[[[0., 0., 0., 0., 0.],\r\n [0., 0., 0., 0., 0.],\r\n [0., 0., 0., 0., 0.],\r\n [0., 0., 0., 0., 0.],\r\n [0., 0., 0., 0., 0.]]]])\r\n```\r\n\r\nand combined_attention_mask looks like this:\r\n\r\n```\r\ntensor([[[[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,\r\n 0.0000e+00, -3.4028e+38, -3.4028e+38, -3.4028e+38, -3.4028e+38],\r\n [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,\r\n 0.0000e+00, 0.0000e+00, -3.4028e+38, -3.4028e+38, -3.4028e+38],\r\n [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,\r\n 0.0000e+00, 0.0000e+00, 0.0000e+00, -3.4028e+38, -3.4028e+38],\r\n [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,\r\n 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, -3.4028e+38],\r\n [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,\r\n 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]]]])\r\n```\r\n\r\nI got stuck here, not really sure what's even going on...", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Gently pinging @younesbelkada who worked on the OPT port and may be able to shed some insight on why the generations don't work with past key-value pairs", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @ViktorThink @lewtun sorry for the delay, taking a look right now! \r\n\r\nPinging also @ArthurZucker here", "Hey all! 
Thanks for your awesome work! \r\n\r\nRegarding the `past_key_value`, given that the integration tests worked properly, I am not sure why it doesn't work but will have a look to see what is wrong. \r\n", "Hey @rushic24 @ViktorThink I believe several fixes have landed for the OPT models, so would you like to revisit this PR now? In particular, can you test if the `causal-lm-with-past` feature is now working as expected? \r\n\r\nIf you want a fast way to test this, you can use `optimum` as follows (pointing to a local `model.onnx` file instead of a Hub checkpoint): https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForCausalLM.forward.example", "@lewtun I did a very simple test with past key values in this colab: https://colab.research.google.com/drive/1UO7uioZZs2Gu6nroQRgPilSpYQhNLiSZ?usp=sharing\r\n\r\nI didn't try to export the model, but since past doesn't seem work in pytorch format, exporting it to Onnx shouldn't be possible.\r\n\r\nIt works for GPT models, but not for OPT. The link you sent leads to a 404 error.", "> @lewtun I did a very simple test with past key values in this colab: https://colab.research.google.com/drive/1UO7uioZZs2Gu6nroQRgPilSpYQhNLiSZ?usp=sharing\r\n> \r\n> I didn't try to export the model, but since past doesn't seem work in pytorch format, exporting it to Onnx shouldn't be possible.\r\n> \r\n> It works for GPT models, but not for OPT. The link you sent leads to a 404 error.\r\n\r\nThanks for sharing a reproducible example @ViktorThink - this indeed looks like a bug! Pinging @younesbelkada to have a look since he implemented this model :)", "I'll have a look when I can! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey, there doesn't seem to be a bug with OPT's `past_key_value` scheme as the following works : \r\n```python \r\nimport torch\r\nfrom transformers import AutoTokenizer, OPTModel, set_seed\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/opt-1.3b\")\r\nmodel = OPTModel.from_pretrained(\"facebook/opt-1.3b\")\r\n\r\ninputs = tokenizer(\"No I'm not missing the \", return_tensors=\"pt\")\r\n\r\ninput_ids = inputs.input_ids\r\nattention_mask = inputs.attention_mask\r\n\r\nwith torch.no_grad():\r\n model.config.use_cache = False\r\n set_seed(0)\r\n output = model(input_ids, attention_mask = attention_mask, use_cache =False)\r\n print(output.last_hidden_state[:,-1,:])\r\n\r\n model.config.use_cache = True\r\n output_1 = model(input_ids[:,:-1], use_cache = True, attention_mask = attention_mask[:,:-1])\r\n pkv = output_1.past_key_values\r\n output_2 = model(input_ids[:,-1:], past_key_values = pkv , attention_mask = attention_mask, use_cache = True)\r\n print(output_2.last_hidden_state[:,-1,:])\r\n torch.testing.assert_allclose(output.logits[:,-1,], output_2.logits[:,-1,:], rtol = 1e-4, atol = 1e-4)\r\n```\r\nThis is the expected format as inside the `generate` function, we are passing only the last inputs, while the full attention mask was already created.\r\nThe only issue I see here is consistency : the behaviours are different for `gpt2` and `opt`. 
😅 \r\n\r\nMoreover, the default attention mask created when the `past_key_values` are given also seem wrong : \r\n\r\n```python \r\noutput_2 = model(input_ids[:,-1:], past_key_values = pkv , use_cache = True)\r\n...\r\nValueError: Attention mask should be of size (1, 1, 0, 7), but is torch.Size([1, 1, 1, 1])\r\n``` \r\nShould probably work, as the length of the previous sequence is given. \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,675
1,675
CONTRIBUTOR
null
# What does this PR do? ```python # !python setup.py install ``` ```python # !pip install -e ".[dev]" ``` ```python # pip install onnxruntime ``` ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float32) # the fast tokenizer currently does not work correctly tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False) prompt = "Hello, I'm am conscious and" input_ids = tokenizer(prompt, return_tensors="pt").input_ids generated_ids = model.generate(input_ids) tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ``` ["Hello, I'm am conscious and I'm a bit of a noob. I'm looking"] ```python ``` ```python !python -m transformers.onnx --model=facebook/opt-350m onnx/opt-350m/ ``` Using framework PyTorch: 1.11.0+cu102 Overriding 1 configuration item(s) - use_cache -> False /root/transformers/src/transformers/models/opt/modeling_opt.py:513: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if input_shape[-1] > 1: /root/transformers/src/transformers/models/opt/modeling_opt.py:64: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect. mask = torch.full((tgt_len, tgt_len), torch.tensor(float("-inf"))) /root/transformers/src/transformers/models/opt/modeling_opt.py:203: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /root/transformers/src/transformers/models/opt/modeling_opt.py:210: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /root/transformers/src/transformers/models/opt/modeling_opt.py:242: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): Validating ONNX model... 
-[✓] ONNX model output names match reference model ({'last_hidden_state'}) - Validating ONNX Model output "last_hidden_state": -[✓] (2, 8, 512) matches (2, 8, 512) -[x] values not close enough (atol: 1e-05) Traceback (most recent call last): File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/root/transformers/src/transformers/onnx/__main__.py", line 107, in <module> main() File "/root/transformers/src/transformers/onnx/__main__.py", line 100, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/root/transformers/src/transformers/onnx/convert.py", line 441, in validate_model_outputs "Outputs values doesn't match between reference model and ONNX exported model: " ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 4.57763671875e-05 ```python from transformers.models.opt import OPTConfig, OPTOnnxConfig config = OPTConfig() onnx_config = OPTOnnxConfig(config) output_keys = list(onnx_config.outputs.keys()) print(output_keys) ``` ['last_hidden_state'] ```python from onnxruntime import InferenceSession ``` ```python import onnxruntime as ort from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") ort_session = ort.InferenceSession("onnx/opt-350m/model.onnx") inputs = tokenizer("Using OPT in ONNX!", return_tensors="np") outputs = ort_session.run(["last_hidden_state"], dict(inputs)) print(outputs) ``` [array([[[ -1.3539658 , -0.25787818, -0.3093884 , ..., -1.311745 , 0.26136506, -1.4270447 ], [ -0.51148593, -5.1948047 , 3.1015701 , ..., 1.9010596 , -2.0694203 , 0.96382034], [ -1.4861462 , -4.3613157 , -2.8032331 , ..., -0.65176994, -6.0503354 , -0.08128738], ..., [ -2.290329 , -9.395232 , 3.9363523 , ..., -0.5923378 , -3.7993686 , 0.13608676], [ -4.7750826 , -12.562761 , 1.9932727 , ..., -4.361832 , -2.3446696 , 1.2666583 ], [ -3.7153127 , -6.4608436 , -3.683312 , ..., -2.824885 , -0.75467056, -1.9532645 ]]], dtype=float32)] ![image](https://user-images.githubusercontent.com/6279035/174467137-4c0f151e-16b3-436b-b412-2a39b7f361af.png) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? 
Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
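The validation failure above reports a max absolute difference of about 4.6e-05 against the default atol of 1e-05. A quick way to reproduce that comparison by hand, assuming the `onnx/opt-350m/model.onnx` file produced by the export command above:

```python
import numpy as np
import onnxruntime as ort
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
model = AutoModel.from_pretrained("facebook/opt-350m").eval()

inputs = tokenizer("Using OPT in ONNX!", return_tensors="pt")
with torch.no_grad():
    reference = model(**inputs).last_hidden_state.numpy()

session = ort.InferenceSession("onnx/opt-350m/model.onnx")
onnx_out = session.run(
    ["last_hidden_state"], {k: v.numpy() for k, v in inputs.items()}
)[0]

print(np.abs(reference - onnx_out).max())           # ~4.6e-05 in the run above
print(np.allclose(reference, onnx_out, atol=1e-4))  # passes at a looser tolerance
```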
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17771/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17771", "html_url": "https://github.com/huggingface/transformers/pull/17771", "diff_url": "https://github.com/huggingface/transformers/pull/17771.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17771.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17770
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17770/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17770/comments
https://api.github.com/repos/huggingface/transformers/issues/17770/events
https://github.com/huggingface/transformers/pull/17770
1,275,880,597
PR_kwDOCUB6oc455aGJ
17,770
Flax implementation of DPT
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? I tried to implement DPT (Dense Prediction with Transformers) in Flax during my free time! 🚀 By the way, it is the first Segmentation and Depth Estimation model implemented in Flax! Nits/TODOs: - [x] Figure out how to properly call `BatchNorm` and `Dropout` inside a `Sequential` - [ ] Run equivalency tests - [ ] Write documentation - For now they're just copy/pasted Questions: - Why is the loss not implemented in `modeling_dpt.py`? I can probably help with that since I have already implemented the loss for a university project: https://github.com/antocad/FocusOnDepth/blob/master/FOD/Loss.py cc @NielsRogge @sanchit-gandhi @patil-suraj
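On the first TODO: `flax.linen.Sequential` only threads activations through the layers, so there is no way to pass `use_running_average` to `BatchNorm` or `deterministic` to `Dropout` from the outside. One common workaround, shown here as a hedged sketch (a hypothetical module, not the PR's code), is to call the stateful layers explicitly in an `@nn.compact` module:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class ConvBnDropout(nn.Module):
    features: int

    @nn.compact
    def __call__(self, x, *, train: bool):
        x = nn.Conv(self.features, kernel_size=(3, 3))(x)
        # Called explicitly so the train/eval flags can be forwarded,
        # which nn.Sequential cannot do.
        x = nn.BatchNorm(use_running_average=not train)(x)
        x = nn.Dropout(rate=0.1, deterministic=not train)(x)
        return x

module = ConvBnDropout(features=8)
x = jnp.ones((1, 16, 16, 3))
variables = module.init(jax.random.PRNGKey(0), x, train=False)
y = module.apply(variables, x, train=False)  # eval mode: no dropout RNG needed
```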
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17770/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17770", "html_url": "https://github.com/huggingface/transformers/pull/17770", "diff_url": "https://github.com/huggingface/transformers/pull/17770.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17770.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17769
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17769/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17769/comments
https://api.github.com/repos/huggingface/transformers/issues/17769/events
https://github.com/huggingface/transformers/pull/17769
1,275,775,493
PR_kwDOCUB6oc455Ga_
17,769
Improve error message Union not allowed
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
COLLABORATOR
null
I am working with a lot of custom Dataclasses inside the `HfArgumentParser`, and while my Python code was technically correct (using `Union`), I did get the brief error message _Only `Union[X, NoneType]` (i.e., `Optional[X]`) is allowed for `Union`_. From the error message it was not clear what triggered it, and it took me a while to figure out why I could not use Union. My assumption now is that we cannot use Union due to limitations of `argparse` (I am not sure yet how I can allow for floats and ints though). This PR simply clarifies the error message a bit and gives the field name of the offending item. Minimal test case: ```python from typing import Union from dataclasses import dataclass, field from transformers import HfArgumentParser @dataclass class OtherArguments: validation_size: Union[float, int] = field( default=0.2 # might be a float for percentage of training set or int for absolute split ) if __name__ == '__main__': dataclass_tester = OtherArguments(0.5) parser = HfArgumentParser((OtherArguments, )) oargs = parser.parse_args_into_dataclasses() ``` EDIT: my current solution to allow for floats and ints is below, but the PR is still useful in itself I think. ```python from argparse import ArgumentTypeError from typing import Union from dataclasses import dataclass, field from transformers import HfArgumentParser def float_or_int(arg: str): # I am aware that this is very naive (e.g. scientific notations), but it works for my purposes. # Other suggestions welcome though likely_float = "." in arg try: arg = float(arg) except ValueError: raise ArgumentTypeError(f"{arg} is not a float-able input") if not likely_float: arg = int(arg) return arg @dataclass class OtherArguments: validation_size: float_or_int = field( default=0.2, metadata={"help": "If a validation set is not present in your dataset, it will be created automatically from" " the training set. You can set the ratio train/valid here (float) or an exact number of" " samples that you wish to include in the validation set (int)."} ) if __name__ == '__main__': parser = HfArgumentParser((OtherArguments, )) oargs = parser.parse_args_into_dataclasses()[0] print(oargs.validation_size) print(type(oargs.validation_size)) ``` @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17769/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17769", "html_url": "https://github.com/huggingface/transformers/pull/17769", "diff_url": "https://github.com/huggingface/transformers/pull/17769.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17769.patch", "merged_at": 1655836021000 }
https://api.github.com/repos/huggingface/transformers/issues/17768
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17768/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17768/comments
https://api.github.com/repos/huggingface/transformers/issues/17768/events
https://github.com/huggingface/transformers/pull/17768
1,275,692,636
PR_kwDOCUB6oc45411e
17,768
Translation italian: multilingual.mdx
{ "login": "nickprock", "id": 11136646, "node_id": "MDQ6VXNlcjExMTM2NjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11136646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickprock", "html_url": "https://github.com/nickprock", "followers_url": "https://api.github.com/users/nickprock/followers", "following_url": "https://api.github.com/users/nickprock/following{/other_user}", "gists_url": "https://api.github.com/users/nickprock/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickprock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickprock/subscriptions", "organizations_url": "https://api.github.com/users/nickprock/orgs", "repos_url": "https://api.github.com/users/nickprock/repos", "events_url": "https://api.github.com/users/nickprock/events{/privacy}", "received_events_url": "https://api.github.com/users/nickprock/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @nickprock! I just had a chance to have a look at your PR, can I ask you if you could fix a couple of things?\r\n- \"Tuttavia non tutti gli usi dei modelli multilingua sono diversi\" this sentence is a bit difficult to understand written like this, I would try something like: \"Non tutti gli utilizzi dei modelli multilingue sono però diversi\"\r\n- I see that sometimes you use 'multilingua' and sometimes 'multilingue' maybe I would standardise it with 'multilingue', what do you think?\r\n- \"non le utilizzano\" -> \"non li utilizzano\"\r\n- \"del input_ids\" -> \"dell'input_ids\"\r\n- \"perchè\" -> \"perché\"\r\n- \"Questo tensorre dovrebbe\" -> \"Questo tensore dovrebbe\"\r\n- \"identificare ul linguaggio\" -> \"identificare il linguaggio\"\r\n- I would translate also these parts: (Many-to-many multilingual machine translation, 50 languages), (Many-to-one multilingual machine translation, 50 languages), ...\r\n- \"Applica il tokenizer sul testo\" -> \"Applica il tokenizer al testo\"\r\n- \"MBart fforza\" -> \"MBart forza\"\r\n- \"nel target language\" -> \"nella lingua target\" or \"nella lingua obiettivo\"\r\nThanks!! 🎉", "Thanks @mfumanelli I will try ti fix it tomorrow.", "Hi @mfumanelli, how would you translate \"Masked Language Modeling\"? I would leave it in English, \"Modello di linguaggio mascherato\" doesn't sounds good for me.\r\nThanks", "Yes @nickprock, maybe you can write only the first time you mention it \"Modello di linguaggio mascherato (Masked Language Model, in inglese)\", and from then on call it by its English name. What do you think?", "I agree", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I fixed and I'm waiting for the review", "cc @omarespejel " ]
1,655
1,658
1,658
CONTRIBUTOR
null
# What does this PR do?

* added multilingual.mdx
* updated _toctree.yml

See issue: [#17459](https://github.com/huggingface/transformers/issues/17459)

## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@omarespejel @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17768/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17768", "html_url": "https://github.com/huggingface/transformers/pull/17768", "diff_url": "https://github.com/huggingface/transformers/pull/17768.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17768.patch", "merged_at": 1658164148000 }
https://api.github.com/repos/huggingface/transformers/issues/17767
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17767/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17767/comments
https://api.github.com/repos/huggingface/transformers/issues/17767/events
https://github.com/huggingface/transformers/issues/17767
1,275,668,726
I_kwDOCUB6oc5MCSj2
17,767
Snacky Brain Bites for HF Transformers
{ "login": "zolekode", "id": 25635679, "node_id": "MDQ6VXNlcjI1NjM1Njc5", "avatar_url": "https://avatars.githubusercontent.com/u/25635679?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zolekode", "html_url": "https://github.com/zolekode", "followers_url": "https://api.github.com/users/zolekode/followers", "following_url": "https://api.github.com/users/zolekode/following{/other_user}", "gists_url": "https://api.github.com/users/zolekode/gists{/gist_id}", "starred_url": "https://api.github.com/users/zolekode/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zolekode/subscriptions", "organizations_url": "https://api.github.com/users/zolekode/orgs", "repos_url": "https://api.github.com/users/zolekode/repos", "events_url": "https://api.github.com/users/zolekode/events{/privacy}", "received_events_url": "https://api.github.com/users/zolekode/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,658
1,658
CONTRIBUTOR
null
### Feature request

Nugget Republic has these very cool nuggets based on the HF course. I would like to submit a PR to integrate these nuggets into either the Readme or into the official documentation if allowed. Can anyone point me to the right place or person?

Here are a few examples of said nuggets (4 / 11 nuggets):

https://app.flexudy.com/story?mId=W7YKXTS0&dId=FMCSLFZH
https://app.flexudy.com/story?mId=W7YKXTS0&dId=X1PL9FZG
https://app.flexudy.com/story?mId=W7YKXTS0&dId=AFG8XX5S
https://app.flexudy.com/story?mId=W7YKXTS0&dId=UAYHMCSL

### Motivation

Simple. To make HF Transformers even more accessible to the general public.

### Your contribution

Sure thing!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17767/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17767/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17766
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17766/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17766/comments
https://api.github.com/repos/huggingface/transformers/issues/17766/events
https://github.com/huggingface/transformers/issues/17766
1,275,640,249
I_kwDOCUB6oc5MCLm5
17,766
How to checkpoint TFAutoModelForSequenceClassification every k batches
{ "login": "preethiseshadri518", "id": 60128552, "node_id": "MDQ6VXNlcjYwMTI4NTUy", "avatar_url": "https://avatars.githubusercontent.com/u/60128552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/preethiseshadri518", "html_url": "https://github.com/preethiseshadri518", "followers_url": "https://api.github.com/users/preethiseshadri518/followers", "following_url": "https://api.github.com/users/preethiseshadri518/following{/other_user}", "gists_url": "https://api.github.com/users/preethiseshadri518/gists{/gist_id}", "starred_url": "https://api.github.com/users/preethiseshadri518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/preethiseshadri518/subscriptions", "organizations_url": "https://api.github.com/users/preethiseshadri518/orgs", "repos_url": "https://api.github.com/users/preethiseshadri518/repos", "events_url": "https://api.github.com/users/preethiseshadri518/events{/privacy}", "received_events_url": "https://api.github.com/users/preethiseshadri518/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @preethiseshadri518 👋 We have an outstanding set of issues related to the `SavedModel` format, resulting in the errors you see. There are ways to work around it, by manually specifying portions of the graph at save time -- you can find them if you search in closed issues here :)\r\n\r\nA much simpler workaround, if it suits your problem, is to store/load the weights with [`save_weights`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#save_weights) and [`load_weights`](https://www.tensorflow.org/api_docs/python/tf/keras/Model#load_weights), which is what our API uses. Let us know if it sorted your problem!", "Hi Joao! Yes, this solves the issue. If I use `save_weights` and save them in the h5 format, then I can use `TFAutoModelForSequenceClassification.from_pretrained()`. Thanks for your comment!" ]
1,655
1,655
1,655
NONE
null
I am following the Huggingface Tensorflow Notebook to [train a BERT model on MNLI](https://github.com/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb) (the task itself is irrelevant). I am interested in checkpointing models locally every k batches, so that I end with 20 or 30 intermediate checkpoints over the course of training.

The notebook uses model.fit() to train the model, but my understanding is that using a [callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint) to save intermediate checkpoints saves a SavedModel file. I am currently not saving intermediate checkpoints, only the final model, and use model.save_pretrained(ckpt). However, I now want the model to be checkpointed as a part of training.

If I try loading a checkpoint directory in the SavedModel format using `model = tf.keras.models.load_model()`, the expected input format is not compatible with how I feed in inputs for `model = TFAutoModelForSequenceClassification.from_pretrained()`. I get the following message (32 is batch size, I think 109 is the input token length for the longest example in the batch).

<img width="686" alt="Screen Shot 2022-06-17 at 7 20 14 PM" src="https://user-images.githubusercontent.com/60128552/174418985-4331fca5-4b1b-4486-90f2-4f98b98a35d1.png">

My question is how to checkpoint a `TFAutoModelForSequenceClassification` model with a specified frequency (every k batches). Either I have to change how I am feeding the input data into `model = tf.keras.models.load_model()` or change how I am saving the intermediate checkpoints, but I am having trouble figuring out what to do and how to proceed.

Note: I am not interested in pushing models to the hub, only saving models locally.
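Following the resolution in the comments above (save weights only, not a SavedModel), here is a minimal untested sketch of what that looks like; the directory name, checkpoint names, and `k` are placeholders:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)

k = 500  # placeholder: save every k batches
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/ckpt-epoch{epoch:02d}.h5",
    save_weights_only=True,  # h5 weights sidestep the SavedModel input-signature mismatch
    save_freq=k,             # an integer save_freq counts batches, not epochs
)
# Note: with batch-frequency saves, successive saves within an epoch overwrite
# each other under this naming; a custom callback can add a step counter.

# model.fit(tf_train_dataset, epochs=3, callbacks=[checkpoint_cb])

# Restoring: rebuild the architecture, then load a weights file.
restored = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
restored.load_weights("checkpoints/ckpt-epoch01.h5")
```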
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17766/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17765
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17765/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17765/comments
https://api.github.com/repos/huggingface/transformers/issues/17765/events
https://github.com/huggingface/transformers/pull/17765
1,275,612,739
PR_kwDOCUB6oc454mvZ
17,765
Enable torchdynamo with torch_tensorrt(fx path)
{ "login": "frank-wei", "id": 6955737, "node_id": "MDQ6VXNlcjY5NTU3Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/6955737?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frank-wei", "html_url": "https://github.com/frank-wei", "followers_url": "https://api.github.com/users/frank-wei/followers", "following_url": "https://api.github.com/users/frank-wei/following{/other_user}", "gists_url": "https://api.github.com/users/frank-wei/gists{/gist_id}", "starred_url": "https://api.github.com/users/frank-wei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frank-wei/subscriptions", "organizations_url": "https://api.github.com/users/frank-wei/orgs", "repos_url": "https://api.github.com/users/frank-wei/repos", "events_url": "https://api.github.com/users/frank-wei/events{/privacy}", "received_events_url": "https://api.github.com/users/frank-wei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi, @stas00 just a friendly ping. I updated the installation part and it will be easy to repro if needed.", "So I followed your instructions except I used the .deb package installer.\r\n\r\n(oh and please link to https://docs.nvidia.com/deeplearning/tensorrt/archives/index.html so that the user will know how to install tensorrt)\r\n\r\nwhy do I get:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/optimizations/backends.py\", line 45, in inner\r\n return fn(model, **kwargs)\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/optimizations/backends.py\", line 313, in fx2trt\r\n from torch_tensorrt.fx.fx2trt import InputTensorSpec\r\nModuleNotFoundError: No module named 'torch_tensorrt'\r\n```\r\n\r\nIs it the module from `tensorrt-8.2.5.1-cp38-none-linux_x86_64.whl` \r\n\r\noh I see it failed to build:\r\n\r\n```\r\n/hf/00pytorch/TensorRT/py [pytorch/TensorRT|master]> python setup.py install --fx-only\r\nCould not find bazel in PATH\r\n```\r\n\r\ninstalled `bazel` and it still fails:\r\n\r\n```\r\npip install bazel\r\nCollecting bazel\r\n Downloading bazel-0.0.0.20200723.tar.gz (1.4 kB)\r\nBuilding wheels for collected packages: bazel\r\n Building wheel for bazel (setup.py) ... done\r\n Created wheel for bazel: filename=bazel-0.0.0.20200723-py3-none-any.whl size=1708 sha256=518429e9ce158eb7e4ffc2cefa782eb7935d39d317d67801c5ae9b7346af0500\r\n Stored in directory: /home/stas/.cache/pip/wheels/9b/80/e4/8d16b3eeeda264ac8105dd7fa29a124431113b2f1f5dd703bc\r\nSuccessfully built bazel\r\nInstalling collected packages: bazel\r\nSuccessfully installed bazel-0.0.0.20200723\r\n(py38-pt112) /hf/00pytorch/TensorRT/py [pytorch/TensorRT|master]> python setup.py install --fx-only\r\nCould not find bazel in PATH\r\n```\r\nso it's not a python package that it wants but a system-wide `bazel`? there is no apt package - probably need to add a new apt repo? this doc appears to be really outdated https://docs.bazel.build/versions/main/install-ubuntu.html\r\n\r\nIn any case this obviously requires explicit instructions.\r\n\r\nI will wait for your instructions before proceeding.\r\n", "Thanks for your time and efforts @stas00 !\r\n1. Yes, the TRT seems that bring the new user some troubles when they try their first time to install. I just found a way to install python version of TRT so you do not need to download TRT tarball and unzip the stuffs (this python installation will install all the dependent libs like tensorRT lib and cuDNN lib). I added this instructions to our doc as a PR. https://github.com/pytorch/TensorRT/pull/1145/\r\n```\r\n $ pip3 install nvidia-pyindex\r\n $ pip3 install nvidia-tensorrt==8.2.4.2\r\n```\r\n2. I am having a PR to **disable** the bazel check https://github.com/pytorch/TensorRT/pull/1147. (merged)\r\nBut that is a bit weird for bazel installation. I am on centOS and conda envrioment. Here is command `conda install -c conda-forge bazel`. It looks like your bazel installation location is not added to $PATH but `which bazel` can help check. Now, with my diff [1147](https://github.com/pytorch/TensorRT/pull/1147) (merged), we should not need bazel. 
\r\nNow below instruction is the complete instruction about install TRT, pytorch, torch_tensorrt.fx which I just verified work.\r\n```\r\n $ conda create --name python_env python=3.8\r\n $ conda activate python_env\r\n # Recommend to install PyTorch 1.12 and later\r\n $ conda install pytorch torchvision torchtext cudatoolkit=11.3 -c pytorch-nightly\r\n # Install TensorRT python package\r\n $ pip3 install nvidia-pyindex\r\n $ pip3 install nvidia-tensorrt==8.2.4.2\r\n $ git clone https://github.com/pytorch/TensorRT.git\r\n $ cd TensorRT/py && python setup.py install --fx-only && cd ..\r\n # check torch_tensorrt.fx is installed\r\n $ python -c \"import torch_tensorrt.fx\"\r\n ```\r\n Hope it solves your problem. ", "`conda install -c conda-forge bazel` did the trick, The same with pip was giving nothing with `which bazel` - not a PATH issue, but a package issue I think, but probably related\r\n\r\n---------------\r\n\r\n```\r\n $ pip3 install nvidia-pyindex\r\n $ pip3 install nvidia-tensorrt==8.2.4.2\r\n```\r\n\r\nThat did the trick. The tests have run successfully.\r\n\r\nso let's update the OP with the above 2 fixes.", "ah, one more user-facing documentation nit - if you want users to use your magic code you will want to provide some enticement. A small benchmark table that shows what these features do usually goes a long way to get a user excited to try them. So this is something else to consider. It's not a show stopper, but as you can see if the docs aren't added right away they never get added, so it's best to do it in one go. It's still a recommendation and I'm fine merging it as is, it's just not going to be used much w/o enticing docs.\r\n", "> ah, one more user-facing documentation nit - if you want users to use your magic code you will want to provide some enticement. A small benchmark table that shows what these features do usually goes a long way to get a user excited to try them. So this is something else to consider. It's not a show stopper, but as you can see if the docs aren't added right away they never get added, so it's best to do it in one go. It's still a recommendation and I'm fine merging it as is, it's just not going to be used much w/o enticing docs.\r\n\r\nI will try to add the doc there. But it is better to have @anijain2305 to include the AOT part.:-)", "> I will try to add the doc there. But it is better to have @anijain2305 to include the AOT part.:-)\r\n\r\nYeah, I was hoping that you'd only need to add the incremental part relevant for this PR.", "re: CI - yes and it's complicated\r\n\r\nbasically the live CI that you see reporting in this PR runs only CPU tests since CircleCI doesn't have gpus.\r\n\r\nthen we have another set of CI workflows that runs on our machine via github actions and that's where we test all the complex/slow cases.\r\n\r\nAnd yes, I completely forgot that part of this PR we need to setup our CI to install all these packages as well so that these tests will be run.\r\n\r\nSo once we polished this let's not forget that part. We will have to run all those instructions on our pt-nightly docker image - but actually there is a problem with this idea - how will the docker builder be able to download tensorRT packages if they require an NVIDIA user account?", "re: CI\r\nActually, circleCI has gpu resource to use(V100, T4, P4). 
I just added to our project :-) https://github.com/pytorch/TensorRT/pull/1137\r\nThese 2 commands are our saver\r\n```\r\n $ pip3 install nvidia-pyindex\r\n $ pip3 install nvidia-tensorrt==8.2.4.2\r\n```\r\nDo you think we need to have @require_torchtensorrt.fx ? So it will help us to check if torch_tensorrt.fx is installed in the test?", "\r\n\r\n\r\n\r\n\r\n> Actually, circleCI has gpu resource to use(V100, T4, P4). I just added to our project :-) [pytorch/TensorRT#1137](https://github.com/pytorch/TensorRT/pull/1137) \r\n\r\nThat's great to know - thank you very much - I will pass this info on\r\n\r\n> These 2 commands are our saver\r\n> \r\n> ```\r\n> $ pip3 install nvidia-pyindex\r\n> $ pip3 install nvidia-tensorrt==8.2.4.2\r\n> ```\r\n\r\nAh, right! so no need for nvidia user account! super - let's use that in the instructions then.\r\n\r\n> Do you think we need to have @require_torchtensorrt.fx ? So it will help us to check if torch_tensorrt.fx is installed in the test?\r\n\r\nAbsolutely, yes!", "@stas00 , just wondering if the circleci is flaky? Some tests errors are not related. For ex.\r\nrun_example_torch, check_code_quanlity", "It appears that the CI is very broken at the moment, I asked and will know more tomorrow morning.\r\n\r\nThank you for the heads up, @frank-wei - it doesn't look like any of the failures are related to your work. Especially since the live CI won't run any of your tests.", "ok, so for the quality one - please rebase this PR on main. Thank you.\r\n\r\nThe other issue I don't have an answer for yet.\r\n\r\n**update: I rebased - let's see with the update.**", "ok, so to fix `check_code_quality` you need to run `make style` and push\r\n\r\nafter rebasing most of the CI failures are now coming from this PR:\r\n\r\n```\r\n\r\n==================================== ERRORS ====================================\r\n______________ ERROR collecting tests/deepspeed/test_deepspeed.py ______________\r\nImportError while importing test module '/home/circleci/transformers/tests/deepspeed/test_deepspeed.py'.\r\nHint: make sure your test modules/packages have valid Python names.\r\nTraceback:\r\n/usr/local/lib/python3.7/importlib/__init__.py:127: in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\ntests/deepspeed/test_deepspeed.py:26: in <module>\r\n from tests.trainer.test_trainer import TrainerIntegrationCommon # noqa\r\ntests/trainer/test_trainer.py:586: in <module>\r\n class TrainerIntegrationTest(TestCasePlus, TrainerIntegrationCommon):\r\ntests/trainer/test_trainer.py:1803: in TrainerIntegrationTest\r\n @require_torch_tensorrt_fx\r\nsrc/transformers/testing_utils.py:499: in require_torch_tensorrt_fx\r\n return unittest.skipUnless(is_torch_tensorrt_fx_available(), \"test requires Torch-TensorRT FX\")(test_case)\r\nsrc/transformers/utils/import_utils.py:421: in is_torch_tensorrt_fx_available\r\n return importlib.util.find_spec(\"torch_tensorrt.fx\") is not None\r\n/usr/local/lib/python3.7/importlib/util.py:94: in find_spec\r\n parent = __import__(parent_name, fromlist=['__path__'])\r\nE ModuleNotFoundError: No module named 'torch_tensorrt'\r\n```\r\n\r\nLet me know if you need help with sorting it out.", "Thanks @stas00 , I fixed the import check. ", "Is there a simple way to support dynamic shape on fx2trt on torch Dynamo? If not yet, may be you want to specify it in the doc? 
If yes you may want to say how we provide the Tensorrt \"profiles\" ?\n\nIn rapid experiments I did on HF + dynamo + fx2trt, even by increasing the dynamo cache, when I pushed plenty of different input sizes, at some point it raised plenty of OOM exceptions and stopped working. May be trt profiles would have worked.", "> Is there a simple way to support dynamic shape on fx2trt on torch Dynamo? If not yet, may be you want to specify it in the doc? If yes you may want to say how we provide the Tensorrt \"profiles\" ?\r\n> \r\n> In rapid experiments I did on HF + dynamo + fx2trt, even by increasing the dynamo cache, when I pushed plenty of different input sizes, at some point it raised plenty of OOM exceptions and stopped working. May be trt profiles would have worked.\r\n\r\nIt is not supported yet for dynamic shape in my implementation. I plan to support dynamic batch size in the next step(probably set it to default). ", "One of the tests is failing for me:\r\n\r\n```\r\n$ CUDA_VISIBLE_DEVICES=0 RUN_SLOW=1 pyt tests/trainer/test_trainer.py -k test_torchdynamo_memory -sv\r\n # AOT Autograd recomputaion and nvfuser recomputation optimization\r\n # aggressively fuses the operations and reduce the memory footprint.\r\n> self.assertGreater(orig_peak_mem, peak_mem * 2)\r\nE AssertionError: 100664832 not greater than 201330688\r\n```\r\n\r\nlet me know what details you need - this is on A100.\r\n\r\noh, it actually crashed before that:\r\n\r\n```\r\n========== TorchDynamo Stack Trace ==========\r\nTraceback (most recent call last):\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/convert_frame.py\", line 295, in _convert_frame_assert\r\n code = transform_code_object(frame.f_code, transform)\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/bytecode_transformation.py\", line 338, in transform_code_object\r\n transformations(instructions, code_options)\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/convert_frame.py\", line 261, in transform\r\n tracer = InstructionTranslator(\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/symbolic_convert.py\", line 1220, in __init__\r\n self.symbolic_locals = collections.OrderedDict(\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/symbolic_convert.py\", line 1221, in <genexpr>\r\n (k, VariableBuilder(self, LocalSource(k))(f_locals[k]))\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/variables/builder.py\", line 104, in __call__\r\n return self._wrap(value).clone(**self.options())\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/variables/builder.py\", line 130, in _wrap\r\n return self.wrap_tensor(value)\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/variables/builder.py\", line 327, in wrap_tensor\r\n tensor_variable = TensorVariable.create(\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/variables/tensor.py\", line 121, in create\r\n cls.wrap_to_fake_tensor, fake_mode=tx.fake_mode\r\n File \"/mnt/nvme0/code/github/00pytorch/torchdynamo/torchdynamo/symbolic_convert.py\", line 1136, in fake_mode\r\n return self._fake_mode\r\nAttributeError: 'InstructionTranslator' object has no attribute '_fake_mode'\r\n```\r\n\r\nThis is not great, shouldn't the test have failed here and not in a misleading later place of comparison?", "The failure may due to the torchdynamo outdated? Could you install the newest torchdynamo? 
Here are the command to install it:\r\n```\r\ngit clone https://github.com/pytorch/functorch\r\ncd functorch\r\nrm -rf build\r\npip install -e .[aot]\r\n\r\ncd ..\r\ngit clone https://github.com/pytorch/torchdynamo\r\ncd torchdynamo\r\npip install -r requirements.txt\r\npython setup.py develop\r\n```\r\n\r\nIt looks good from my testing:\r\n```\r\n(mypy38-fx-only) [wwei6@devgpu005.ftw6 /data/users/wwei6/Work/transformers] CUDA_VISIBLE_DEVICES=6 pytest tests/trainer/test_trainer.py -k test_torchdynamo_memory -sv\r\n===================================================================================== test session starts =====================================================================================\r\nplatform linux -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0 -- /data/users/wwei6/miniconda3/envs/mypy38-fx-only/bin/python\r\ncachedir: .pytest_cache\r\nbenchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)\r\nhypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/data/users/wwei6/Work/transformers/.hypothesis/examples')\r\nrootdir: /data/users/wwei6/Work/transformers, configfile: setup.cfg\r\nplugins: benchmark-3.4.1, hydra-core-1.1.2, hypothesis-6.49.1\r\ncollected 70 items / 69 deselected / 1 selected \r\n\r\ntests/trainer/test_trainer.py::TrainerIntegrationTest::test_torchdynamo_memory PyTorch: setting up devices\r\nThe default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).\r\nPASSED\r\n\r\n====================================================================================== warnings summary =======================================================================================\r\n../../miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:4\r\n /data/users/wwei6/miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:4: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.\r\n if not hasattr(tensorboard, \"__version__\") or LooseVersion(\r\n\r\n../../miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:6\r\n /data/users/wwei6/miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:6: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.\r\n ) < LooseVersion(\"1.15\"):\r\n\r\ntests/trainer/test_trainer.py::TrainerIntegrationTest::test_torchdynamo_memory\r\n /data/users/wwei6/miniconda3/envs/mypy38-fx-only/lib/python3.8/site-packages/torch/nn/utils/_stateless.py:5: DeprecationWarning: The `torch.nn.utils._stateless` code is deprecated now that it is publicly available. 
Please use `torch.nn.utils.stateless instead.\r\n warnings.warn(\"The `torch.nn.utils._stateless` code is deprecated now that \"\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n======================================================================== 1 passed, 69 deselected, 3 warnings in 7.51s =========================================================================\r\n```", "I suspected that was the case, but what I was trying to say is that the test should have failed on the torchdynamo error and not the mismatch in values, i.e. something is trapping the real error and the user could be not the wiser that their torchdynamo is broken - e.g. when there are a lot of logs.\r\n\r\nIt needs to assert on the actual error. Does it make sense?", "> I suspected that was the case, but what I was trying to say is that the test should have failed on the torchdynamo error and not the mismatch in values, i.e. something is trapping the real error and the user could be not the wiser that their torchdynamo is broken - e.g. when there are a lot of logs.\r\n> \r\n> It needs to assert on the actual error. Does it make sense?\r\n\r\nhm.. that is something out of my expertise as it relates with torchdynamo. If it is torch_tensorrt related, I'd love to help.\r\n\r\nFor the CI test error, it seems that test is flaky? I did not find useful any information. Could you help guide/triage that? Thanks.\r\n\r\n\r\n\r\n", "> Hi @frank-wei, I had to rebuild the whole environment against pt-nightly and now everything works.\r\n> \r\n> I think it'd be good to save the instructions in the OP somewhere so that it's easier for the user and us to be able to rebuild the environment.\r\n> \r\n> Would you like to maintain a section or a file on your side that contains the instructions in the OP and we could point to it?\r\n> \r\n> Other than that, I will just ask Sylvain to have a quick review and we can merge this.\r\n> \r\n> Thank you for your patience.\r\n\r\n\r\n\r\n> Hi @frank-wei, I had to rebuild the whole environment against pt-nightly and now everything works.\r\n> \r\n> I think it'd be good to save the instructions in the OP somewhere so that it's easier for the user and us to be able to rebuild the environment.\r\n> \r\n> Would you like to maintain a section or a file on your side that contains the instructions in the OP and we could point to it?\r\n> \r\n> Other than that, I will just ask Sylvain to have a quick review and we can merge this.\r\n> \r\n> Thank you for your patience.\r\n\r\nThanks @stas00 , do you think I can add a 3 pointers for installations of torchdynamo, functorch, torch_tensorrt in docs/source/en/perf_train_gpu_one.mdx ?\r\nTorchdynamo: https://github.com/pytorch/torchdynamo#requirements-and-setup\r\nFunctorch:https://github.com/pytorch/functorch#install\r\nTorch-TensorRT(FX):https://github.com/pytorch/TensorRT/blob/master/docsrc/tutorials/getting_started_with_fx_path.rst#installation", "I think that works, @frank-wei ", "> I think that works, @frank-wei\r\n\r\nCool. Update finished.", "@stas00 @sgugger please check the change. The failed test seems flaky and not related.", "@stas00 Are you good with this last iteration (as long as all tests pass?)", "Let me run the tests.", "All tests pass. Good to merge once the CI is green.\r\n\r\nI created a new task https://github.com/huggingface/transformers/issues/18127 to handle the CI requirements." ]
1,655
1,657
1,657
CONTRIBUTOR
null
# What does this PR do?

Adding support for TorchDynamo with torch_tensorrt (fx2trt module). Detailed context available at #17724. This diff is about adding an extra inference backend based on #17308.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## To reproduce and set up the environment

```
# install torch-nightly
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch-nightly

# install functorch (and reinstall after `git pull` later if need to sync up)
git clone https://github.com/pytorch/functorch
cd functorch
rm -rf build
pip install -e .[aot]

cd ..
git clone https://github.com/pytorch/torchdynamo
cd torchdynamo
pip install -r requirements.txt
python setup.py develop

# install TensorRT
pip install nvidia-pyindex
pip install nvidia-tensorrt==8.2.4.2

# install torch_tensorrt (fx path)
cd ..
git clone https://github.com/pytorch/TensorRT.git
cd TensorRT/py
python setup.py install --fx-only
```

cc HF @stas00
cc Meta @yinghai @Chillee
cc NV @ncomly-nvidia @narendasan
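Once the environment is set up, usage goes through the existing `torchdynamo` training argument. A rough sketch (assuming the option introduced in #17308, which this PR extends with TensorRT-backed values; the exact value strings follow the PR's test names and may differ in the merged version):

```python
from transformers import TrainingArguments

# "fx2trt" selects the torch_tensorrt FX backend added by this PR;
# "eager" and "nvfuser" are the pre-existing TorchDynamo backends.
args = TrainingArguments(
    output_dir="out",          # placeholder output directory
    torchdynamo="fx2trt",      # or "eager" / "nvfuser" / "fx2trt-fp16"
)
```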
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17765/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17765/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17765", "html_url": "https://github.com/huggingface/transformers/pull/17765", "diff_url": "https://github.com/huggingface/transformers/pull/17765.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17765.patch", "merged_at": 1657730609000 }
https://api.github.com/repos/huggingface/transformers/issues/17764
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17764/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17764/comments
https://api.github.com/repos/huggingface/transformers/issues/17764/events
https://github.com/huggingface/transformers/pull/17764
1,275,332,820
PR_kwDOCUB6oc453otQ
17,764
Fix cache for GPT-Neo-X
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot! If it's not too much work could you maybe try running this test: https://github.com/huggingface/transformers/blob/522a9ece4baeb5abfec8953ef76469a530e987d5/tests/models/gpt_neox/test_modeling_gpt_neox.py#L144 \r\n\r\nI think right now it's not run (seems like the test function was removed)", "_The documentation is not available anymore as the PR was closed or merged._", "I can also do it also otherwise :-)", "Added tests that were removed and had a corresponding `check` function. Thanks for flagging this!" ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do?

As pointed out on #17745, there is a problem with the logic in the cache for GPT-Neo-X. Can confirm I can use it for generation after this PR, but not before.

Fixes #17745
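For reference, a quick way to exercise the fixed path (a sketch, not the tests added in this PR; any GPT-NeoX checkpoint works, and `use_cache=True` hits the repaired `past_key_values` logic):

```python
import torch
from transformers import AutoTokenizer, GPTNeoXForCausalLM

# Note: EleutherAI/gpt-neox-20b is very large; this is illustrative only.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    # Before this PR, cached generation produced broken continuations.
    out = model.generate(**inputs, max_new_tokens=20, use_cache=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```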
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17764/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17764/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17764", "html_url": "https://github.com/huggingface/transformers/pull/17764", "diff_url": "https://github.com/huggingface/transformers/pull/17764.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17764.patch", "merged_at": 1655729016000 }
https://api.github.com/repos/huggingface/transformers/issues/17763
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17763/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17763/comments
https://api.github.com/repos/huggingface/transformers/issues/17763/events
https://github.com/huggingface/transformers/pull/17763
1,275,297,877
PR_kwDOCUB6oc453hIi
17,763
Add type hints Yoso Pytorch
{ "login": "F02934", "id": 56677617, "node_id": "MDQ6VXNlcjU2Njc3NjE3", "avatar_url": "https://avatars.githubusercontent.com/u/56677617?v=4", "gravatar_id": "", "url": "https://api.github.com/users/F02934", "html_url": "https://github.com/F02934", "followers_url": "https://api.github.com/users/F02934/followers", "following_url": "https://api.github.com/users/F02934/following{/other_user}", "gists_url": "https://api.github.com/users/F02934/gists{/gist_id}", "starred_url": "https://api.github.com/users/F02934/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/F02934/subscriptions", "organizations_url": "https://api.github.com/users/F02934/orgs", "repos_url": "https://api.github.com/users/F02934/repos", "events_url": "https://api.github.com/users/F02934/events{/privacy}", "received_events_url": "https://api.github.com/users/F02934/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hello @Rocketknight1 I finished with yoso pytorch. I don't know why this pull request contain commits from previous pull request ", "I updated the file after running make fixup and now it passed all checks", "@F02934 Thanks for this! Don't panic about the other files being changed - if you want, you should be able to fix that by pulling the latest version of main, then rebasing your PR branch onto main and finally force-pushing. I don't think it should cause any problems if you don't, though, except for cosmetic ones in the Github interface. \r\n\r\nAlso, your type hints look good, but would you be willing to annotate the other model classes in the file too (the ones starting with `YosoFor...`)?", "@Rocketknight1 thank you! I will just leave as it is because I'm afraid to mess up. \nI will finish yoso tomorrow!", "Hi @Rocketknight1 I checked the `YosoFor` but they were already done. On [this Colab notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing) you shared I checked the missing type hints for Yoso pytorch and only \"YosoForQuestionAnswering\" and \"YosoForTokenClassification\" were missing which I added in this PR. So I think everything done. But correct me if I'm wrong!", "Sorry, you're completely right! The type hints for Yoso are ready to go.\r\n\r\nI investigated the extra Italian documentation added by this PR, though - I think the problem there is that your PR branch was created as a branch of an existing PR branch, which was probably working on translation fixups. As a result, it sort of carries changes from both branches!\r\n\r\nThe simplest way to fix this would be to close this PR, make a new branch starting from `main` this time, and then just copy the changes in `modeling_yoso.py` to that branch, and finally open a PR from that new branch. Is that okay? I'll try to review it quickly if you do, since I've already checked your type hints, lol", "@Rocketknight1 alright. I will do it right now!" ]
1,655
1,655
1,655
CONTRIBUTOR
null
# What does this PR do?

Add missing Type Hints for Yoso pytorch #16059

<!-- Congratulations! You've made it this far! You're not quite done yet though.

Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.

Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.

Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. -->

<!-- Remove if not applicable -->

Fixes # (issue)

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17763/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17763", "html_url": "https://github.com/huggingface/transformers/pull/17763", "diff_url": "https://github.com/huggingface/transformers/pull/17763.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17763.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17762
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17762/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17762/comments
https://api.github.com/repos/huggingface/transformers/issues/17762/events
https://github.com/huggingface/transformers/issues/17762
1,275,276,487
I_kwDOCUB6oc5MAyzH
17,762
feat: pipeline registry for supporting custom pipelines
{ "login": "aarnphm", "id": 29749331, "node_id": "MDQ6VXNlcjI5NzQ5MzMx", "avatar_url": "https://avatars.githubusercontent.com/u/29749331?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aarnphm", "html_url": "https://github.com/aarnphm", "followers_url": "https://api.github.com/users/aarnphm/followers", "following_url": "https://api.github.com/users/aarnphm/following{/other_user}", "gists_url": "https://api.github.com/users/aarnphm/gists{/gist_id}", "starred_url": "https://api.github.com/users/aarnphm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aarnphm/subscriptions", "organizations_url": "https://api.github.com/users/aarnphm/orgs", "repos_url": "https://api.github.com/users/aarnphm/repos", "events_url": "https://api.github.com/users/aarnphm/events{/privacy}", "received_events_url": "https://api.github.com/users/aarnphm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,655
1,656
1,656
CONTRIBUTOR
null
### Feature request

I propose a simple registry abstraction to allow users to dynamically register custom pipelines to transformers.

```python
audio_classification_tmpl = {
    "impl": AudioClassificationPipeline,
    "tf": (),
    "pt": (AutoModelForAudioClassification,) if is_torch_available() else (),
    "default": {"model": {"pt": "superb/wav2vec2-base-superb-ks"}},
    "type": "audio",
}
PipelineRegistry.register_pipeline("audio-classification", audio_classification_tmpl)
```

A pseudo example for the `PipelineRegistry` implementation:

```python
class PipelineRegistry:
    SUPPORTED_TASKS: dict[str, dict[str, PipelineBase | dict[str, Any]]]

    @classmethod
    def register_pipeline(cls, task: str, task_metadata: dict[str, Any]):
        cls.SUPPORTED_TASKS[task] = task_metadata
```

For any custom pipeline, users can simply do:

```python
from transformers.pipelines import PipelineRegistry

my_custom_task_tmpl = {
    "impl": CustomPipeline,
    "tf": (),
    "pt": (AutoModelForAudioClassification,) if is_torch_available() else (),
    "default": {"model": {"pt": "my_custom_wav2vec"}},
    "type": "custom",
}
PipelineRegistry.register_pipeline("custom-task", my_custom_task_tmpl)
```

### Motivation

Currently, the pipelines abstraction provides users with a quick and easy way to run any given task. However, it is very difficult to create and add support for custom pipelines. According to the [docs](https://huggingface.co/docs/transformers/add_new_pipeline#adding-it-to-the-list-of-supported-tasks), if users want to add a new pipeline, they would have to come in and modify the `transformers` source code. This is often less than ideal. It would be nice for pipelines to have a "registry" abstraction where transformers allows users to register their custom pipeline without the hassle of editing the source code.

### Your contribution

#17905
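Once registered, the custom task would resolve through `pipeline()` exactly like the built-in tasks. A hypothetical usage sketch (the proposal above is not merged; `CustomPipeline`, "custom-task", and "my_custom_wav2vec" are illustrative names):

```python
from transformers import pipeline

# Resolves "custom-task" via the registry; falls back to the registered
# default model ("my_custom_wav2vec" in the template above) if none is given.
pipe = pipeline("custom-task")
# pipe = pipeline("custom-task", model="my_custom_wav2vec")  # or explicit

result = pipe("sample.wav")  # placeholder input for an audio-type pipeline
print(result)
```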
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17762/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17761
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17761/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17761/comments
https://api.github.com/repos/huggingface/transformers/issues/17761/events
https://github.com/huggingface/transformers/pull/17761
1,275,227,513
PR_kwDOCUB6oc453SIA
17,761
[WIP] Flax BLOOM implementation + demo
{ "login": "haileyschoelkopf", "id": 65563625, "node_id": "MDQ6VXNlcjY1NTYzNjI1", "avatar_url": "https://avatars.githubusercontent.com/u/65563625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haileyschoelkopf", "html_url": "https://github.com/haileyschoelkopf", "followers_url": "https://api.github.com/users/haileyschoelkopf/followers", "following_url": "https://api.github.com/users/haileyschoelkopf/following{/other_user}", "gists_url": "https://api.github.com/users/haileyschoelkopf/gists{/gist_id}", "starred_url": "https://api.github.com/users/haileyschoelkopf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haileyschoelkopf/subscriptions", "organizations_url": "https://api.github.com/users/haileyschoelkopf/orgs", "repos_url": "https://api.github.com/users/haileyschoelkopf/repos", "events_url": "https://api.github.com/users/haileyschoelkopf/events{/privacy}", "received_events_url": "https://api.github.com/users/haileyschoelkopf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "A note on the initial status of this PR:\r\n\r\n- This first commit contains much of the code and structure of the `modeling_flax_bloom.py` file, copied from the gpt-neo Flax implementation and edited in many places already to better match the PyTorch Bloom implementation.\r\n- There are many TODOs I've left in this file that I still need to get to. The code is still not in a runnable/finished state, \r\n\r\nNext steps:\r\n- Finish implementing all methods, in particular the FlaxBloomAttention `__call__` method, until code runs (see other TODOs in file for other things that need tweaking/fixing)\r\n- Determine how to deal with alibi tensors and how to deal with Bloom not having any hardcoded max length \r\n- Once code is working, start testing whether the implementation is the same as PyTorch\r\n- Make sure tensor parallelism is working correctly / accounted for properly (see issue #17653 , this still seems to be an open issue on how best to deal with it, but bigscience/bloom-350m has TP=1 so it can be used for testing at first without worrying about TP)\r\n\r\n\r\nLater on:\r\n- Add unit tests once the code is working at least reasonably well!\r\n- Make sure all functions are stateless / code works fine with `jit` - I'm relatively new to Flax/Jax so I definitely need to confirm correctness of code on this end", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17761). All of your documentation changes will be reflected on that endpoint.", "Thanks for the helpful comments, @sanchit-gandhi ! I'll do another revision through the code fixing these and adding some more things as soon as I have time.\r\n\r\nI think that the main thing that would need to be discussed soon is how to handle [AliBi](https://arxiv.org/abs/2108.12409) for position information, since it means that there is no specific max length for BLOOM inputs. I'm not too sure yet how to account for this given things like line 176, where the causal mask is made at the max length of the model and then sliced to get the mask for shorter sequences. (One idea I had was selecting a reasonable \"starting max length\", then if the model gets a longer input sequence the causal mask is extended either permanently or just for that forward pass).", "Would there be any issues in implementing it in the first of the two ways proposed (set to `max_length`, slicing as required)?\r\n\r\nThe problem I envision with the latter of the two approaches is that once the function is jit'd, providing a new input length to the model would result in XLA having to recompile. Each time XLA sees a new input shape, it needs to trace out (compile) the function again. So if we provide a new input shape for each forward pass, XLA will recompile every time (very slow)! The performance benefits of jit'ing a function come when we re-use a function that has already been jit'd, meaning we should try and use fixed input shapes where possible.", "Yeah, the recompilation is definitely something to try to avoid! \r\n\r\nBut the issue is that [the bigscience/bloom config](https://huggingface.co/bigscience/bloom/blob/main/config.json) doesn't have any seq_length attribute ([but bigscience/bloom-1b3 does--4096](https://huggingface.co/bigscience/bloom-1b3/blob/main/config.json)) and we want BLOOM to be able to handle sequences as long as a user wants since AliBi allows generalization to longer sequences. 
We could maybe just choose a reasonable default `max_length`, and then if the user passes a sequence that's too long, permanently double the size of the causal mask--this would allow for fewer recompilations, hopefully.\r\n\r\nBut I think we should keep the possibility open to using the model on very long sequences without problems--I don't know if any other models in Transformers use AliBi embeddings yet so that's a unique benefit of this model.", "Let's go with that to start - we can iterate and find an optimal solution as we progress. There's also the option of asking on one of the JAX/Flax Forums to see if the framework authors have any ideas if we're stuck!\r\n\r\nYou're right, this will be the first JAX/Flax model in Transformers to use AliBi embeddings! Will be very cool having a model with no theoretical `max_len`!", "Actually, I don't see a big problem with computing the `position_ids` for the embeddings on the fly if they depend only on the input length of `input_ids`\r\nIn general whenever the user passes a different input length of `input_ids` to the model will have to be recompiled it anyways so I don't see an issue with generating the position_ids and the causal_mask from the `input_ids` either no? ", "Or am I misunderstanding something here?", "If just generating the causal mask at every forward pass is acceptable and wouldn't incur a speed penalty, then that should work fine!\r\n\r\nAnd yes, I don't think that we need to pass position_ids into the model, and we can just compute the alibi embedding within the forward pass (the pytorch implementation does this.)\r\n\r\nsorry for the delay on this--I'll work on it in the next 2 days.", "> If just generating the causal mask at every forward pass is acceptable and wouldn't incur a speed penalty, then that should work fine!\r\n> \r\n> And yes, I don't think that we need to pass position_ids into the model, and we can just compute the alibi embedding within the forward pass (the pytorch implementation does this.)\r\n> \r\n> sorry for the delay on this--I'll work on it in the next 2 days.\r\n\r\nGreat! Yeah, I just talked to @sanchit-gandhi offline - I think what we want to do here to only recompile when the model has to be recompiled anyways which translates into doing the folowing:\r\n\r\nAllow `position_ids` to be passed but default them to `None` . If `None` they will be computed on the fly depending on the shape of `input_ids` and the values of `attention_mask` (the same would hold true for the causal_mask). Let me know if this doesn't make sense @haileyschoelkopf or if you have any other questions, more than happy to help :-)\r\n", "Hey @haileyschoelkopf! This looks good with regards to the fused key-query-value matmuls in https://github.com/huggingface/transformers/pull/17761/commits/faddb8d446bfc8db0c1c77f29d9847a25f70b5a2! Just as a heads-up, for gradient checkpointing, you can follow the PR at https://github.com/huggingface/transformers/pull/17843. Feel free to reach out if there's anything you wish to discuss, very happy to help with any questions!", "Added gradient checkpointing, thanks for the pointer @sanchit-gandhi ! \r\n\r\nSorry that I haven't been able to push things forward on this PR faster, ended up being busier the past few weeks than expected... EDIT: saw the other PR. @younesbelkada , FYI, there is gradient checkpointing code on this PR now if you need it.", "Thank you @haileyschoelkopf for jumping on this so quickly and getting the structure for the model in place! 
This PR was completed in https://github.com/huggingface/transformers/pull/18022\r\n\r\nLet me know if there's anything else you'd like to have a go at adding in JAX/Flax! Or if you'd like to have a go at porting another model to JAX/Flax I can make some suggestions!", "Thanks so much for all the helpful comments @sanchit-gandhi on this PR and apologies I wasn't able to iterate quicker on it!\r\n\r\nIf I have more time to add another JAX model I'll ping you for sure :) ", "Very sorry that we rushed this PR so much @haileyschoelkopf! Very much looking forward to other PRs if you'd like :-)", "Of course, will ping you if so :)" ]
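A minimal sketch of the idea discussed in the thread above: compute the ALiBi attention bias on the fly from the input shape, so nothing is tied to a hard-coded max length. This is illustrative only, not the PR's code; the function name is made up, and the slope formula assumes the number of heads is a power of two.

```python
import jax.numpy as jnp

def alibi_bias(num_heads: int, seq_len: int) -> jnp.ndarray:
    # Per-head slopes from the ALiBi paper: 2^(-8/n), 2^(-16/n), ...
    slopes = jnp.array([2.0 ** (-8.0 * (i + 1) / num_heads) for i in range(num_heads)])
    # Non-positive query-key distances; future positions are clamped to 0
    # (the separate causal mask removes them from attention anyway).
    distance = jnp.arange(seq_len)[None, :] - jnp.arange(seq_len)[:, None]
    distance = jnp.minimum(distance, 0)
    # (num_heads, seq_len, seq_len) bias added to the attention scores.
    return slopes[:, None, None] * distance[None, :, :]
```

Because the bias is a pure function of the (static) sequence length, wrapping the model in `jax.jit` recompiles exactly when the input length changes -- which, as noted in the thread, forces a recompile of the whole model anyway.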
1,655
1,661
1,659
CONTRIBUTOR
null
# What does this PR do? This PR will add a Flax implementation of BLOOM, and also I'd be happy to help contribute a tutorial / showcase of how to fine-tune BLOOM as well, as discussed in #17703 :) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. --> linked above - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). --> documentation in progress - [ ] Did you write any new necessary tests? --> will add once code is closer to completion ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten and @sanchit-gandhi @patil-suraj I believe were interested in collaborating. Happy to discuss how best to do this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17761/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17761", "html_url": "https://github.com/huggingface/transformers/pull/17761", "diff_url": "https://github.com/huggingface/transformers/pull/17761.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17761.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17760
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17760/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17760/comments
https://api.github.com/repos/huggingface/transformers/issues/17760/events
https://github.com/huggingface/transformers/pull/17760
1,275,157,419
PR_kwDOCUB6oc453Crx
17,760
Flax sharded
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The sharding tests are run for every model which should be avoided", "Will merge `flax` before `tf` as the TF one still needs a few modification (mostly cleaning the documentation) " ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do? Adds support for Flax sharded checkpoints
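A small usage sketch of what this PR adds, based on how the feature is exposed in later `transformers` releases (treat the exact argument name as an assumption if you are on an older version):

```python
from transformers import FlaxAutoModelForCausalLM

model = FlaxAutoModelForCausalLM.from_pretrained("gpt2")
# Write the weights as several ~500MB msgpack shards plus an index file,
# instead of one monolithic checkpoint.
model.save_pretrained("./gpt2-flax-sharded", max_shard_size="500MB")
# Loading transparently reassembles the shards.
model = FlaxAutoModelForCausalLM.from_pretrained("./gpt2-flax-sharded")
```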
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17760/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17760/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17760", "html_url": "https://github.com/huggingface/transformers/pull/17760", "diff_url": "https://github.com/huggingface/transformers/pull/17760.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17760.patch", "merged_at": 1655874276000 }
https://api.github.com/repos/huggingface/transformers/issues/17759
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17759/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17759/comments
https://api.github.com/repos/huggingface/transformers/issues/17759/events
https://github.com/huggingface/transformers/pull/17759
1,275,121,980
PR_kwDOCUB6oc4527E8
17,759
BLOOM enhance alibi creation
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Great, thanks! I think that you are right :) Will merge it as soon as the lights are all green 🟢 ", "It looks like a bad rebase happened; moved the PR to #17866" ]
1,655
1,656
1,656
CONTRIBUTOR
null
# What does this PR do? Thanks to @justheuristic's contribution, the alibi tensor is now created/communicated in a better way during the forward pass. The tests seem to pass, but this still stays an experimental feature. cc @justheuristic This will probably break with accelerate offloading because we now initialise the alibi tensor only once, at the beginning of the forward pass, with the device of the first hidden states. In the previous version we used to dynamically change alibi's `device`, which was fine for accelerate offloading
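A rough sketch of the idea described in the body (not the PR's actual diff): build the alibi tensor once per forward pass, on the device of the incoming activations, instead of moving it between devices layer by layer. The function name and shapes here are illustrative, and the slope formula assumes a power-of-two head count.

```python
import torch

def build_alibi(attention_mask: torch.Tensor, num_heads: int) -> torch.Tensor:
    # Per-head slopes from the ALiBi paper, created once on the right device.
    slopes = torch.tensor(
        [2.0 ** (-8.0 * (i + 1) / num_heads) for i in range(num_heads)],
        device=attention_mask.device,
    )
    # Position of each non-padding token within its sequence.
    positions = (attention_mask.cumsum(dim=-1) - 1).clamp(min=0) * attention_mask
    # (batch * num_heads, 1, seq_len) bias, ready to add to attention scores.
    alibi = slopes[None, :, None] * positions[:, None, :]
    return alibi.reshape(-1, 1, attention_mask.shape[-1])
```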
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17759/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17759/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17759", "html_url": "https://github.com/huggingface/transformers/pull/17759", "diff_url": "https://github.com/huggingface/transformers/pull/17759.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17759.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17758
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17758/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17758/comments
https://api.github.com/repos/huggingface/transformers/issues/17758/events
https://github.com/huggingface/transformers/pull/17758
1,275,118,249
PR_kwDOCUB6oc4526TP
17,758
BLOOM enhance alibi creation
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17758). All of your documentation changes will be reflected on that endpoint." ]
1,655
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? Thanks to @justheuristic's contribution, the alibi tensor is now created/communicated in a better way during the forward pass. The tests seem to pass, but this still stays an experimental feature. cc @justheuristic This will probably break with accelerate offloading, but I'm not sure.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17758/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17758", "html_url": "https://github.com/huggingface/transformers/pull/17758", "diff_url": "https://github.com/huggingface/transformers/pull/17758.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17758.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17757
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17757/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17757/comments
https://api.github.com/repos/huggingface/transformers/issues/17757/events
https://github.com/huggingface/transformers/issues/17757
1,275,116,514
I_kwDOCUB6oc5MALvi
17,757
Problem during the training with the parameter train_dataset. (Dict/Tensor problem)
{ "login": "dgrnd4", "id": 69434832, "node_id": "MDQ6VXNlcjY5NDM0ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/69434832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dgrnd4", "html_url": "https://github.com/dgrnd4", "followers_url": "https://api.github.com/users/dgrnd4/followers", "following_url": "https://api.github.com/users/dgrnd4/following{/other_user}", "gists_url": "https://api.github.com/users/dgrnd4/gists{/gist_id}", "starred_url": "https://api.github.com/users/dgrnd4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dgrnd4/subscriptions", "organizations_url": "https://api.github.com/users/dgrnd4/orgs", "repos_url": "https://api.github.com/users/dgrnd4/repos", "events_url": "https://api.github.com/users/dgrnd4/events{/privacy}", "received_events_url": "https://api.github.com/users/dgrnd4/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@NielsRogge have you seen this error? This comes from the ViT fine-tuning tutorial.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,659
1,659
NONE
null
Hi there, I'm following the [tutorial](https://huggingface.co/blog/fine-tune-vit), trying to fine-tune the net on the Stanford Dogs dataset. I am facing this problem: once I try to run `trainer.train()`, this error appears: ```/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning FutureWarning, ---Running training Num examples = 4160 Num Epochs = 4 Instantaneous batch size per device = 16 Total train batch size (w. parallel, distributed & accumulation) = 16 Gradient Accumulation steps = 1 Total optimization steps = 1040 ValueError Traceback (most recent call last) [<ipython-input-32-0f10542a3dd8>](https://86s6jsm55e-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220615-060045-RC00_455067423#) in <module>() 1 # RESULTS 2 ----> 3 train_results = trainer.train() 4 trainer.save_model() 5 trainer.log_metrics("train", train_results.metrics) 13 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/vit/feature_extraction_vit.py](https://86s6jsm55e-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220615-060045-RC00_455067423#) in __call__(self, images, return_tensors, **kwargs) 125 if not valid_images: 126 raise ValueError( --> 127 "Images must of type `PIL.Image.Image`, `np.ndarray` or `torch.Tensor` (single example), " 128 "`List[PIL.Image.Image]`, `List[np.ndarray]` or `List[torch.Tensor]` (batch of examples)." 129 ) ``` The error is the following: **```ValueError: Images must of type `PIL.Image.Image`, `np.ndarray` or `torch.Tensor` (single example), `List[PIL.Image.Image]`, `List[np.ndarray]` or `List[torch.Tensor]` (batch of examples).```** I think the problem is in the definition of dataset['train'], which isn't an iterable or something like that: do you have any recommendations? I tried literally every kind of type change but still cannot train it!!! The data I have are np.arrays with dim (256,256,3), which I'm processing with: ``` def transform(arr_x=x_te, arr_y=y_te): inputs = extractor([x for x in arr_x], return_tensors='pt') inputs['labels'] = [y for y in arr_y] return inputs # <class 'transformers.feature_extraction_utils.BatchFeature'> ``` After that I create: ``` dd = datasets.DatasetDict({"train": Dataset.from_dict({'pixel_values': arr_transf_te['pixel_values'], 'labels':arr_transf_te['labels'] })}) ``` And then I do the transform: ``` dataset = dd.with_transform(transform) ``` Once I create: ``` trainer = Trainer( model=model, args=training_args, data_collator=collate_fn, compute_metrics=compute_metrics, train_dataset = dataset['train'] , # type datasets.arrow_dataset.Dataset tokenizer = extractor, ) ``` and run: ``` train_results = trainer.train() ``` I get the error shown above. This is my [project](https://colab.research.google.com/drive/1CueCyVjyh6sRJF2gcs0WdF0u33rtzmLI?usp=sharing): if you can take a look, that would be AMAZING!
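For anyone hitting the same `ValueError`: the most likely cause is that the feature extractor is being applied twice -- once when building `dd` (so `pixel_values` are already tensors-turned-lists) and again inside `with_transform`, which then receives inputs that are neither PIL images nor arrays. A hedged sketch of the usual fix (untested against the linked notebook; `x_te`, `y_te`, and `extractor` are taken from the snippet above, while the `"image"` column name and the `uint8` dtype are assumptions):

```python
import numpy as np
from datasets import Dataset, DatasetDict

# Keep the *raw* images in the dataset...
dd = DatasetDict({"train": Dataset.from_dict({"image": [x for x in x_te],
                                              "labels": [y for y in y_te]})})

# ...and run the feature extractor lazily, per batch, inside the transform.
# `with_transform` passes a dict of columns for the requested batch.
def transform(batch):
    images = [np.asarray(img, dtype=np.uint8) for img in batch["image"]]
    inputs = extractor(images, return_tensors="pt")
    inputs["labels"] = batch["labels"]
    return inputs

dataset = dd.with_transform(transform)
```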
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17757/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17756
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17756/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17756/comments
https://api.github.com/repos/huggingface/transformers/issues/17756/events
https://github.com/huggingface/transformers/issues/17756
1,275,043,572
I_kwDOCUB6oc5L_570
17,756
GPT-NeoX missing Tokenizer
{ "login": "mrseeker", "id": 1099127, "node_id": "MDQ6VXNlcjEwOTkxMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1099127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrseeker", "html_url": "https://github.com/mrseeker", "followers_url": "https://api.github.com/users/mrseeker/followers", "following_url": "https://api.github.com/users/mrseeker/following{/other_user}", "gists_url": "https://api.github.com/users/mrseeker/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrseeker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrseeker/subscriptions", "organizations_url": "https://api.github.com/users/mrseeker/orgs", "repos_url": "https://api.github.com/users/mrseeker/repos", "events_url": "https://api.github.com/users/mrseeker/events{/privacy}", "received_events_url": "https://api.github.com/users/mrseeker/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Error I am receiving:\r\n```\r\nImportError: cannot import name 'GPTNeoXTokenizer' from 'transformers' (/opt/conda/lib/python3.8/site-packages/transformers/init.py)\r\n```", "Just tried your code sample and it works fine on my side. Are you sure you have `tokenizers` installed in your env? GPT-Neo-X does not have a slow tokenizer, so it requires this library.", "> Just tried your code sample and it works fine on my side. Are you sure you have `tokenizers` installed in your env? GPT-Neo-X does not have a slow tokenizer, so it requires this library.\r\n\r\nJust checked:\r\ntokenizers 0.12.1", "Only the fast tokenizer is available for GPT-NeoX-20B.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello @mrseeker, did you resolve your issue? I have the same problem, trying to get a working version of transformers...", "I have the exact same issue.\r\n\r\n```\r\n│ /usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:699 in │\r\n│ from_pretrained │\r\n│ │\r\n│ 696 │ │ │ │ tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) │\r\n│ 697 │ │ │ │\r\n│ 698 │ │ │ if tokenizer_class is None: │\r\n│ ❱ 699 │ │ │ │ raise ValueError( │\r\n│ 700 │ │ │ │ │ f\"Tokenizer class {tokenizer_class_candidate} does not exist or is n │\r\n│ 701 │ │ │ │ ) │\r\n│ 702 │ │ │ return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *input \r\n```", "For those having an issue with this, the resolution is as follows:\r\n\r\nUse AutoTokenizer**Fast**; AutoTokenizer is not supported by NeoX.", "If you use FastChat, modify fastchat/model/mode_adapter.py like this:\r\n\r\n\r\ndef load_model(self, model_path: str, from_pretrained_kwargs: dict):\r\n tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)\r\n model = AutoModelForCausalLM.from_pretrained(\r\n model_path, low_cpu_mem_usage=True, trust_remote_code=True, **from_pretrained_kwargs\r\n )\r\n return model, tokenizer\r\n\r\n\r\nThis will fix the issue." ]
1,655
1,685
1,658
NONE
null
### System Info ```shell - `transformers` version: 4.21.0.dev0 - Platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes, deepspeed ``` ### Who can help? @patil-suraj @SaulLu ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When I try to load the model using the following script, it hangs and then tells me that the tokenizer GPTNeoXTokenizer does not exist. ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` This does not happen when using the "Fast" version. ### Expected behavior ```shell Model should work, tokenizer should load. ```
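Two workarounds that sidestep the missing slow tokenizer (sketch; both rely on the fast tokenizer class, which is the only one that ships for this model):

```python
from transformers import AutoTokenizer, GPTNeoXTokenizerFast

# Either ask AutoTokenizer for the fast variant explicitly...
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b", use_fast=True)
# ...or instantiate the fast tokenizer class directly.
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
```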
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17756/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17756/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17755
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17755/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17755/comments
https://api.github.com/repos/huggingface/transformers/issues/17755/events
https://github.com/huggingface/transformers/pull/17755
1,275,015,187
PR_kwDOCUB6oc452kOA
17,755
CLI: use hub's `create_commit`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Some CI runs have hub 0.7.0 cached; figuring out how best to update it" ]
1,655
1,655
1,655
MEMBER
null
# What does this PR do? This PR changes the method used to open PRs in `pt-to-tf` to the permanent method defined by the hub -- `create_commit`. It also updates the commit description (it now supports line breaks 🎉 ) and adds a flag to append an extra description (so I can programmatically tag the right HF maintainer in certain repos). We can see an example PR [here](https://huggingface.co/joaogante/test_text/discussions/9) -- @Rocketknight1 confirms that the notification got to him! After this PR gets merged, we can announce `pt-to-tf`, as it no longer depends on unreleased functionality 🚀
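For reference, a sketch of the `huggingface_hub` API the CLI switched to (argument names per `huggingface_hub` >= 0.8; the repo id, file paths, and message text are placeholders):

```python
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()
api.create_commit(
    repo_id="joaogante/test_text",
    operations=[CommitOperationAdd(path_in_repo="tf_model.h5",
                                   path_or_fileobj="./tf_model.h5")],
    commit_message="Add TF weights",
    commit_description="Converted from the PT weights.\nLine breaks now work.",
    create_pr=True,  # open a PR instead of committing to main
)
```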
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17755/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17755", "html_url": "https://github.com/huggingface/transformers/pull/17755", "diff_url": "https://github.com/huggingface/transformers/pull/17755.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17755.patch", "merged_at": 1655913021000 }
https://api.github.com/repos/huggingface/transformers/issues/17754
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17754/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17754/comments
https://api.github.com/repos/huggingface/transformers/issues/17754/events
https://github.com/huggingface/transformers/issues/17754
1,274,996,694
I_kwDOCUB6oc5L_ufW
17,754
Text classification pipeline outputs differ with 4.20
{ "login": "davidmezzetti", "id": 561939, "node_id": "MDQ6VXNlcjU2MTkzOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/561939?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidmezzetti", "html_url": "https://github.com/davidmezzetti", "followers_url": "https://api.github.com/users/davidmezzetti/followers", "following_url": "https://api.github.com/users/davidmezzetti/following{/other_user}", "gists_url": "https://api.github.com/users/davidmezzetti/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidmezzetti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidmezzetti/subscriptions", "organizations_url": "https://api.github.com/users/davidmezzetti/orgs", "repos_url": "https://api.github.com/users/davidmezzetti/repos", "events_url": "https://api.github.com/users/davidmezzetti/events{/privacy}", "received_events_url": "https://api.github.com/users/davidmezzetti/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[ { "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false } ]
[ "Just wanted to check in on this and see if it's considered an issue or the new way the pipeline works. ", "@davidmezzetti Didn't see this issue earlier, and I didn't catch the regression.\r\n\r\nShould have been fixed here https://github.com/huggingface/transformers/pull/17906. Sorry it had time to ship with `4.20`. It will be reverted in the next release.\r\n\r\nWe are really keen to not break anything format-wise while we are in `v4`. But for `v5`, harmonizing the return types of pipelines is definitely on the agenda I want to push (some return lists, some lists of lists; we're not super consistent across pipelines).\r\n\r\n", "Thank you for responding and the update!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,660
1,660
CONTRIBUTOR
null
### System Info ```shell transformers 4.20.0 ``` ### Who can help? @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction # 4.19.4 ```python from transformers import pipeline nlp = pipeline("text-classification") nlp("happy", return_all_scores=True) ``` Output: ``` [[{'label': 'NEGATIVE', 'score': 0.0001246821484528482}, {'label': 'POSITIVE', 'score': 0.9998753070831299}]] ``` # 4.20.0 ```python from transformers import pipeline nlp = pipeline("text-classification") nlp("happy", return_all_scores=True) ``` Output: ``` [{'label': 'NEGATIVE', 'score': 0.0001246821484528482}, {'label': 'POSITIVE', 'score': 0.9998753070831299}] ``` Running with top_k=None also produces a single list (only difference is labels are ordered by score desc). ```python from transformers import pipeline nlp = pipeline("text-classification") nlp("happy", top_k=None) ``` Output: ``` [{'label': 'POSITIVE', 'score': 0.9998753070831299}, {'label': 'NEGATIVE', 'score': 0.0001246821484528482}] ``` 4.19.4 returns a list of lists when return_all_scores=True. 4.20.0 only produces a single list. It looks like this logic changed the outputs in the `__call__` method. ```python if isinstance(args[0], str) and isinstance(result, dict): # This pipeline is odd, and return a list when single item is run return [result] else: return result ``` Previously, it was this: ```python if isinstance(args[0], str): # This pipeline is odd, and return a list when single item is run return [result] else: return result ``` ### Expected behavior ```shell When passing a single text element, the pipeline up to 4.20 would return a list. If this change was expected, I can work with it but figured it was worth bringing to your attention in case it wasn't intentional. ```
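Until the revert mentioned in the comments above ships, code that must run on both sides of the change can normalize the output shape itself -- a small sketch:

```python
from transformers import pipeline

nlp = pipeline("text-classification")
result = nlp("happy", return_all_scores=True)
# 4.20 returns a flat list of dicts for a single input; older versions
# return a list of lists. Normalize to the list-of-lists shape.
if result and isinstance(result[0], dict):
    result = [result]
```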
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17754/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17754/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17753
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17753/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17753/comments
https://api.github.com/repos/huggingface/transformers/issues/17753/events
https://github.com/huggingface/transformers/pull/17753
1,274,982,470
PR_kwDOCUB6oc452dK3
17,753
Attempt to change Push CI to workflow_run
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I will merge this tomorrow (so it's unlikely other PRs will be merged in the meantime), and revert it if anything breaks (hopefully not, I am running out of ideas 😄 )", "This works well; one good example to check is\r\n\r\n[TF: BART compatible with XLA generation](https://github.com/huggingface/transformers/commit/132402d752044301b37e54405832738b16f49df6)" ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do? This is a fix for #17692 : - Use the correct properties to get the information (branch/commit SHA, etc.) for both `push` and `workflow_run` event types - Even if we consider only the `workflow_run` event triggered by a push to the `main` branch, we still need to use `github.event.workflow_run.head_sha` to get the correct SHA (otherwise, in the case where 2 PRs are merged into `main` in a very short time period, the first one will get the latest commit SHA) - Currently, the push CI can still be triggered by a `push` event if the branch is **NOT** `main`. The main purpose is to test a particular branch. This is why I need to consider both event types. **NOTE**: I have verified the change extensively in my own (dummy) repo. However, the part regarding the actual CI tests + the part about the Slack report are not verified. **The part regarding preparing the necessary information for the Slack report is verified.** Since the `workflow_run` can only be launched when a PR is merged into `main`, I hope there is no other unexpected issue in this PR 🙏.
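The SHA-selection logic described above, restated as plain Python for illustration only (the real implementation lives in GitHub Actions expressions; the payload keys below follow GitHub's documented event schemas):

```python
def resolve_head_sha(event_name: str, event: dict) -> str:
    if event_name == "workflow_run":
        # The commit that triggered the run -- not the current tip of main,
        # which may already contain a later merge.
        return event["workflow_run"]["head_sha"]
    # Plain `push` events (used for testing non-main branches) carry the
    # pushed commit directly.
    return event["after"]
```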
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17753/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17753", "html_url": "https://github.com/huggingface/transformers/pull/17753", "diff_url": "https://github.com/huggingface/transformers/pull/17753.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17753.patch", "merged_at": 1655534104000 }
https://api.github.com/repos/huggingface/transformers/issues/17752
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17752/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17752/comments
https://api.github.com/repos/huggingface/transformers/issues/17752/events
https://github.com/huggingface/transformers/issues/17752
1,274,958,610
I_kwDOCUB6oc5L_lMS
17,752
`Trainer` has a weird way of determining whether a TPU device is present
{ "login": "andy971022", "id": 42510606, "node_id": "MDQ6VXNlcjQyNTEwNjA2", "avatar_url": "https://avatars.githubusercontent.com/u/42510606?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andy971022", "html_url": "https://github.com/andy971022", "followers_url": "https://api.github.com/users/andy971022/followers", "following_url": "https://api.github.com/users/andy971022/following{/other_user}", "gists_url": "https://api.github.com/users/andy971022/gists{/gist_id}", "starred_url": "https://api.github.com/users/andy971022/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andy971022/subscriptions", "organizations_url": "https://api.github.com/users/andy971022/orgs", "repos_url": "https://api.github.com/users/andy971022/repos", "events_url": "https://api.github.com/users/andy971022/events{/privacy}", "received_events_url": "https://api.github.com/users/andy971022/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "I'm not sure what the bug is: when you have `torch_xla` installed, the `Trainer` will use it (the terminology uses TPU internally but it works on GPU and CPU as well). Could you give us a reproducer of your error?", "@sgugger \r\nThank you for the response! Below is an example of how I reproduced the problem.\r\n\r\nThe only difference is the presence of the `torch_xla` package, installed via \r\n`!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl`\r\nNeither the models, the datasets, nor the versions of the installed packages matter.\r\n\r\n## Error\r\n```\r\n!pip list | grep torch\r\n\r\npytorch-lightning 1.5.7\r\ntorch 1.11.0\r\ntorch-xla 1.11\r\ntorchmetrics 0.6.2\r\ntorchvision 0.10.0+cu111\r\n```\r\n### Error message\r\n`RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:273 : Missing XLA configuration`\r\nIn certain scenarios, which I don't know how to reproduce right now, it also gives an error resembling `package 'xm' is not found`.\r\n\r\n## No error\r\nThe template code gives a known error as expected: `TypeError: forward() got an unexpected keyword argument 'labels'`. It's OK to ignore this because this is a piece of crude code.\r\n\r\n```\r\n!pip uninstall torch_xla -y\r\n!pip list | grep torch\r\nFound existing installation: torch-xla 1.11\r\nUninstalling torch-xla-1.11:\r\n  Successfully uninstalled torch-xla-1.11\r\npytorch-lightning 1.5.7\r\ntorch 1.11.0\r\ntorchmetrics 0.6.2\r\ntorchvision 0.10.0+cu111\r\n```\r\n\r\n\r\n## Training \r\n``` python\r\nfrom datasets import load_dataset\r\n \r\nfrom transformers import (\r\n AutoModel, AutoTokenizer, TrainingArguments, Trainer\r\n )\r\nimport gc\r\nimport torch\r\n\r\nmodel_name = \"distilroberta-base\"\r\nds = load_dataset('rotten_tomatoes', split='train')\r\n\r\ndefault_train_args = {\r\n \"learning_rate\": 6e-5,\r\n \"per_device_train_batch_size\": 64,\r\n \"per_device_eval_batch_size\": 128,\r\n \"num_train_epochs\": 7,\r\n \"weight_decay\": 1e-6,\r\n \"evaluation_strategy\": \"steps\",\r\n \"eval_steps\": 50,\r\n \"save_strategy\": \"epoch\",\r\n \"remove_unused_columns\": False,\r\n }\r\n\r\nmodel = AutoModel.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\ntrainer = Trainer(model=model, args=TrainingArguments(\"./\", **default_train_args), train_dataset=ds)\r\ntrainer.train()\r\n```", "I have reproduced this error in multiple environments; here is one of the examples, a Vertex AI Notebook.\r\n\r\n```\r\nEnvironment version\r\nM82\r\nMachine type \r\nn1-standard-16 (16 vCPUs, 60 GB RAM)\r\nGPU \r\nNVIDIA Tesla T4 x 1\r\n```", "Should be fixed via https://github.com/huggingface/transformers/pull/17802", "Thank you all so much!", "I get the same error while training a SWIN transformer. It's probably related to the PyTorch version. I had an older GCP VM with PyTorch 1.9 and it's working fine in it.\r\nBut with newer Vertex VMs (PyTorch 1.11) it's giving me the same error. \r\n\r\nUninstalling torch-xla does the trick. ", "> I get the same error while training a SWIN transformer. It's probably related to the PyTorch version. I had an older GCP VM with PyTorch 1.9 and it's working fine in it.\n> But with newer Vertex VMs (PyTorch 1.11) it's giving me the same error. \n> \n> Uninstalling torch-xla does the trick. \n\nYes, exactly the same scenario. 
The main reason is that Google's PyTorch 1.11 image now comes pre-installed with `torch_xla`, which, combined with `Trainer`'s unconventional way of checking for TPU devices, triggers the error.", "Can you try installing transformers from git to see if the problem still exists?\r\n\r\nE.g.:\r\n\r\n`pip install git+https://github.com/huggingface/transformers`", "@muellerzr \r\nHi, I ran the same code on the previously mentioned notebook instance with both the dev version of `transformers` and `torch_xla` installed, but it hung indefinitely instead of raising the expected error `TypeError: forward() got an unexpected keyword argument 'labels'`. The only way I can stop it is by restarting/shutting down the notebook.\r\n\r\n<img width=\"869\" alt=\"image\" src=\"https://user-images.githubusercontent.com/42510606/176066277-a902bb69-a2a2-466a-ba65-0eabb154c428.png\">\r\n" ]
1,655
1,656
1,656
NONE
null
### System Info ```shell transformer > 4.15.0 Vertex AI Notebook with Pytorch 1.11 using A100 ``` ### Who can help? I am very unlucky to have encountered this issue, where a TPU device is assumed to be present on the machine even though it isn't. It prompted this error `RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:274 : Missing XLA configuration` and hinted to me that something TPU-related was causing the error. After some debugging, I realized that https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/src/transformers/utils/import_utils.py#L395 simply checks if `torch_xla` is present, as opposed to actually checking whether a TPU device is present. I managed to get it to work by simply removing the `torch_xla` package. Yet, I also find it bizarre that there is no way to manually turn off TPU training. I hope the library can be made to actually check for the presence of a TPU. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Launch an A100 notebook on GCP Vertex AI and train any model using `Trainer`. ### Expected behavior ```shell No error. ```
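A sketch of the stricter check the issue asks for, and roughly the direction the fix referenced in the comments above (#17802) took: require evidence of an actual TPU runtime, not just an importable `torch_xla`. The function name is made up, and the environment-variable names are assumptions based on common torch_xla/GCP setups:

```python
import importlib.util
import os

def tpu_really_available() -> bool:
    if importlib.util.find_spec("torch_xla") is None:
        return False
    # GCP's PyTorch 1.11 images ship torch_xla even on GPU-only VMs, so
    # additionally require an XLA/TPU runtime to actually be configured.
    return bool(os.environ.get("XRT_TPU_CONFIG") or os.environ.get("TPU_NAME"))
```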
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17752/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17751
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17751/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17751/comments
https://api.github.com/repos/huggingface/transformers/issues/17751/events
https://github.com/huggingface/transformers/pull/17751
1,274,912,440
PR_kwDOCUB6oc452OCr
17,751
Use multiple workers for DataLoader at prediction step for Trainer
{ "login": "greg2451", "id": 51173502, "node_id": "MDQ6VXNlcjUxMTczNTAy", "avatar_url": "https://avatars.githubusercontent.com/u/51173502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/greg2451", "html_url": "https://github.com/greg2451", "followers_url": "https://api.github.com/users/greg2451/followers", "following_url": "https://api.github.com/users/greg2451/following{/other_user}", "gists_url": "https://api.github.com/users/greg2451/gists{/gist_id}", "starred_url": "https://api.github.com/users/greg2451/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/greg2451/subscriptions", "organizations_url": "https://api.github.com/users/greg2451/orgs", "repos_url": "https://api.github.com/users/greg2451/repos", "events_url": "https://api.github.com/users/greg2451/events{/privacy}", "received_events_url": "https://api.github.com/users/greg2451/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,665
1,655
CONTRIBUTOR
null
# What does this PR do? Fixes #17749 by adding a parameter to the DataLoader init call of the test dataloader, so that we can use multiple workers for data preparation at prediction time. @sgugger, I'm tagging you because this is a Trainer-related PR
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17751/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17751", "html_url": "https://github.com/huggingface/transformers/pull/17751", "diff_url": "https://github.com/huggingface/transformers/pull/17751.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17751.patch", "merged_at": 1655477626000 }
https://api.github.com/repos/huggingface/transformers/issues/17750
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17750/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17750/comments
https://api.github.com/repos/huggingface/transformers/issues/17750/events
https://github.com/huggingface/transformers/pull/17750
1,274,861,968
PR_kwDOCUB6oc452DEQ
17,750
Improve performance docs
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sounds good @sgugger, I added the \"Coming soon\" and restructured the ToC.", "Communications are difficult. \r\n\r\nWhat I proposed is to add normal documents - not \"Coming Soon\".\r\n\r\nInside, those documents should all explicitly point the user to the first document, which is already filled out, and say that we will expand the currently mostly empty doc with the missing material.\r\n\r\nIn other words, e.g., perf_infer_gpu_many.mdx should have:\r\n\r\n1. Read perf_train_gpu_one.mdx first.\r\n2. This document will be completed soon.\r\n\r\nIf the docs remain \"Coming Soon\" nobody will read them and will miss out on the already rich performance docs we have.\r\n\r\nAnd the impetus for this change was that we went from a complete solution - all performance notes in one doc - to a very incomplete solution, making it look like we only have advice for those with 1 GPU and doing training.\r\n\r\nThe original discussion back in winter was that all performance docs would be filled out, but it was dropped after the first document, with no signs of new docs coming any time soon. So this proposal was my attempt to rescue the situation.", "OK, I added references to each document and a short text explaining what will come there. Is that what you had in mind @stas00?" ]
1,655
1,655
1,655
MEMBER
null
# What does this PR do? As discussed with @stas00, this PR does the following things: - adds files for missing sections so contributors can more easily find the place to add content - adds a disclaimer at the beginning stating that a lot of general training information is in the single-GPU training section - fixes the link to CPU inference. Looking at the ToC, I was wondering whether we should add subsections, as is done for the tasks, to make the main ToC a bit slimmer. Otherwise we have the main performance docs (the entry point) there plus all the other sections (~8-10). What do you think? ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17750/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17750/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17750", "html_url": "https://github.com/huggingface/transformers/pull/17750", "diff_url": "https://github.com/huggingface/transformers/pull/17750.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17750.patch", "merged_at": 1655988714000 }
https://api.github.com/repos/huggingface/transformers/issues/17749
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17749/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17749/comments
https://api.github.com/repos/huggingface/transformers/issues/17749/events
https://github.com/huggingface/transformers/issues/17749
1,274,833,332
I_kwDOCUB6oc5L_Gm0
17,749
Test DataLoader never uses multiple workers
{ "login": "greg2451", "id": 51173502, "node_id": "MDQ6VXNlcjUxMTczNTAy", "avatar_url": "https://avatars.githubusercontent.com/u/51173502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/greg2451", "html_url": "https://github.com/greg2451", "followers_url": "https://api.github.com/users/greg2451/followers", "following_url": "https://api.github.com/users/greg2451/following{/other_user}", "gists_url": "https://api.github.com/users/greg2451/gists{/gist_id}", "starred_url": "https://api.github.com/users/greg2451/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/greg2451/subscriptions", "organizations_url": "https://api.github.com/users/greg2451/orgs", "repos_url": "https://api.github.com/users/greg2451/repos", "events_url": "https://api.github.com/users/greg2451/events{/privacy}", "received_events_url": "https://api.github.com/users/greg2451/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,655
1,655
1,655
CONTRIBUTOR
null
### Feature request / Bug Fix I realized that in the `get_test_dataloader` method of the `Trainer` class, for datasets that are not instances of `torch.utils.data.IterableDataset`, the `num_workers` argument is not given to the `DataLoader` init call. https://github.com/huggingface/transformers/blob/edb672ac5edcd92fadb15d3172a115eb5fe6f663/src/transformers/trainer.py#L936-L943 It is, however, given just above if `test_dataset` is an instance of `torch.utils.data.IterableDataset`: https://github.com/huggingface/transformers/blob/edb672ac5edcd92fadb15d3172a115eb5fe6f663/src/transformers/trainer.py#L925-L931 Moreover, the DataLoader returned by `get_eval_dataloader` is initialized with the num_workers argument https://github.com/huggingface/transformers/blob/edb672ac5edcd92fadb15d3172a115eb5fe6f663/src/transformers/trainer.py#L888-L896 I know this is not really a feature request, but I do not think it is a bug either; sorry if I have posted in the wrong place. I read that when talking about the Trainer class, @sgugger was generally the one to ping. ### Motivation Adding a line with `num_workers=self.args.dataloader_num_workers,` to the `DataLoader` init call of the `get_test_dataloader` method could speed up the prediction steps by using multiple workers to load the data. https://github.com/huggingface/transformers/blob/edb672ac5edcd92fadb15d3172a115eb5fe6f663/src/transformers/trainer.py#L936-L943 ### Your contribution I have submitted #17751
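For concreteness, a sketch of what the patched method looks like after #17751 (abridged, not verbatim from `trainer.py`):

```python
from torch.utils.data import DataLoader

def get_test_dataloader(self, test_dataset):
    test_sampler = self._get_eval_sampler(test_dataset)
    return DataLoader(
        test_dataset,
        sampler=test_sampler,
        batch_size=self.args.eval_batch_size,
        collate_fn=self.data_collator,
        drop_last=self.args.dataloader_drop_last,
        num_workers=self.args.dataloader_num_workers,  # the added argument
        pin_memory=self.args.dataloader_pin_memory,
    )
```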
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17749/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17748
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17748/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17748/comments
https://api.github.com/repos/huggingface/transformers/issues/17748/events
https://github.com/huggingface/transformers/issues/17748
1,274,780,978
I_kwDOCUB6oc5L-50y
17,748
Add easy extensibility of `logits_processor` to `generate`
{ "login": "eranhirs", "id": 3372820, "node_id": "MDQ6VXNlcjMzNzI4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/3372820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eranhirs", "html_url": "https://github.com/eranhirs", "followers_url": "https://api.github.com/users/eranhirs/followers", "following_url": "https://api.github.com/users/eranhirs/following{/other_user}", "gists_url": "https://api.github.com/users/eranhirs/gists{/gist_id}", "starred_url": "https://api.github.com/users/eranhirs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eranhirs/subscriptions", "organizations_url": "https://api.github.com/users/eranhirs/orgs", "repos_url": "https://api.github.com/users/eranhirs/repos", "events_url": "https://api.github.com/users/eranhirs/events{/privacy}", "received_events_url": "https://api.github.com/users/eranhirs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @eranhirs 👋 On the `generate` end, if I recall correctly (I can't find the discussion), we want to steer away from controlling generation from the `config` file -- cc @patrickvonplaten.\r\n\r\nMaybe we can pass generation kwargs to `Seq2SeqTrainer` -- WDYT @sgugger?", "Yes, we could add all generation kwargs to `predict` and `evaluate` in the `Seq2SeqTrainer`.", "@eranhirs would you like to open a PR? :D ", "Perfect, thanks! Yes I will open a PR 👍 ", "Could we also pass generate arguments to [Seq2SeqTrainerArguments](https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/training_args_seq2seq.py#L28) directly? This would be very cool so that we could use things like `logits_processor` in the validation step!\r\n\r\ncc @gante @sgugger ", "`Seq2SeqTrainingArguments` is only there to provide an easy way to have arguments in the CLI, and you can't pass instances of logits processors as CLI arguments, so I don't see what adding this here would add.", "> `Seq2SeqTrainingArguments` is only there to provide an easy way to have arguments in the CLI, and you can't pass instances of logits processors as CLI arguments, so I don't see what adding this here would add.\r\n\r\nWhat about adding the argument to `Seq2SeqTrainer` then? When `predict_with_generate=True`, we should be able to pass all the `generate` arguments we want, right?", "Or you could just pass them along when you call `evaluate` and `predict`. There are already 94 arguments in `Seq2SeqTrainingArguments`.", "What if I want to early stop with the metric being calculated with `logits_processor`, for example?", "You should then use Accelerate to be able to customize the training loop to your needs :-)" ]
1,655
1,658
1,656
CONTRIBUTOR
null
### Feature request It is easy to change the behavior of `generate` through the config. However, `logits_processor` is not read from the config; it is only received [as a parameter](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L874). Passing it is hard because neither [`Trainer`](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2798) nor [`Seq2SeqTrainer`](https://github.com/huggingface/transformers/blob/main/examples/legacy/seq2seq/seq2seq_trainer.py#L223) tunnels this parameter. My feature request is to either have it read from the config, or have it accepted as a parameter by the relevant `Trainer` and `Seq2SeqTrainer` methods, such as `predict`. ### Motivation Text generation is known to have many problems, as described well in this article: https://huggingface.co/blog/how-to-generate . As we advance towards input types that are not simply natural-language text, such as semi-structured texts (e.g., linearized graphs), the ability to research new beam search ideas for different use cases is paramount. I ran into this need in two of my latest research projects. The problem is that this is not always the main contribution, and if it becomes too complicated (e.g., abandoning the `Trainer` abstractions), researchers might not follow through. ### Your contribution I could submit a PR, but would like to discuss its design first.
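For context, a minimal sketch of what custom processors look like when calling `generate` directly — this is the part that already works; the request is to tunnel the same `logits_processor` argument through `Trainer.predict` and friends. The checkpoint and processor choices below are illustrative stand-ins:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import LogitsProcessorList, MinLengthLogitsProcessor

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("summarize: studies have shown that owning a dog is good for you", return_tensors="pt")

# Any custom LogitsProcessor can go in this list; MinLengthLogitsProcessor
# is just a stand-in for a research-specific beam-search tweak.
processors = LogitsProcessorList([MinLengthLogitsProcessor(10, model.config.eos_token_id)])

outputs = model.generate(**inputs, logits_processor=processors)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```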
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17748/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17747
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17747/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17747/comments
https://api.github.com/repos/huggingface/transformers/issues/17747/events
https://github.com/huggingface/transformers/issues/17747
1,274,647,141
I_kwDOCUB6oc5L-ZJl
17,747
Problem with GPU
{ "login": "marcomameli1992", "id": 58846715, "node_id": "MDQ6VXNlcjU4ODQ2NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/58846715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcomameli1992", "html_url": "https://github.com/marcomameli1992", "followers_url": "https://api.github.com/users/marcomameli1992/followers", "following_url": "https://api.github.com/users/marcomameli1992/following{/other_user}", "gists_url": "https://api.github.com/users/marcomameli1992/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcomameli1992/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcomameli1992/subscriptions", "organizations_url": "https://api.github.com/users/marcomameli1992/orgs", "repos_url": "https://api.github.com/users/marcomameli1992/repos", "events_url": "https://api.github.com/users/marcomameli1992/events{/privacy}", "received_events_url": "https://api.github.com/users/marcomameli1992/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "You could try to upgrade the **numpy** version with `pip install numpy --upgrade`", "I have updated the numpy package to the latest one (1.23.0) but It not work. I don't understand where is the problem.", "Are you using the correct version of pytorch that suits your machine? I have no other idea why it might not work...", "I using torch 1.11 for windows with gpu support", "I have no clue then", "Hi @marcomameli1992\r\n\r\nPlease don't put the issue description inside **\\`\\`\\`shell ... \\`\\`\\`** block, as this makes it hard to read 🙏 .\r\n\r\nRegarding the issue, it would be much easier if you can provide a **minimal** code snippet to reproduce the issue. Currently, we don't really know what goes wrong without detailed information.\r\n", "Dear I use the code here on [github](https://github.com/marcomameli1992/p2m) I make it public for simplicity.\r\n\r\nThe dataset that I use is from that [link](https://drive.google.com/file/d/1Z8gt4HdPujBNFABYrthhau9VZW10WWYe/view?usp=sharing)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @marcomameli1992 Sorry for this late reply.\r\n\r\nIn order for us to help (if you still need help), it would be very helpful to provide a **minimal** code snippet that reproduces the issue, which could be run directly.\r\n\r\nWith only a link to a GitHub repository page, we don't really know how to use it, and it also makes the debugging more difficult.\r\n", "With a very quick look, it looks like you create `pool = FeaturePooling(im)` which doesn't transform the image. Only when `pred_points = model_gcn(graph, pool)` which invokes\r\n\r\nhttps://github.com/marcomameli1992/p2m/blob/7e64071ce2a701044cab58f3c0e7877562157ab1/model/mesh_network.py#L31\r\n\r\nwill perform the data transformation. I believe this is not the usual good practice. You could try to perform the data transformation (feature extraction) - on CPU (with `numpy` or `torch`), then feed the extracted features into a model (after put them on GPU)." ]
1,655
1,659
1,659
NONE
null
### System Info Hello, I'm using windows 11, with RTX2080 and python 3.9. The packages configuration is based on pytorch 1.11 + transformer4.19.0 and when I try to use the transformer in my code with the GPU I receive an error that the data that I pass can't be converted to numpy array. I have posted the problem on the forum at that link: https://discuss.huggingface.co/t/vit-problem-with-gpu-usage-require-image-to-be-numpy/18678/2 I think that is a bug because the input of the network is on the gpu and I do not understand why it is necessary to convert to numpy array. How can I solve that bug? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I use the image from shapenet. The image are extracted from a saved tensor. ### Expected behavior ```shell I expect that when the gpu is used there are no problem on the conversion of the input to GPU. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17747/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17746
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17746/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17746/comments
https://api.github.com/repos/huggingface/transformers/issues/17746/events
https://github.com/huggingface/transformers/issues/17746
1,274,535,705
I_kwDOCUB6oc5L998Z
17,746
Is there any performance difference when fine-tuning BERT with the Hugging Face library versus Google's official code?
{ "login": "Doragd", "id": 26213546, "node_id": "MDQ6VXNlcjI2MjEzNTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/26213546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Doragd", "html_url": "https://github.com/Doragd", "followers_url": "https://api.github.com/users/Doragd/followers", "following_url": "https://api.github.com/users/Doragd/following{/other_user}", "gists_url": "https://api.github.com/users/Doragd/gists{/gist_id}", "starred_url": "https://api.github.com/users/Doragd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Doragd/subscriptions", "organizations_url": "https://api.github.com/users/Doragd/orgs", "repos_url": "https://api.github.com/users/Doragd/repos", "events_url": "https://api.github.com/users/Doragd/events{/privacy}", "received_events_url": "https://api.github.com/users/Doragd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Doragd 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other matters, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗", "all right. Thanks anyway. @gante " ]
1,655
1,655
1,655
NONE
null
Hi, I intend to fine-tune BERT on a simple text classification task. However, I got different results when using the Hugging Face library (torch 1.8.1+cu111) and Google's official code (tf 1.15). I wonder whether there is any optimization in Hugging Face for fine-tuning BERT. By the way, I believe I used the same hyperparameters, yet I got higher performance using the Hugging Face library.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17746/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17745/comments
https://api.github.com/repos/huggingface/transformers/issues/17745/events
https://github.com/huggingface/transformers/issues/17745
1,274,534,380
I_kwDOCUB6oc5L99ns
17,745
GPT-NEOX RuntimeError
{ "login": "yupei9", "id": 63060915, "node_id": "MDQ6VXNlcjYzMDYwOTE1", "avatar_url": "https://avatars.githubusercontent.com/u/63060915?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yupei9", "html_url": "https://github.com/yupei9", "followers_url": "https://api.github.com/users/yupei9/followers", "following_url": "https://api.github.com/users/yupei9/following{/other_user}", "gists_url": "https://api.github.com/users/yupei9/gists{/gist_id}", "starred_url": "https://api.github.com/users/yupei9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yupei9/subscriptions", "organizations_url": "https://api.github.com/users/yupei9/orgs", "repos_url": "https://api.github.com/users/yupei9/repos", "events_url": "https://api.github.com/users/yupei9/events{/privacy}", "received_events_url": "https://api.github.com/users/yupei9/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @yupei9 - great catch! I think you're 100% right - do you want to open a PR to fix it? Also cc @sgugger ", "Ahah sorry, I had a PR ready already since I wanted to test if the fix worked." ]
1,655
1,655
1,655
NONE
null
Hi, when I ran the GPT-NeoX model, I got "RuntimeError: batch1 dim 2 must match batch2 dim 1" in modeling_gpt_neox.py, line 212. While trying to debug and fix this, I found the line `present = None if use_cache else (key, value)` in modeling_gpt_neox.py, line 146. Is that logic inverted, and should the correct code be `present = None if not use_cache else (key, value)`? (A minimal sketch of the suspected inversion follows below.)
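A tiny, self-contained sketch of the suspected inversion — the names mirror the line quoted above, but the helper itself is hypothetical, added only to make the logic testable:

```python
def cache_present(use_cache: bool, key, value):
    # Buggy form:  None if use_cache else (key, value)  — returns the cache
    # only when caching is OFF, starving later steps of past key/values.
    # Fixed form: hand back the (key, value) pair exactly when use_cache is True.
    return (key, value) if use_cache else None

# Quick check of the fixed behavior:
assert cache_present(True, "k", "v") == ("k", "v")
assert cache_present(False, "k", "v") is None
```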
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17745/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17745/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17744/comments
https://api.github.com/repos/huggingface/transformers/issues/17744/events
https://github.com/huggingface/transformers/pull/17744
1,274,497,911
PR_kwDOCUB6oc4502Jt
17,744
Fix `top_k_top_p_filtering` having unintended behavior
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks a lot for the fix @unifyh! " ]
1,655
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? - Fix `top_k_top_p_filtering` not passing `filter_value` to `TopPLogitsWarper`, which caused any top-p-filtered logits to be -inf instead of the specified value - Add a corresponding test @patrickvonplaten
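A hedged sketch of the behavior this PR fixes — the logits are arbitrary, and exactly which positions get filtered depends on the warper's cutoff, but after the patch the removed positions hold the requested `filter_value` instead of `-inf`:

```python
import torch
from transformers import top_k_top_p_filtering

logits = torch.tensor([[10.0, 9.0, 0.1, 0.05]])

# Before the fix, filter_value was forwarded to TopKLogitsWarper only,
# so anything removed by the top-p filter was silently set to -inf.
filtered = top_k_top_p_filtering(logits.clone(), top_k=0, top_p=0.9, filter_value=-1e4)
print(filtered)  # low-probability positions now hold -1e4, not -inf
```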
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17744/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17744", "html_url": "https://github.com/huggingface/transformers/pull/17744", "diff_url": "https://github.com/huggingface/transformers/pull/17744.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17744.patch", "merged_at": 1655840155000 }
https://api.github.com/repos/huggingface/transformers/issues/17743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17743/comments
https://api.github.com/repos/huggingface/transformers/issues/17743/events
https://github.com/huggingface/transformers/pull/17743
1,274,264,508
PR_kwDOCUB6oc450BWn
17,743
Bump notebook from 6.4.10 to 6.4.12 in /examples/research_projects/lxmert
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
CONTRIBUTOR
null
Bumps [notebook](http://jupyter.org) from 6.4.10 to 6.4.12. [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=notebook&package-manager=pip&previous-version=6.4.10&new-version=6.4.12)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17743/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17743", "html_url": "https://github.com/huggingface/transformers/pull/17743", "diff_url": "https://github.com/huggingface/transformers/pull/17743.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17743.patch", "merged_at": 1655482233000 }
https://api.github.com/repos/huggingface/transformers/issues/17742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17742/comments
https://api.github.com/repos/huggingface/transformers/issues/17742/events
https://github.com/huggingface/transformers/pull/17742
1,274,262,714
PR_kwDOCUB6oc450A8S
17,742
Bump notebook from 6.4.10 to 6.4.12 in /examples/research_projects/visual_bert
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" } ]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
CONTRIBUTOR
null
Bumps [notebook](http://jupyter.org) from 6.4.10 to 6.4.12. [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=notebook&package-manager=pip&previous-version=6.4.10&new-version=6.4.12)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17742/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17742", "html_url": "https://github.com/huggingface/transformers/pull/17742", "diff_url": "https://github.com/huggingface/transformers/pull/17742.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17742.patch", "merged_at": 1655482217000 }
https://api.github.com/repos/huggingface/transformers/issues/17741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17741/comments
https://api.github.com/repos/huggingface/transformers/issues/17741/events
https://github.com/huggingface/transformers/issues/17741
1,274,129,329
I_kwDOCUB6oc5L8aux
17,741
InvalidGitRepositoryError while running distillation train example
{ "login": "kaliaanup", "id": 8302569, "node_id": "MDQ6VXNlcjgzMDI1Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/8302569?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kaliaanup", "html_url": "https://github.com/kaliaanup", "followers_url": "https://api.github.com/users/kaliaanup/followers", "following_url": "https://api.github.com/users/kaliaanup/following{/other_user}", "gists_url": "https://api.github.com/users/kaliaanup/gists{/gist_id}", "starred_url": "https://api.github.com/users/kaliaanup/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kaliaanup/subscriptions", "organizations_url": "https://api.github.com/users/kaliaanup/orgs", "repos_url": "https://api.github.com/users/kaliaanup/repos", "events_url": "https://api.github.com/users/kaliaanup/events{/privacy}", "received_events_url": "https://api.github.com/users/kaliaanup/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi, did you resolve this error?", "> Hi, did you resolve this error?\r\n\r\nI haven't. I am still waiting for a reply on this.", "> \r\n\r\nOK, I tried to upgrade the version of gitpython and gitdb2, but it doesn't work.", "Hello, has this issue been resolved?" ]
1,655
1,675
1,658
NONE
null
### System Info ```shell (akalia) akalia@data-workstation-akalia1-gpu-data-10:~/pretrained_rembert/distillation$ python train.py --student_type distilbert --student_config training_configs/distilbert-base-uncased.json --teacher_type bert --teacher_name bert-base-uncased --alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0 --mlm --freeze_pos_embs --dump_path serialization_dir/my_first_training --data_file data/binarized_text.bert-base-uncased.pickle --token_counts data/token_counts.bert-base-uncased.pickle --force 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - Initializing GPUs 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Number of nodes: 1 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Node ID : 0 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Local rank : 0 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - World size : 1 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - GPUs per node : 1 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Master : True 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Multi-node : False 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Multi-GPU : False 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - --- Global rank: 0 - Hostname : data-workstation-akalia1-gpu-data-10.dm.vpc 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - Experiment will be dumped and logged in serialization_dir/my_first_training 06/16/2022 22:03:08 - INFO - utils - PID: 3947 - Param: Namespace(force=True, dump_path='serialization_dir/my_first_training', data_file='data/binarized_text.bert-base-uncased.pickle', student_type='distilbert', student_config='training_configs/distilbert-base-uncased.json', student_pretrained_weights=None, teacher_type='bert', teacher_name='bert-base-uncased', temperature=2.0, alpha_ce=5.0, alpha_mlm=2.0, alpha_clm=0.0, alpha_mse=0.0, alpha_cos=1.0, mlm=True, mlm_mask_prop=0.15, word_mask=0.8, word_keep=0.1, word_rand=0.1, mlm_smoothing=0.7, token_counts='data/token_counts.bert-base-uncased.pickle', restrict_ce_to_mask=False, freeze_pos_embs=True, freeze_token_type_embds=False, n_epoch=3, batch_size=5, group_by_size=True, gradient_accumulation_steps=50, warmup_prop=0.05, weight_decay=0.0, learning_rate=0.0005, adam_epsilon=1e-06, max_grad_norm=5.0, initializer_range=0.02, fp16=False, fp16_opt_level='O1', n_gpu=1, local_rank=0, seed=56, log_interval=500, checkpoint_interval=4000, n_nodes=1, node_id=0, global_rank=0, world_size=1, n_gpu_per_node=1, multi_gpu=False, is_master=True, multi_node=False) Traceback (most recent call last): File "/home/akalia/pretrained_rembert/distillation/train.py", line 324, in <module> main() File "/home/akalia/pretrained_rembert/distillation/train.py", line 245, in main git_log(args.dump_path) File "/home/akalia/pretrained_rembert/distillation/utils.py", line 40, in git_log repo = git.Repo(search_parent_directories=True) File "/home/akalia/anaconda3/envs/akalia/lib/python3.9/site-packages/git/repo/base.py", line 224, in __init__ self.working_dir: Optional[PathLike] = self._working_tree_dir or self.common_dir File "/home/akalia/anaconda3/envs/akalia/lib/python3.9/site-packages/git/repo/base.py", line 307, in common_dir raise InvalidGitRepositoryError() git.exc.InvalidGitRepositoryError ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```bash python train.py \ --student_type distilbert \ --student_config training_configs/distilbert-base-uncased.json \ --teacher_type bert \ --teacher_name bert-base-uncased \ --alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0 --mlm \ --freeze_pos_embs \ --dump_path serialization_dir/my_first_training \ --data_file data/binarized_text.bert-base-uncased.pickle \ --token_counts data/token_counts.bert-base-uncased.pickle \ --force ``` ### Expected behavior ```shell A distilled model will be generated ```
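A hedged sketch of a defensive `git_log`, matching the traceback above: `git.Repo(search_parent_directories=True)` raises `InvalidGitRepositoryError` whenever the script is launched outside a git checkout, so either run the script from inside a clone of the repository or guard the call. The function body below is illustrative, not the actual utility:

```python
import git

def git_log(dump_path):
    try:
        repo = git.Repo(search_parent_directories=True)
        print("git sha:", repo.head.object.hexsha)
    except git.exc.InvalidGitRepositoryError:
        # Happens when running outside a git checkout, as in the traceback above.
        print(f"Not inside a git repository; skipping git log for {dump_path}")
```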
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17741/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17740/comments
https://api.github.com/repos/huggingface/transformers/issues/17740/events
https://github.com/huggingface/transformers/pull/17740
1,274,027,178
PR_kwDOCUB6oc45zQQZ
17,740
Add UL2 (just docs)
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Shouldn't we also add a conversion script? Or isn't this required?", "> Shouldn't we also add a conversion script? Or isn't this required?\r\n\r\nIt should be the same as: https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py \r\n\r\ncc @DanielHesslow ", "Yeah I based the conversion off of that script and mostly added a bunch of hacks to work around limitations of my local system. At some point there should probably be a stable conversion script from t5x, but for now the script above is good enough.", "> Thanks for adding this! I'm curious why the empty module? Does one of the script complain if we don't add it?\r\n\r\nCopied it more or less from dialogpt after I noticed one check repo test was failing. Re-iterated and it looks like the only thing that is required is that the name is in `configuration_auto.py` - thanks for double-checking here!", "> Thanks for adding this! I'm curious why the empty module? Does one of the script complain if we don't add it?\r\n\r\nCopied it more or less from dialogpt after I noticed one check repo test was failing. Re-iterated and it looks like the only thing that is required is that the name is in `configuration_auto.py` - thanks for double-checking here!" ]
1,655
1,655
1,655
MEMBER
null
# What does this PR do? Adds docs for UL2: https://huggingface.co/google/ul2 -> important model that deserves its own doc page IMO ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17740/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17740/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17740", "html_url": "https://github.com/huggingface/transformers/pull/17740", "diff_url": "https://github.com/huggingface/transformers/pull/17740.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17740.patch", "merged_at": 1655799891000 }
https://api.github.com/repos/huggingface/transformers/issues/17739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17739/comments
https://api.github.com/repos/huggingface/transformers/issues/17739/events
https://github.com/huggingface/transformers/pull/17739
1,274,018,518
PR_kwDOCUB6oc45zOUE
17,739
[WIP] DETR TF implementation
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17739). All of your documentation changes will be reflected on that endpoint." ]
1,655
1,658
null
COLLABORATOR
null
# What does this PR do? Add a TF implementation of the DETR model. Depends on the TF implementation of ResNet being merged in to provide a backbone: https://github.com/huggingface/transformers/pull/17427 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17739/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17739", "html_url": "https://github.com/huggingface/transformers/pull/17739", "diff_url": "https://github.com/huggingface/transformers/pull/17739.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17739.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17738/comments
https://api.github.com/repos/huggingface/transformers/issues/17738/events
https://github.com/huggingface/transformers/pull/17738
1,273,907,712
PR_kwDOCUB6oc45y2C9
17,738
deprecate is_torch_bf16_available
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "does it look good now, Sylvain? ", "Yes, all good :-) Thanks again!" ]
1,655
1,655
1,655
CONTRIBUTOR
null
This is a follow-up to https://github.com/huggingface/transformers/pull/17734, where @pacman100 discovered that the IPEX PR made `is_torch_bf16_available` ambiguous: it went from GPU-only checks to CPU-or-GPU checks, which leaves its behavior undefined. So this PR deprecates that function in favor of the very specific `is_torch_bf16_gpu_available` and `is_torch_bf16_cpu_available` that were added in https://github.com/huggingface/transformers/pull/17734. @sgugger
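A minimal usage sketch of the two replacement helpers, assuming they are exported from `transformers.utils` as added in #17734:

```python
from transformers.utils import is_torch_bf16_cpu_available, is_torch_bf16_gpu_available

# Each helper answers one unambiguous question instead of the old catch-all.
if is_torch_bf16_gpu_available():
    print("bf16 is usable on this GPU")
elif is_torch_bf16_cpu_available():
    print("bf16 is usable on this CPU")
else:
    print("no bf16 support detected")
```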
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17738/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17738", "html_url": "https://github.com/huggingface/transformers/pull/17738", "diff_url": "https://github.com/huggingface/transformers/pull/17738.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17738.patch", "merged_at": 1655728811000 }
https://api.github.com/repos/huggingface/transformers/issues/17737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17737/comments
https://api.github.com/repos/huggingface/transformers/issues/17737/events
https://github.com/huggingface/transformers/pull/17737
1,273,806,074
PR_kwDOCUB6oc45ygDP
17,737
CLI: detect and store weights as float16
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17737). All of your documentation changes will be reflected on that endpoint.", "> Will TensorFlow automatically load stored FP16 weights in FP32 like PyTorch does?\r\n\r\nNo, if we inspect the variables after loading, they are FP16. However, as soon as we pass some input, we can see that the outputs of the internal computations are FP32, despite the weights being FP16.\r\n\r\nThe model error (vs PT) is exactly the same, before and after adding these lines :)\r\n\r\nThe documentation is not clear, but I believe `tf.keras.backend.set_floatx` sets the precision of the internal computations. For instance, if we don't reset to float32 (the default) after storing as float16, the error is much much larger (~1e3 times larger).", "Mmmm, in this case I would avoid storing the weights in FP16 on the Hub before adding something in `from_pretrained` that will convert them back to FP32 like it's done for PyTorch.", "> Mmmm, in this case I would avoid storing the weights in FP16 on the Hub before adding something in from_pretrained that will convert them back to FP32 like it's done for PyTorch.\r\n\r\n👍 I can give it a go (and leave this PR open meanwhile)" ]
1,655
1,658
null
MEMBER
null
# What does this PR do? Updates the `pt-to-tf` CLI to detect and store weights as float16. Battle-tested with OPT.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17737/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17737/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17737", "html_url": "https://github.com/huggingface/transformers/pull/17737", "diff_url": "https://github.com/huggingface/transformers/pull/17737.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17737.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17736/comments
https://api.github.com/repos/huggingface/transformers/issues/17736/events
https://github.com/huggingface/transformers/issues/17736
1,273,712,383
I_kwDOCUB6oc5L607_
17,736
Cannot import 'LongT5Model' from 'transformers'
{ "login": "jorgeutd", "id": 30812821, "node_id": "MDQ6VXNlcjMwODEyODIx", "avatar_url": "https://avatars.githubusercontent.com/u/30812821?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jorgeutd", "html_url": "https://github.com/jorgeutd", "followers_url": "https://api.github.com/users/jorgeutd/followers", "following_url": "https://api.github.com/users/jorgeutd/following{/other_user}", "gists_url": "https://api.github.com/users/jorgeutd/gists{/gist_id}", "starred_url": "https://api.github.com/users/jorgeutd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jorgeutd/subscriptions", "organizations_url": "https://api.github.com/users/jorgeutd/orgs", "repos_url": "https://api.github.com/users/jorgeutd/repos", "events_url": "https://api.github.com/users/jorgeutd/events{/privacy}", "received_events_url": "https://api.github.com/users/jorgeutd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nLongT5 is only available in Transformers v4.20.", "I guess it got released today. Thank you @NielsRogge you are the GOAT.", "@NielsRogge do I still need to use the prefix:\r\n\r\nif model_checkpoint in [\"t5-small\", \"t5-base\", \"t5-larg\", \"t5-3b\", \"t5-11b\"]:\r\n prefix = \"summarize: \"\r\nelse:\r\n prefix = \"\"\r\n\r\nfor summarization with the LongT5 model or not?\r\n\r\nThanks,\r\n\r\nJorge", "@NielsRogge Also I think the model name in the long-t5 README is wrong (https://huggingface.co/google/long-t5-tglobal-large). It says \r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/longt5-tglobal-large\")\r\nmodel = LongT5Model.from_pretrained(\"google/longt5-tglobal-large\")\r\n```\r\nBut I think it should be \r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/long-t5-tglobal-large\")\r\nmodel = LongT5Model.from_pretrained(\"google/long-t5-tglobal-large\") \r\n```\r\n", "Feel free to open an issue/PR on the repo on the hub!", "The code examples were fixed. Closing this issue!", "@jorgeutd Hi!\r\nHave you figured out of the question? In the official document, it said that LongT5 does not use prefix. How do we use it in different down tasks? Thanks." ]
1,655
1,659
1,655
NONE
null
Hello Team,

I am getting the following error when I am trying to import the new LongT5Model into my notebook:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
/tmp/ipykernel_55187/343105862.py in <cell line: 3>()
      1 #from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
      2
----> 3 from transformers import AutoTokenizer, LongT5Model
      4
      5 tokenizer = AutoTokenizer.from_pretrained("google/longt5-tglobal-base")

ImportError: cannot import name 'LongT5Model' from 'transformers' (/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/__init__.py)

Transformers version: 4.19.4
Python version: 3.8
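Based on the comments recorded above (LongT5 first shipped in Transformers v4.20, and the hub README's checkpoint names were later corrected from `longt5-...` to `long-t5-...`), a working version of the snippet would look like:

```python
# Requires transformers >= 4.20; LongT5 is not present in 4.19.x.
from transformers import AutoTokenizer, LongT5Model

# Note the corrected checkpoint name: "long-t5-tglobal-base", not "longt5-tglobal-base".
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5Model.from_pretrained("google/long-t5-tglobal-base")
```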
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17736/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17735/comments
https://api.github.com/repos/huggingface/transformers/issues/17735/events
https://github.com/huggingface/transformers/issues/17735
1,273,710,937
I_kwDOCUB6oc5L60lZ
17,735
Messy text generated by opt-125m.
{ "login": "920232796", "id": 32668889, "node_id": "MDQ6VXNlcjMyNjY4ODg5", "avatar_url": "https://avatars.githubusercontent.com/u/32668889?v=4", "gravatar_id": "", "url": "https://api.github.com/users/920232796", "html_url": "https://github.com/920232796", "followers_url": "https://api.github.com/users/920232796/followers", "following_url": "https://api.github.com/users/920232796/following{/other_user}", "gists_url": "https://api.github.com/users/920232796/gists{/gist_id}", "starred_url": "https://api.github.com/users/920232796/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/920232796/subscriptions", "organizations_url": "https://api.github.com/users/920232796/orgs", "repos_url": "https://api.github.com/users/920232796/repos", "events_url": "https://api.github.com/users/920232796/events{/privacy}", "received_events_url": "https://api.github.com/users/920232796/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @920232796 👋  `facebook/opt-125m` is a relatively small model, so it's normal that its outputs are not great (especially with `do_sample=True`). I'd suggest trying `facebook/opt-350m`.\r\n\r\nAs per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗", "Thank you for your patient reply.", "@920232796 actually we have found a problem in the OPT files -- https://github.com/huggingface/transformers/pull/17785\r\n\r\nIt may improve the quality of the generation, but the comment above remains true :)", "Thank you very much!" ]
1,655
1,655
1,655
NONE
null
### System Info

```shell
transformers version is 4.19.3
```

### Who can help?

_No response_

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

import torch
from transformers import AutoModelForCausalLM
from transformers import TextGenerationPipeline, AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
transformers_opt = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
transformers_opt.eval()

text = "The trophy doesn’t fit in the suitcase because "
text_generator = TextGenerationPipeline(transformers_opt, tokenizer)
out = text_generator(text, max_length=300, do_sample=True, top_p=0.9)
print(f"transformers model out is {out}")

### Expected behavior

```shell
I want to get normal output.
```
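Per the maintainer reply in the comments above, `facebook/opt-125m` is simply too small to sample coherent text from, and `facebook/opt-350m` was suggested instead. A short sketch of the same generation call with the larger checkpoint:

```python
# Same sampling setup as the reproduction above, but with the larger
# checkpoint suggested in the comments ("facebook/opt-350m").
from transformers import pipeline

text_generator = pipeline("text-generation", model="facebook/opt-350m")
out = text_generator(
    "The trophy doesn't fit in the suitcase because ",
    max_length=100,
    do_sample=True,
    top_p=0.9,
)
print(out)
```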
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17735/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17734/comments
https://api.github.com/repos/huggingface/transformers/issues/17734/events
https://github.com/huggingface/transformers/pull/17734
1,273,627,981
PR_kwDOCUB6oc45x6Au
17,734
Refine Bf16 test for deepspeed
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do?

This PR splits the `is_torch_bf16_available` test into two separate ones for GPU and CPU, as the DeepSpeed tests require GPU bfloat16.
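For context, a rough sketch of what the split could look like. The helper names follow the PR description, but the bodies here are assumptions rather than the merged implementation:

```python
# Sketch only: separate bf16 availability checks for GPU and CPU.
import torch
from packaging import version

def is_torch_bf16_gpu_available() -> bool:
    # CUDA bfloat16 needs a capable GPU and a recent PyTorch build.
    return torch.cuda.is_available() and torch.cuda.is_bf16_supported()

def is_torch_bf16_cpu_available() -> bool:
    # CPU bfloat16 autocast support landed around torch 1.10 (assumption).
    return version.parse(torch.__version__) >= version.parse("1.10")
```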
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17734/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17734", "html_url": "https://github.com/huggingface/transformers/pull/17734", "diff_url": "https://github.com/huggingface/transformers/pull/17734.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17734.patch", "merged_at": 1655393279000 }
https://api.github.com/repos/huggingface/transformers/issues/17733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17733/comments
https://api.github.com/repos/huggingface/transformers/issues/17733/events
https://github.com/huggingface/transformers/pull/17733
1,273,619,678
PR_kwDOCUB6oc45x4Lh
17,733
Layoutlmv2 tesseractconfig
{ "login": "kelvinAI", "id": 10686779, "node_id": "MDQ6VXNlcjEwNjg2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kelvinAI", "html_url": "https://github.com/kelvinAI", "followers_url": "https://api.github.com/users/kelvinAI/followers", "following_url": "https://api.github.com/users/kelvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions", "organizations_url": "https://api.github.com/users/kelvinAI/orgs", "repos_url": "https://api.github.com/users/kelvinAI/repos", "events_url": "https://api.github.com/users/kelvinAI/events{/privacy}", "received_events_url": "https://api.github.com/users/kelvinAI/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge kindly review. Thanks!", "> LGTM! Could you add it to LayoutLMv3's feature extractor as well?\r\n\r\nSounds good! I'll ping you when it's done. Do I need to do anything to merge this to huggingface:main? You'll trigger the merge is that right?", ">You'll trigger the merge is that right?\r\n\r\nYes, indeed.", "Hi @kelvinAI, could you add it to LayoutLMv3 as well in this PR?\r\n\r\nThanks!", "> Hi @kelvinAI, could you add it to LayoutLMv3 as well in this PR?\r\n> \r\n> Thanks!\r\n\r\nDone! @NielsRogge ", "@NielsRogge pls review.\r\nThanks!", "Hi @kelvinAI, could you apply the suggestions such that I can merge your PR?\r\n\r\nThanks! ", "@NielsRogge done! :) " ]
1,655
1,659
1,659
CONTRIBUTOR
null
# What does this PR do?

Gives users the option to set the config parameter used by Tesseract when performing feature extraction, e.g. to change PSM levels during transcription by passing '--psm 10' to the config parameter when invoking image_to_data. Changing the PSM value has been shown to greatly influence the end result of LayoutLMv2/XLM/v3, and the best value differs depending on the document formatting. Refer: [PSM](https://github.com/tesseract-ocr/tesseract/issues/434)

```python
pytesseract.image_to_data(image, lang=lang, output_type="dict", config="--psm 10")
```

Users can now set the tesseract config parameter during Processor initialization, like so:

```python
# LayoutLMV2
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", ocr_lang="eng", tesseract_config="--psm 5")

# LayoutLMV3
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", ocr_lang="eng", tesseract_config="--psm 5")
```

## Before submitting

- [❌] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [✔️] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [❌] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [✔️] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [❌] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17733/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17733", "html_url": "https://github.com/huggingface/transformers/pull/17733", "diff_url": "https://github.com/huggingface/transformers/pull/17733.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17733.patch", "merged_at": 1659371084000 }
https://api.github.com/repos/huggingface/transformers/issues/17732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17732/comments
https://api.github.com/repos/huggingface/transformers/issues/17732/events
https://github.com/huggingface/transformers/issues/17732
1,273,484,658
I_kwDOCUB6oc5L59Vy
17,732
Issue with trainer.py Line#1022,1025,1035,1043,1051,1059,1061
{ "login": "avmodi", "id": 30557528, "node_id": "MDQ6VXNlcjMwNTU3NTI4", "avatar_url": "https://avatars.githubusercontent.com/u/30557528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avmodi", "html_url": "https://github.com/avmodi", "followers_url": "https://api.github.com/users/avmodi/followers", "following_url": "https://api.github.com/users/avmodi/following{/other_user}", "gists_url": "https://api.github.com/users/avmodi/gists{/gist_id}", "starred_url": "https://api.github.com/users/avmodi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avmodi/subscriptions", "organizations_url": "https://api.github.com/users/avmodi/orgs", "repos_url": "https://api.github.com/users/avmodi/repos", "events_url": "https://api.github.com/users/avmodi/events{/privacy}", "received_events_url": "https://api.github.com/users/avmodi/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Please include a *complete* reproducer for the bug you are raising. Your code sample does not include many things, in particular the `TrainingArguments`.", "```python\r\nimport logging\r\nimport os\r\nfrom statistics import mean, stdev\r\nimport sys\r\nfrom typing import Callable, Dict\r\n\r\nimport numpy as np\r\nfrom pprint import pformat\r\nfrom scipy.special import softmax\r\nimport torch\r\n\r\nfrom transformers import (\r\n AutoTokenizer,\r\n AutoConfig,\r\n HfArgumentParser,\r\n Trainer,\r\n EvalPrediction,\r\n set_seed\r\n)\r\n\r\nfrom utils import calc_classification_metrics, calc_regression_metrics, create_dir_if_not_exists\r\nfrom data import load_data_from_folder\r\nfrom model import TabularConfig, AutoModelWithTabular\r\nfrom multimodal_args import MultiModalDataArguments, ModelArguments, MultiModalTrainingArguments\r\nfrom transformers.debug_utils import DebugOption\r\nfrom transformers.training_args import OptimizerNames\r\nprint(MultiModalDataArguments)\r\nparser = HfArgumentParser((ModelArguments,MultiModalDataArguments,MultiModalTrainingArguments))\r\nmodel_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath('config.json'))\r\n\r\n# training_args.debug = [DebugOption.UNDERFLOW_OVERFLOW]\r\n# training_args.optim = OptimizerNames.ADAMW_HF\r\n\r\nstream_handler = logging.StreamHandler(sys.stdout)\r\nfile_handler = logging.FileHandler(filename=os.path.join(training_args.output_dir, 'eval_log.txt'),\r\n mode='w+')\r\nlogging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n level=logging.INFO,\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n handlers=[stream_handler, file_handler]\r\n)\r\nlogger = logging.getLogger(__name__)\r\n\r\n\r\nset_seed(training_args.seed)\r\nif (\r\n os.path.exists(training_args.output_dir)\r\n and os.listdir(training_args.output_dir)\r\n and training_args.do_train\r\n and not training_args.overwrite_output_dir\r\n ):\r\n raise ValueError(\r\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. 
Use --overwrite_output_dir to overcome.\"\r\n )\r\n \r\n\r\ncreate_dir_if_not_exists(training_args.output_dir)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\r\n cache_dir=model_args.cache_dir,\r\n )\r\ntrain_dataset, val_dataset, test_dataset = load_data_from_folder(\r\n data_args.data_path,\r\n data_args.text_cols,\r\n tokenizer,\r\n label_col=data_args.label_col,\r\n label_list=data_args.label_list,\r\n categorical_cols=data_args.cat_cols,\r\n numerical_cols=data_args.num_cols,\r\n categorical_encode_type=data_args.categorical_encoding,\r\n numerical_transformer_method=data_args.numerical_encoding,\r\n sep_text_token_str=tokenizer.sep_token,\r\n do_train = training_args.do_train,\r\n do_eval = training_args.do_eval,\r\n do_predict = training_args.do_predict\r\n )\r\ntrain_datasets = [train_dataset]\r\nval_datasets = [val_dataset]\r\ntest_datasets = [test_dataset]\r\n\r\ndef build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:\r\n def compute_metrics_fn(p: EvalPrediction):\r\n if task_name == \"classification\":\r\n preds_labels = np.argmax(p.predictions, axis=1)\r\n if p.predictions.shape[-1] == 2:\r\n pred_scores = softmax(p.predictions, axis=1)[:, 1]\r\n else:\r\n pred_scores = softmax(p.predictions, axis=1)\r\n return calc_classification_metrics(pred_scores, preds_labels,\r\n p.label_ids)\r\n elif task_name == \"regression\":\r\n preds = np.squeeze(p.predictions)\r\n return calc_regression_metrics(preds, p.label_ids)\r\n else:\r\n return {}\r\n return compute_metrics_fn\r\n\r\nconfig = AutoConfig.from_pretrained(\r\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\r\n cache_dir=model_args.cache_dir,\r\n )\r\ntabular_config = TabularConfig(num_labels=len(data_args.label_list),\r\n cat_feat_dim=test_dataset.cat_feats.shape[\r\n 1] if test_dataset.cat_feats is not None else 0,\r\n numerical_feat_dim=test_dataset.numerical_feats.shape[\r\n 1] if test_dataset.numerical_feats is not None else 0,\r\n **vars(data_args))\r\nconfig.tabular_config = tabular_config\r\n\r\nmodel = AutoModelWithTabular.from_pretrained(\r\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\r\n config=config,\r\n cache_dir=model_args.cache_dir\r\n)\r\n\r\nimport os\r\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\" # see issue #152\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\r\nos.environ['COMET_MODE'] = 'DISABLED'\r\nlogging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n level=logging.INFO,\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n handlers=[stream_handler, file_handler]\r\n)\r\nlogger = logging.getLogger(__name__)\r\ntraining_args.log_level = 'INFO'\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n compute_metrics=build_compute_metrics_fn(data_args.task))\r\n\r\nif training_args.do_train:\r\n trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)\r\n trainer.save_model()\r\n```\r\n\r\nConfig\r\n\r\n```json\r\n{\"text_cols\" : [],\r\n\"num_cols\" : [],\r\n\"label_col\" : \"label\",\r\n\"label_list\" : [],\r\n\"model_name_or_path\" : \"bert-base-uncased\",\r\n\"data_path\" : \"input\",\r\n\"combine_feat_method\" : \"gating\",\r\n\"task\" : \"classification\",\r\n\"create_folds\" : false,\r\n\"num_classes\" : 3,\r\n\"numerical_transformer_method\" 
: \"min_max\",\r\n\"output_dir\" : \"run/output\",\r\n\"logging_dir\" : \"run/log\",\r\n\"overwrite_output_dir\" : true,\r\n\"do_train\" : true,\r\n\"do_eval\" : true,\r\n\"do_predict\" : true,\r\n\"per_device_train_batch_size\" : 256,\r\n\"per_device_eval_batch_size\" : 256,\r\n\"num_train_epochs\" : 10,\r\n\"evaluate_during_training\" : true,\r\n\"logging_steps\" : 25,\r\n\"eval_steps\" : 50,\r\n\"save_steps\" : 50,\r\n\"log_level\" : \"INFO\",\r\n\"report_to\" : []\r\n}\r\n```\r\nThe training_args.optim = \"adamw_hf\" ( default choice )", "Same as in the other issue, this is not a reproducer. We have no idea how you defined the class `MultiModalTrainingArguments` in particular, which is probably where the problem lies.\r\n\r\nPlease use the [forums](https://discuss.huggingface.co/) to get helps from the community to debug your code (and find a small reproducer of the bug if it is indeed a bug in the lbirary).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,658
1,658
NONE
null
### System Info

```shell
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

- `transformers` version: 4.18.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```

### Who can help?

@sgugger

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```python
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ['COMET_MODE'] = 'DISABLED'

logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    level=logging.INFO,
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[stream_handler, file_handler]
)
logger = logging.getLogger(__name__)
training_args.log_level = 'INFO'

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    compute_metrics=build_compute_metrics_fn(data_args.task))

if training_args.do_train:
    trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
    trainer.save_model()
```

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-29-feef8f1bc697> in <module>
      1 if training_args.do_train:
----> 2     trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
      3     trainer.save_model()

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1262             self.lr_scheduler = lr_scheduler
   1263         elif not delay_optimizer_creation:
-> 1264             self.create_optimizer_and_scheduler(num_training_steps=max_steps)
   1265
   1266         self.state = TrainerState()

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in create_optimizer_and_scheduler(self, num_training_steps)
    827         `create_scheduler`) in a subclass.
    828         """
--> 829         self.create_optimizer()
    830         self.create_scheduler(num_training_steps=num_training_steps, optimizer=self.optimizer)
    831

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in create_optimizer(self)
    851             ]
    852
--> 853             optimizer_cls, optimizer_kwargs = Trainer.get_optimizer_cls_and_kwargs(self.args)
    854
    855             if self.sharded_ddp == ShardedDDPOption.SIMPLE:

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in get_optimizer_cls_and_kwargs(args)
    912                 raise ValueError("Trainer tried to instantiate apex FusedAdam but apex is not installed!")
    913         else:
--> 914             raise ValueError(f"Trainer cannot instantiate unsupported optimizer: {args.optim}")
    915         return optimizer_cls, optimizer_kwargs
    916

**ValueError: Trainer cannot instantiate unsupported optimizer: adamw_hf**

### Expected behavior

```shell
if args.optim == OptimizerNames.ADAFACTOR:
    optimizer_cls = Adafactor
    optimizer_kwargs.update({"scale_parameter": False, "relative_step": False})
elif args.optim == OptimizerNames.ADAMW_HF:
    from .optimization import AdamW

    optimizer_cls = AdamW
    optimizer_kwargs.update(adam_kwargs)
elif args.optim == OptimizerNames.ADAMW_TORCH:
    from torch.optim import AdamW

    optimizer_cls = AdamW
    optimizer_kwargs.update(adam_kwargs)
elif args.optim == OptimizerNames.ADAMW_TORCH_XLA:
    try:
        from torch_xla.amp.syncfree import AdamW

        optimizer_cls = AdamW
        optimizer_kwargs.update(adam_kwargs)
    except ImportError:
        raise ValueError("Trainer failed to import syncfree AdamW from torch_xla.")
elif args.optim == OptimizerNames.ADAMW_APEX_FUSED:
    try:
        from apex.optimizers import FusedAdam

        optimizer_cls = FusedAdam
        optimizer_kwargs.update(adam_kwargs)
    except ImportError:
        raise ValueError("Trainer tried to instantiate apex FusedAdam but apex is not installed!")
elif args.optim == OptimizerNames.ADAMW_BNB:
    try:
        from bitsandbytes.optim import Adam8bit

        optimizer_cls = Adam8bit
        optimizer_kwargs.update(adam_kwargs)
    except ImportError:
        raise ValueError("Trainer tried to instantiate bnb Adam8bit but bnb is not installed!")
elif args.optim == OptimizerNames.SGD:
    optimizer_cls = torch.optim.SGD
elif args.optim == OptimizerNames.ADAGRAD:
    optimizer_cls = torch.optim.Adagrad
else:
    raise ValueError(f"Trainer cannot instantiate unsupported optimizer: {args.optim}")
return optimizer_cls, optimizer_kwargs

We can use OptimizerNames.ADAMW_HF.name == args.optim instead of OptimizerNames.ADAMW_HF == args.optim
```
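The failure the reporter points at boils down to comparing an `Enum` member against a plain string. A self-contained demo of the two behaviors, independent of transformers (the str-subclass variant is the approach transformers' `ExplicitEnum` takes):

```python
# Minimal demo of the reported comparison failure: a plain Enum member never
# equals the string "adamw_hf", so the code falls through to the
# "unsupported optimizer" branch when args.optim arrives as a raw string.
from enum import Enum

class PlainOptimizerNames(Enum):
    ADAMW_HF = "adamw_hf"

class StrOptimizerNames(str, Enum):  # str subclass
    ADAMW_HF = "adamw_hf"

print(PlainOptimizerNames.ADAMW_HF == "adamw_hf")  # False -> ValueError path
print(StrOptimizerNames.ADAMW_HF == "adamw_hf")    # True  -> resolves correctly
```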
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17732/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17731/comments
https://api.github.com/repos/huggingface/transformers/issues/17731/events
https://github.com/huggingface/transformers/pull/17731
1,273,483,307
PR_kwDOCUB6oc45xavJ
17,731
Improve vision models
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,656
1,656
CONTRIBUTOR
null
# What does this PR do?

This PR improves the vision models by:

- removing `to_2tuple`
- sanity checking whether the channel dimension of the pixel values provided to the model matches `config.num_channels`
- replacing the hardcoded 3 with `config.num_channels` for `xxxForMaskedImageModeling` models (fixes #17727)
- replacing the hardcoded 3 with `config.num_channels` in Flax models (ViT, BEiT)

To do:

- [x] ViT
- [x] BEiT
- [x] DeiT
- [x] Swin
- [x] PoolFormer
- [x] DPT
- [x] YOLOS
- [x] ViLT
- [x] GLPN
- [x] Data2VecVision
- [x] MaskFormer
- [x] ViTMAE
- [x] TF and Flax implementations
- [x] Corresponding test files
- [x] add more Copied from statements (e.g. DropPath)
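As an illustration of the channel-dimension sanity check from the list above, here is a sketch of how such a check could look inside a patch-embedding layer; it is illustrative only, and the merged code may differ in wording and placement:

```python
# Sketch of a channel-count sanity check in a ViT-style patch embedding.
import torch
from torch import nn

class PatchEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.num_channels = config.num_channels
        self.projection = nn.Conv2d(
            config.num_channels, config.hidden_size,
            kernel_size=config.patch_size, stride=config.patch_size,
        )

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        # Fail fast with a clear message instead of a cryptic Conv2d error.
        if pixel_values.shape[1] != self.num_channels:
            raise ValueError(
                "Make sure that the channel dimension of the pixel values "
                "matches the one set in the configuration."
            )
        return self.projection(pixel_values).flatten(2).transpose(1, 2)
```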
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17731/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17731", "html_url": "https://github.com/huggingface/transformers/pull/17731", "diff_url": "https://github.com/huggingface/transformers/pull/17731.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17731.patch", "merged_at": 1656063292000 }
https://api.github.com/repos/huggingface/transformers/issues/17730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17730/comments
https://api.github.com/repos/huggingface/transformers/issues/17730/events
https://github.com/huggingface/transformers/pull/17730
1,273,461,923
PR_kwDOCUB6oc45xWJH
17,730
Fix tf shared embedding
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "All the TF weights of OPT will need to be updated if this is approved. I think I can handle that along with #17713. \r\n", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do?

A hack was used to properly import the shared embedding weights, but it can be removed (removing it is also convenient for the sharding PR).

Found this while testing #17713. In HF's `save_pretrained` and `from_pretrained`, the layer name is changed using `name = "/".join(weight_name.split("/")[1:])`. This was breaking for OPT because the layer name was `'decoder.embed_tokens/model.decoder.embed_tokens/weight:0'` instead of `'tfopt_model/model/decoder/embed_tokens/weight:0'`. The naming is strange, and a scope hack had to be used. The hack comes from `BART`.
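A tiny string-only demo of why the name munging breaks for the mis-scoped OPT variable; the two names are taken verbatim from the description above:

```python
# Stripping the first "/"-segment only yields the expected relative name when
# the variable lives in a proper nested scope.
broken = "decoder.embed_tokens/model.decoder.embed_tokens/weight:0"
expected = "tfopt_model/model/decoder/embed_tokens/weight:0"

def strip_model_scope(weight_name: str) -> str:
    return "/".join(weight_name.split("/")[1:])

print(strip_model_scope(expected))  # model/decoder/embed_tokens/weight:0
print(strip_model_scope(broken))    # model.decoder.embed_tokens/weight:0 (mismatch)
```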
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17730/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17730", "html_url": "https://github.com/huggingface/transformers/pull/17730", "diff_url": "https://github.com/huggingface/transformers/pull/17730.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17730.patch", "merged_at": 1655381868000 }
https://api.github.com/repos/huggingface/transformers/issues/17729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17729/comments
https://api.github.com/repos/huggingface/transformers/issues/17729/events
https://github.com/huggingface/transformers/issues/17729
1,273,423,171
I_kwDOCUB6oc5L5uVD
17,729
Issue with trainer.py class line #1460, 2643 and 1745.
{ "login": "avmodi", "id": 30557528, "node_id": "MDQ6VXNlcjMwNTU3NTI4", "avatar_url": "https://avatars.githubusercontent.com/u/30557528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avmodi", "html_url": "https://github.com/avmodi", "followers_url": "https://api.github.com/users/avmodi/followers", "following_url": "https://api.github.com/users/avmodi/following{/other_user}", "gists_url": "https://api.github.com/users/avmodi/gists{/gist_id}", "starred_url": "https://api.github.com/users/avmodi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avmodi/subscriptions", "organizations_url": "https://api.github.com/users/avmodi/orgs", "repos_url": "https://api.github.com/users/avmodi/repos", "events_url": "https://api.github.com/users/avmodi/events{/privacy}", "received_events_url": "https://api.github.com/users/avmodi/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Please include a *complete* reproducer for the bug you are raising. Your code sample does not include many things, in particular the `TrainingArguments`.", "```python\r\n\r\nimport logging\r\nimport os\r\nfrom statistics import mean, stdev\r\nimport sys\r\nfrom typing import Callable, Dict\r\n\r\nimport numpy as np\r\nfrom pprint import pformat\r\nfrom scipy.special import softmax\r\nimport torch\r\n\r\nfrom transformers import (\r\n AutoTokenizer,\r\n AutoConfig,\r\n HfArgumentParser,\r\n Trainer,\r\n EvalPrediction,\r\n set_seed\r\n)\r\n\r\nfrom utils import calc_classification_metrics, calc_regression_metrics, create_dir_if_not_exists\r\nfrom data import load_data_from_folder\r\nfrom model import TabularConfig, AutoModelWithTabular\r\nfrom multimodal_args import MultiModalDataArguments, ModelArguments, MultiModalTrainingArguments\r\nfrom transformers.debug_utils import DebugOption\r\nfrom transformers.training_args import OptimizerNames\r\nprint(MultiModalDataArguments)\r\nparser = HfArgumentParser((ModelArguments,MultiModalDataArguments,MultiModalTrainingArguments))\r\nmodel_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath('config.json'))\r\n\r\n# training_args.debug = [DebugOption.UNDERFLOW_OVERFLOW]\r\n# training_args.optim = OptimizerNames.ADAMW_HF\r\n\r\nstream_handler = logging.StreamHandler(sys.stdout)\r\nfile_handler = logging.FileHandler(filename=os.path.join(training_args.output_dir, 'eval_log.txt'),\r\n mode='w+')\r\nlogging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n level=logging.INFO,\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n handlers=[stream_handler, file_handler]\r\n)\r\nlogger = logging.getLogger(__name__)\r\n\r\n\r\nset_seed(training_args.seed)\r\nif (\r\n os.path.exists(training_args.output_dir)\r\n and os.listdir(training_args.output_dir)\r\n and training_args.do_train\r\n and not training_args.overwrite_output_dir\r\n ):\r\n raise ValueError(\r\n f\"Output directory ({training_args.output_dir}) already exists and is not empty. 
Use --overwrite_output_dir to overcome.\"\r\n )\r\n \r\n\r\ncreate_dir_if_not_exists(training_args.output_dir)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\r\n cache_dir=model_args.cache_dir,\r\n )\r\ntrain_dataset, val_dataset, test_dataset = load_data_from_folder(\r\n data_args.data_path,\r\n data_args.text_cols,\r\n tokenizer,\r\n label_col=data_args.label_col,\r\n label_list=data_args.label_list,\r\n categorical_cols=data_args.cat_cols,\r\n numerical_cols=data_args.num_cols,\r\n categorical_encode_type=data_args.categorical_encoding,\r\n numerical_transformer_method=data_args.numerical_encoding,\r\n sep_text_token_str=tokenizer.sep_token,\r\n do_train = training_args.do_train,\r\n do_eval = training_args.do_eval,\r\n do_predict = training_args.do_predict\r\n )\r\ntrain_datasets = [train_dataset]\r\nval_datasets = [val_dataset]\r\ntest_datasets = [test_dataset]\r\n\r\ndef build_compute_metrics_fn(task_name: str) -> Callable[[EvalPrediction], Dict]:\r\n def compute_metrics_fn(p: EvalPrediction):\r\n if task_name == \"classification\":\r\n preds_labels = np.argmax(p.predictions, axis=1)\r\n if p.predictions.shape[-1] == 2:\r\n pred_scores = softmax(p.predictions, axis=1)[:, 1]\r\n else:\r\n pred_scores = softmax(p.predictions, axis=1)\r\n return calc_classification_metrics(pred_scores, preds_labels,\r\n p.label_ids)\r\n elif task_name == \"regression\":\r\n preds = np.squeeze(p.predictions)\r\n return calc_regression_metrics(preds, p.label_ids)\r\n else:\r\n return {}\r\n return compute_metrics_fn\r\n\r\nconfig = AutoConfig.from_pretrained(\r\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\r\n cache_dir=model_args.cache_dir,\r\n )\r\ntabular_config = TabularConfig(num_labels=len(data_args.label_list),\r\n cat_feat_dim=test_dataset.cat_feats.shape[\r\n 1] if test_dataset.cat_feats is not None else 0,\r\n numerical_feat_dim=test_dataset.numerical_feats.shape[\r\n 1] if test_dataset.numerical_feats is not None else 0,\r\n **vars(data_args))\r\nconfig.tabular_config = tabular_config\r\n\r\nmodel = AutoModelWithTabular.from_pretrained(\r\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\r\n config=config,\r\n cache_dir=model_args.cache_dir\r\n)\r\n\r\nimport os\r\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\" # see issue #152\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\r\nos.environ['COMET_MODE'] = 'DISABLED'\r\nlogging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n level=logging.INFO,\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n handlers=[stream_handler, file_handler]\r\n)\r\nlogger = logging.getLogger(__name__)\r\ntraining_args.log_level = 'INFO'\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n compute_metrics=build_compute_metrics_fn(data_args.task))\r\n\r\nif training_args.do_train:\r\n trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)\r\n trainer.save_model()\r\n```\r\nConfig File\r\n```json\r\n{\"text_cols\" : [],\r\n\"num_cols\" : [],\r\n\"label_col\" : \"label\",\r\n\"label_list\" : [],\r\n\"model_name_or_path\" : \"bert-base-uncased\",\r\n\"data_path\" : \"input\",\r\n\"combine_feat_method\" : \"gating\",\r\n\"task\" : \"classification\",\r\n\"create_folds\" : false,\r\n\"num_classes\" : 3,\r\n\"numerical_transformer_method\" : 
\"min_max\",\r\n\"output_dir\" : \"run/output\",\r\n\"logging_dir\" : \"run/log\",\r\n\"overwrite_output_dir\" : true,\r\n\"do_train\" : true,\r\n\"do_eval\" : true,\r\n\"do_predict\" : true,\r\n\"per_device_train_batch_size\" : 256,\r\n\"per_device_eval_batch_size\" : 256,\r\n\"num_train_epochs\" : 10,\r\n\"evaluate_during_training\" : true,\r\n\"logging_steps\" : 25,\r\n\"eval_steps\" : 50,\r\n\"save_steps\" : 50,\r\n\"log_level\" : \"INFO\",\r\n\"report_to\" : []\r\n}\r\n```\r\n\r\nThe training_args.dedug = \"\" ( default value )", "This is not a reproducer, as it relies on modules you have define on your environment. We have no idea how you defined the class `MultiModalTrainingArguments` in particular, which is probably where the problem lies.\r\n\r\nPlease use the [forums](https://discuss.huggingface.co/) to get helps from the community to debug your code (and find a small reproducer of the bug if it is indeed a bug in the lbirary).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,658
1,658
NONE
null
### System Info

```shell
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

- `transformers` version: 4.18.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```

### Who can help?

@sgugger

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

```python
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ['COMET_MODE'] = 'DISABLED'

logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    level=logging.INFO,
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[stream_handler, file_handler]
)
logger = logging.getLogger(__name__)
training_args.log_level = 'INFO'

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    compute_metrics=build_compute_metrics_fn(data_args.task))

trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
trainer.save_model()
```

Error:

TypeError                                 Traceback (most recent call last)
<ipython-input-13-feef8f1bc697> in <module>
      1 if training_args.do_train:
----> 2     trainer.train(model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None)
      3     trainer.save_model()

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1239         )
   1240
-> 1241         if DebugOption.UNDERFLOW_OVERFLOW in self.args.debug:
   1242             if self.args.n_gpu > 1:
   1243                 # nn.DataParallel(model) replicates the model, creating new variables and module

**TypeError: 'in <string>' requires string as left operand, not DebugOption**

### Expected behavior

```shell
The solution can be to use
DebugOption.UNDERFLOW_OVERFLOW.value in self.args.debug:
instead of
DebugOption.UNDERFLOW_OVERFLOW in self.args.debug:
```
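The traceback reduces to Python's rule that `x in some_string` requires `x` itself to be a string. A minimal reproduction outside transformers:

```python
# Membership tests against a string only work when the enum member is itself
# a str subclass; a plain Enum member raises the TypeError seen above.
from enum import Enum

class PlainDebugOption(Enum):
    UNDERFLOW_OVERFLOW = "underflow_overflow"

class StrDebugOption(str, Enum):
    UNDERFLOW_OVERFLOW = "underflow_overflow"

args_debug = ""  # args.debug arriving as a plain (empty) string

print(StrDebugOption.UNDERFLOW_OVERFLOW in args_debug)    # False, no crash
print(PlainDebugOption.UNDERFLOW_OVERFLOW in args_debug)  # TypeError: 'in <string>' ...
```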
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17729/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17728/comments
https://api.github.com/repos/huggingface/transformers/issues/17728/events
https://github.com/huggingface/transformers/issues/17728
1,273,187,899
I_kwDOCUB6oc5L4047
17,728
CI Tests are failing in "run_tests_pipelines_tf"
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[]
1,655
1,655
1,655
CONTRIBUTOR
null
### System Info

```shell
CI setup for `run_tests_pipelines_tf`:

run_tests_pipelines_tf:
    working_directory: ~/transformers
    docker:
        - image: circleci/python:3.7
    environment:
        OMP_NUM_THREADS: 1
        RUN_PIPELINE_TESTS: yes
        TRANSFORMERS_IS_CI: yes
    resource_class: xlarge
    parallelism: 1
    steps:
        - checkout
        - restore_cache:
            keys:
                - v0.4-tf-{{ checksum "setup.py" }}
                - v0.4-{{ checksum "setup.py" }}
        - run: pip install --upgrade pip
        - run: pip install .[sklearn,tf-cpu,testing,sentencepiece]
        - run: pip install tensorflow_probability
        - save_cache:
            key: v0.4-tf-{{ checksum "setup.py" }}
            paths:
                - '~/.cache/pip'
        - run: python utils/tests_fetcher.py | tee test_preparation.txt
        - store_artifacts:
            path: ~/transformers/test_preparation.txt
        - run: |
            if [ -f test_list.txt ]; then
                python -m pytest -n 8 --max-worker-restart=0 --dist=loadfile -rA -s --make-reports=tests_pipelines_tf $(cat test_list.txt) -m is_pipeline_test | tee tests_output.txt
            fi
        - store_artifacts:
            path: ~/transformers/tests_output.txt
        - store_artifacts:
            path: ~/transformers/reports
```

### Who can help?

@ydshieh , @sgugger

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

The following CI tests are failing in "run_tests_pipelines_tf" for PR #17623, even though no changes were made to TF or speech functionality:

```
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_chunking_fast
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_return_timestamps_ctc_fast
FAILED tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_small_model_pt
===== 3 failed, 680 passed, 578 skipped, 270 warnings in 222.67s (0:03:42) =====
```

```
=================================== FAILURES ===================================
__________ AutomaticSpeechRecognitionPipelineTests.test_chunking_fast __________
[gw1] linux -- Python 3.7.12 /usr/local/bin/python

self = Audio(sampling_rate=16000, mono=True, decode=True, id=None)
value = '/home/circleci/.cache/huggingface/datasets/downloads/extracted/06793a6d1707e1987473fd67ba38f9156c15e5a8d2956a4fff1d9690877b20a8/dev_clean/1272/128104/1272-128104-0000.flac'

    def encode_example(self, value: Union[str, dict]) -> dict:
        """Encode example into a format for Arrow.

        Args:
            value (:obj:`str` or :obj:`dict`): Data passed as input to Audio feature.

        Returns:
            :obj:`dict`
        """
        try:
>           import soundfile as sf  # soundfile is a dependency of librosa, needed to decode audio files.
E           ModuleNotFoundError: No module named 'soundfile'

../.local/lib/python3.7/site-packages/datasets/features/audio.py:83: ModuleNotFoundError
```

```
_________ AutomaticSpeechRecognitionPipelineTests.test_small_model_pt __________
[gw3] linux -- Python 3.7.12 /usr/local/bin/python

self = <tests.pipelines.test_pipelines_automatic_speech_recognition.AutomaticSpeechRecognitionPipelineTests testMethod=test_small_model_pt>

    @require_torch
    def test_small_model_pt(self):
        speech_recognizer = pipeline(
            task="automatic-speech-recognition",
            model="facebook/s2t-small-mustc-en-fr-st",
            tokenizer="facebook/s2t-small-mustc-en-fr-st",
>           framework="pt",
        )

tests/pipelines/test_pipelines_automatic_speech_recognition.py:136:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/pipelines/__init__.py:638: in pipeline
    feature_extractor, revision=revision, _from_pipeline=task, **model_kwargs
src/transformers/models/auto/feature_extraction_auto.py:326: in from_pretrained
    return feature_extractor_class.from_dict(config_dict, **kwargs)
src/transformers/utils/import_utils.py:809: in __getattr__
    requires_backends(cls, cls._backends)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

obj = <class 'transformers.utils.dummy_speech_objects.Speech2TextFeatureExtractor'>
backends = ['speech']

    def requires_backends(obj, backends):
        if not isinstance(backends, (list, tuple)):
            backends = [backends]

        name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
        checks = (BACKENDS_MAPPING[backend] for backend in backends)
        failed = [msg.format(name) for available, msg in checks if not available()]
        if failed:
>           raise ImportError("".join(failed))
E           ImportError:
E           Speech2TextFeatureExtractor requires the torchaudio library but it was not found in your environment. You can install it with pip:
E           `pip install torchaudio`
```

### Expected behavior

```shell
No tests fail
```
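Both failures point at optional audio backends missing from the CI image rather than at test logic; the module names below come straight from the two error messages, and nothing else is assumed. A quick local check:

```python
# Check whether the audio backends these pipeline tests import are installed.
import importlib.util

for module in ("soundfile", "torchaudio"):
    found = importlib.util.find_spec(module) is not None
    status = "ok" if found else f"missing -> `pip install {module}`"
    print(f"{module}: {status}")
```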
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17728/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17727/comments
https://api.github.com/repos/huggingface/transformers/issues/17727/events
https://github.com/huggingface/transformers/issues/17727
1,273,105,223
I_kwDOCUB6oc5L4gtH
17,727
SimMIM output num_channels should not be hardcoded
{ "login": "ccaapton", "id": 6211551, "node_id": "MDQ6VXNlcjYyMTE1NTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6211551?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccaapton", "html_url": "https://github.com/ccaapton", "followers_url": "https://api.github.com/users/ccaapton/followers", "following_url": "https://api.github.com/users/ccaapton/following{/other_user}", "gists_url": "https://api.github.com/users/ccaapton/gists{/gist_id}", "starred_url": "https://api.github.com/users/ccaapton/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ccaapton/subscriptions", "organizations_url": "https://api.github.com/users/ccaapton/orgs", "repos_url": "https://api.github.com/users/ccaapton/repos", "events_url": "https://api.github.com/users/ccaapton/events{/privacy}", "received_events_url": "https://api.github.com/users/ccaapton/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,655
1,656
1,656
NONE
null
### Feature request

In all 3 SimMIM models, the number of channels of the reconstructed image is hardcoded as 3. This should be configurable as `num_channels`:

```
deit/modeling_deit.py
swin/modeling_swin.py
vit/modeling_vit.py

nn.Conv2d(in_channels=config.hidden_size, out_channels=config.encoder_stride**2 * 3, kernel_size=1)
```

@NielsRogge

### Motivation

I'm training a grayscale model, but the reconstructed image has a different number of channels than the input image.

### Your contribution

None
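A sketch of the requested change, mirroring the `nn.Conv2d` line quoted above; the surrounding `nn.PixelShuffle` follows the SimMIM decoder pattern in these models, but treat the exact layout as illustrative:

```python
# Derive the decoder's output channels from the config instead of hardcoding 3.
from torch import nn

def build_simmim_decoder(config):
    return nn.Sequential(
        nn.Conv2d(
            in_channels=config.hidden_size,
            out_channels=config.encoder_stride**2 * config.num_channels,
            kernel_size=1,
        ),
        # Rearranges (stride^2 * C, H, W) into (C, H * stride, W * stride),
        # so a grayscale config (num_channels=1) reconstructs a 1-channel image.
        nn.PixelShuffle(config.encoder_stride),
    )
```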
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17727/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17726/comments
https://api.github.com/repos/huggingface/transformers/issues/17726/events
https://github.com/huggingface/transformers/issues/17726
1,273,051,465
I_kwDOCUB6oc5L4TlJ
17,726
Input Packing
{ "login": "Sanger2000", "id": 17725268, "node_id": "MDQ6VXNlcjE3NzI1MjY4", "avatar_url": "https://avatars.githubusercontent.com/u/17725268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sanger2000", "html_url": "https://github.com/Sanger2000", "followers_url": "https://api.github.com/users/Sanger2000/followers", "following_url": "https://api.github.com/users/Sanger2000/following{/other_user}", "gists_url": "https://api.github.com/users/Sanger2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sanger2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sanger2000/subscriptions", "organizations_url": "https://api.github.com/users/Sanger2000/orgs", "repos_url": "https://api.github.com/users/Sanger2000/repos", "events_url": "https://api.github.com/users/Sanger2000/events{/privacy}", "received_events_url": "https://api.github.com/users/Sanger2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@Sanger2000 this is cool - surprised you didn't get much interest. Would you be willing to expand on your approach here so that others can pick it up even if it doesn't get incorporated into the library?\r\n\r\nSpecifically:\r\n\r\n1. what is the delimiter token you used\r\n2. how did you modify the masks/outputs/model to handle the packed inputs?\r\n\r\nIf you have any pointers to existing code or other details, please share them here.\r\n\r\n#6661 is another issue that asks about packing" ]
1,655
1,673
1,658
NONE
null
### Feature request Sequence packing when tokenizing inputs. Most modern large language models pack multiple sequences together to saturate their large context windows. Otherwise, they risk wasted computation on excess padding. For example, T5, GPT-3, and PaLM all implement input packing. "During training we always train on sequences of the full nctx = 2048 token context window, packing multiple documents into a single sequence when documents are shorter than 2048, in order to increase computational efficiency. Sequences with multiple documents are not masked in any special way but instead documents within a sequence are delimited with a special end of text token, giving the language model the information necessary to infer that context separated by the end of text token is unrelated. This allows for efficient training without need for any special sequence-specific masking." [1] "We use a maximum sequence length of 512 and a batch size of 128 sequences. Whenever possible, we “pack” multiple sequences into each entry of the batch..." [2] I would suggest a change to tokenizers: when tokenizing multiple sequences, sufficiently small inputs would automatically be packed together with a special delimiter token separating them. The simplest approach would be a greedy method (see the sketch below): inputs under something like 70% of the context window are added to a queue, and when truncating longer inputs, the remaining chunk is added to the queue as well. When the queue grows larger than the window size, the first $n_{window}$ tokens are flushed and added as another input. A more aggressive strategy would be something like this - https://arxiv.org/pdf/2107.02027.pdf In addition, it would be useful to generate and supply optional masks that prevent the model from attending across different sequences. [1] - Brown, Tom, et al. "Language models are few-shot learners." Advances in neural information processing systems 33 (2020): 1877-1901. [2] - Raffel, Colin, et al. "Exploring the limits of transfer learning with a unified text-to-text transformer." J. Mach. Learn. Res. 21.140 (2020): 11. ### Motivation I have been dealing with a dataset with very high variance in its sequence lengths. I implemented something like this for myself to pack inputs together and thought it could be quite useful as a general feature. ### Your contribution I could probably make the greedy packer and support the custom masks if I have time in the next few weeks.
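A minimal sketch of the greedy packer described above; the function name, the 70% threshold default, and the handling of the final partial window are assumptions for illustration, not the proposed API:

```python
from typing import Iterable, List

def greedy_pack(
    sequences: Iterable[List[int]],
    window: int,
    delimiter_id: int,
    queue_threshold: float = 0.7,
) -> List[List[int]]:
    """Greedily pack token-id sequences into fixed-size windows.

    Sequences shorter than `queue_threshold * window` are buffered and
    joined with `delimiter_id`; longer ones are truncated to the window
    and their remainder is queued for packing.
    """
    packed, queue = [], []
    for seq in sequences:
        if len(seq) >= queue_threshold * window:
            packed.append(seq[:window])  # large input gets its own entry
            queue.extend(seq[window:])   # remainder of a truncated input
        else:
            if queue:
                queue.append(delimiter_id)
            queue.extend(seq)
        while len(queue) >= window:      # flush complete windows
            packed.append(queue[:window])
            queue = queue[window:]
    if queue:
        packed.append(queue)             # final partial window (pad downstream)
    return packed
```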
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17726/reactions", "total_count": 12, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17726/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17725/comments
https://api.github.com/repos/huggingface/transformers/issues/17725/events
https://github.com/huggingface/transformers/pull/17725
1,273,049,377
PR_kwDOCUB6oc45v-VZ
17,725
Fix bug in the example of VisualBertForPreTraining
{ "login": "Jiayi-Pan", "id": 55055083, "node_id": "MDQ6VXNlcjU1MDU1MDgz", "avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jiayi-Pan", "html_url": "https://github.com/Jiayi-Pan", "followers_url": "https://api.github.com/users/Jiayi-Pan/followers", "following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}", "gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions", "organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs", "repos_url": "https://api.github.com/users/Jiayi-Pan/repos", "events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}", "received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
CONTRIBUTOR
null
VisualBert uses the bert-base-uncased tokenizer; therefore, the mask token in the example should be [MASK] instead of {mask} :) - Documentation: @sgugger
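For reference, a quick check with the bert-base-uncased tokenizer confirms the correct mask token (illustrative snippet, not part of the PR diff):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.mask_token)  # "[MASK]" -- not "{mask}"
```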
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17725/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17725", "html_url": "https://github.com/huggingface/transformers/pull/17725", "diff_url": "https://github.com/huggingface/transformers/pull/17725.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17725.patch", "merged_at": 1655380486000 }
https://api.github.com/repos/huggingface/transformers/issues/17724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17724/comments
https://api.github.com/repos/huggingface/transformers/issues/17724/events
https://github.com/huggingface/transformers/pull/17724
1,272,893,183
PR_kwDOCUB6oc45vd5V
17,724
Inference benchmarks of Torchdynamo + FX2TRT(now in Torch-TensorRT)
{ "login": "frank-wei", "id": 6955737, "node_id": "MDQ6VXNlcjY5NTU3Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/6955737?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frank-wei", "html_url": "https://github.com/frank-wei", "followers_url": "https://api.github.com/users/frank-wei/followers", "following_url": "https://api.github.com/users/frank-wei/following{/other_user}", "gists_url": "https://api.github.com/users/frank-wei/gists{/gist_id}", "starred_url": "https://api.github.com/users/frank-wei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frank-wei/subscriptions", "organizations_url": "https://api.github.com/users/frank-wei/orgs", "repos_url": "https://api.github.com/users/frank-wei/repos", "events_url": "https://api.github.com/users/frank-wei/events{/privacy}", "received_events_url": "https://api.github.com/users/frank-wei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17724). All of your documentation changes will be reflected on that endpoint.", "This looks excellent, @frank-wei, thank you for sharing the extensive benchmarks.\r\n\r\nSo the next step is to document how the users can deploy the proposed solution so that they don't need to try to fish it out from the benchmark code. Does it make sense?\r\n\r\nwrt the benchmark script in this PR, I'm not sure where it'd best belong. Perhaps somewhere in your repo as I'm sure it's going to evolve and then we could link to it from our documentation? how does that sound?\r\n\r\nalso cc: @sgugger ", "This should just be a simple addition to the existing TorchDynamo integration with NVFuser (cc: @anijain2305).", "> This should just be a simple addition to the existing TorchDynamo integration with NVFuser (cc: @anijain2305).\r\n\r\nIf we do that, as I proposed originally looking into the 8 ball, it should go under the same cmd arg and have a new value - as we aren't going to add a new cmd arg for each variation. \r\n\r\nIt'd like let's discuss the proposed integration API modifications before implementing those, to save everybody's time.\r\n\r\nAs I suggested probably the best future-proofing is to have the value comprised of possible multiple \"keys\" key1:key2:...:keyn - so that multiple combos could be supported down the road.", "@stas00 and @Chillee any context about the design of the integration API? Does the one API could work both for inference (fx2trt) and training (AOT)?", "It's the one that was added recently to integrate torchdynamo with the nvfuser backend:\r\nhttps://github.com/huggingface/transformers/blob/3981ee8650042e89d9c430ec34def2d58a2a12f7/src/transformers/training_args.py#L467-L469\r\n\r\nyou can see the PR here: https://github.com/huggingface/transformers/pull/17308", "Since the fx2trt needs some preprocessing time to trace the model and create TRT model engine, it is not suitable for inference in training process. \r\nDoes hf has pure inference scenarios where fx2trt could be leveraged?", "> Does hf has pure inference scenarios where fx2trt could be leveraged?\r\n\r\nOf course it has. Everything else besides training is inference\r\n\r\nhttps://github.com/huggingface/transformers/blob/3981ee8650042e89d9c430ec34def2d58a2a12f7/src/transformers/training_args.py#L123-L127\r\n\r\nhttps://github.com/huggingface/transformers/blob/3981ee8650042e89d9c430ec34def2d58a2a12f7/src/transformers/training_args.py#L267-L272\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@frank-wei have you tried `PEGASUS` ? It seemed to be not supported yet. I filled an issue with details: https://github.com/pytorch/torchdynamo/issues/777", "> @frank-wei have you tried `PEGASUS` ? It seemed to be not supported yet. I filled an issue with details: [pytorch/torchdynamo#777](https://github.com/pytorch/torchdynamo/issues/777)\r\n\r\n@philschmid , I did not try `PEGASUS` before. \r\nThe problem in your case is that torch_tensorrt fx path nightly version is out of sync with pytorch nightly. I am updating it today." ]
1,655
1,660
1,658
CONTRIBUTOR
null
Hey HF folks: I propose this PR as a performance-results preview for our SOTA inference engine combining torchdynamo+fx2trt. This PR builds on the sound and great work in #17240 (cc @Chillee @jansel). Since the importance of torchdynamo is already emphasized in #17240, I will skip it and focus on the inference efforts we propose by combining torchdynamo + fx2trt. # A short introduction about fx2trt It is a library developed by the Meta team (cc @yinghai) to lower FX graphs to TensorRT on GPU and take advantage of its various optimization paths. The library itself was just merged into [Torch-TensorRT](https://github.com/pytorch/TensorRT) as one of its two lowering paths to TensorRT running on GPU. # Results This script primarily comes from a great effort by @anijain2305, and I added some inference implementation logic to it. Some highlights about the results: 1. Compared with the FX integration, it is important that torchdynamo can handle the corner cases FX could not, such as control flow or set-item operations. That is the main reason we could extend our implementation to these 12 models. More models could probably be run, but I follow the same model pool as #17240. 2. The experiments are run across batch sizes of 1, 4, and 8. The speedup is measured against the eager model. The trend is that the speedup is high at small batch sizes like 1 but decreases as the batch size increases. The reason could be that torch eager mode is inefficient at handling small computation kernels, while on large kernels TRT and eager mode call similar kernel implementations, so their perf gap shrinks at larger batch sizes. 3. TensorRT has an accuracy degradation problem in float16 mode because of its optimization techniques. I changed the accuracy validation standard to [Cosine Similarity](https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html), which is another important way to evaluate accuracy against eager mode. For float32 mode, the absolute difference is used as the standard. 
Run on A100: ``` To run for fp32 mode: $ python hf_dynamo.py --run-dynamo-fx2trt-fp32 --use-eval-mode To run for fp16 mode: $ python hf_dynamo.py --run-dynamo-fx2trt-fp16 --use-eval-mode ``` cc @stas00 === Final results for fp16 === **Batch size = 1** | model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression | |:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:| | BertForMaskedLM | torch.float16 | True | eager | 0.010 | 0.458 | 1.000 | 1.000 | | BertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.003 | 0.240 | 2.930 | 1.906 | | AlbertForMaskedLM | torch.float16 | True | eager | 0.012 | 0.417 | 1.000 | 1.000 | | AlbertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.004 | 0.051 | 2.775 | 8.257 | | GPT2LMHeadModel | torch.float16 | True | eager | 0.013 | 0.630 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.316 | 2.314 | 1.991 | | T5ForConditionalGeneration | torch.float16 | True | eager | 0.021 | 0.504 | 1.000 | 1.000 | | T5ForConditionalGeneration | torch.float16 | True | dynamo_fx2trt_fp16 | 0.024 | 0.162 | 0.898 | 3.115 | | DistilBertForMaskedLM | torch.float16 | True | eager | 0.006 | 0.273 | 1.000 | 1.000 | | DistilBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.003 | 0.158 | 2.119 | 1.722 | | RobertaForMaskedLM | torch.float16 | True | eager | 0.011 | 0.506 | 1.000 | 1.000 | | RobertaForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.290 | 1.986 | 1.743 | | GPT2LMHeadModel | torch.float16 | True | eager | 0.006 | 0.446 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.003 | 0.220 | 1.909 | 2.026 | | ElectraForMaskedLM | torch.float16 | True | eager | 0.010 | 0.458 | 1.000 | 1.000 | | ElectraForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.003 | 0.241 | 3.044 | 1.900 | | ConvBertForMaskedLM | torch.float16 | True | eager | 0.020 | 0.459 | 1.000 | 1.000 | | ConvBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.021 | 0.231 | 0.944 | 1.985 | | MobileBertForMaskedLM | torch.float16 | True | eager | 0.040 | 0.297 | 1.000 | 1.000 | | MobileBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.128 | 6.693 | 2.312 | | CamembertForMaskedLM | torch.float16 | True | eager | 0.011 | 0.460 | 1.000 | 1.000 | | CamembertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.005 | 0.241 | 2.075 | 1.906 | | LayoutLMForMaskedLM | torch.float16 | True | eager | 0.011 | 0.466 | 1.000 | 1.000 | | LayoutLMForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.004 | 0.247 | 3.031 | 1.885 | **Batch size = 4** | model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression | |:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:| | BertForMaskedLM | torch.float16 | True | eager | 0.012 | 1.185 | 1.000 | 1.000 | | BertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.008 | 0.326 | 1.445 | 3.631 | | AlbertForMaskedLM | torch.float16 | True | eager | 0.013 | 1.455 | 1.000 | 1.000 | | AlbertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.005 | 0.137 | 2.508 | 10.610 | | GPT2LMHeadModel | torch.float16 | True | eager | 0.014 | 1.826 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.008 | 0.515 | 1.705 | 3.549 | | T5ForConditionalGeneration | 
torch.float16 | True | eager | 0.025 | 1.684 | 1.000 | 1.000 | | T5ForConditionalGeneration | torch.float16 | True | dynamo_fx2trt_fp16 | 0.024 | 0.291 | 1.055 | 5.778 | | DistilBertForMaskedLM | torch.float16 | True | eager | 0.007 | 0.687 | 1.000 | 1.000 | | DistilBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.253 | 1.227 | 2.721 | | RobertaForMaskedLM | torch.float16 | True | eager | 0.015 | 1.288 | 1.000 | 1.000 | | RobertaForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.012 | 0.427 | 1.246 | 3.012 | | GPT2LMHeadModel | torch.float16 | True | eager | 0.008 | 1.120 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.006 | 0.394 | 1.430 | 2.842 | | ElectraForMaskedLM | torch.float16 | True | eager | 0.013 | 1.186 | 1.000 | 1.000 | | ElectraForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.008 | 0.329 | 1.517 | 3.605 | | ConvBertForMaskedLM | torch.float16 | True | eager | 0.021 | 1.243 | 1.000 | 1.000 | | ConvBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.020 | 0.321 | 1.077 | 3.876 | | MobileBertForMaskedLM | torch.float16 | True | eager | 0.043 | 0.893 | 1.000 | 1.000 | | MobileBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.009 | 0.221 | 4.923 | 4.046 | | CamembertForMaskedLM | torch.float16 | True | eager | 0.014 | 1.189 | 1.000 | 1.000 | | CamembertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.011 | 0.331 | 1.302 | 3.597 | | LayoutLMForMaskedLM | torch.float16 | True | eager | 0.012 | 1.189 | 1.000 | 1.000 | | LayoutLMForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.009 | 0.330 | 1.443 | 3.599 | **Batch size = 8** | model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression | |:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:| | BertForMaskedLM | torch.float16 | True | eager | 0.015 | 2.156 | 1.000 | 1.000 | | BertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.014 | 0.439 | 1.014 | 4.916 | | AlbertForMaskedLM | torch.float16 | True | eager | 0.017 | 2.840 | 1.000 | 1.000 | | AlbertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.009 | 0.250 | 1.892 | 11.361 | | GPT2LMHeadModel | torch.float16 | True | eager | 0.021 | 3.393 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.015 | 0.780 | 1.376 | 4.349 | | T5ForConditionalGeneration | torch.float16 | True | eager | 0.025 | 3.243 | 1.000 | 1.000 | | T5ForConditionalGeneration | torch.float16 | True | dynamo_fx2trt_fp16 | 0.027 | 0.465 | 0.929 | 6.977 | | DistilBertForMaskedLM | torch.float16 | True | eager | 0.009 | 1.237 | 1.000 | 1.000 | | DistilBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.009 | 0.371 | 0.904 | 3.334 | | RobertaForMaskedLM | torch.float16 | True | eager | 0.018 | 2.335 | 1.000 | 1.000 | | RobertaForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.019 | 0.618 | 0.958 | 3.781 | | GPT2LMHeadModel | torch.float16 | True | eager | 0.013 | 2.001 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float16 | True | dynamo_fx2trt_fp16 | 0.010 | 0.626 | 1.285 | 3.197 | | ElectraForMaskedLM | torch.float16 | True | eager | 0.015 | 2.156 | 1.000 | 1.000 | | ElectraForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.014 | 0.444 | 1.023 | 4.857 | | ConvBertForMaskedLM | torch.float16 | True | eager | 0.022 | 2.273 | 1.000 | 1.000 | | ConvBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 
0.024 | 0.439 | 0.937 | 5.180 | | MobileBertForMaskedLM | torch.float16 | True | eager | 0.042 | 1.685 | 1.000 | 1.000 | | MobileBertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.013 | 0.343 | 3.265 | 4.909 | | CamembertForMaskedLM | torch.float16 | True | eager | 0.016 | 2.171 | 1.000 | 1.000 | | CamembertForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.017 | 0.453 | 0.950 | 4.792 | | LayoutLMForMaskedLM | torch.float16 | True | eager | 0.015 | 2.163 | 1.000 | 1.000 | | LayoutLMForMaskedLM | torch.float16 | True | dynamo_fx2trt_fp16 | 0.015 | 0.445 | 1.010 | 4.859 | === Final results for fp32 === **Batch size = 1** | model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression | |:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:| | BertForMaskedLM | torch.float32 | True | eager | 0.010 | 0.905 | 1.000 | 1.000 | | BertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.004 | 0.468 | 2.371 | 1.936 | | AlbertForMaskedLM | torch.float32 | True | eager | 0.011 | 0.839 | 1.000 | 1.000 | | AlbertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.004 | 0.100 | 2.563 | 8.401 | | GPT2LMHeadModel | torch.float32 | True | eager | 0.011 | 1.237 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.005 | 0.616 | 2.163 | 2.009 | | T5ForConditionalGeneration | torch.float32 | True | eager | 0.015 | 0.648 | 1.000 | 1.000 | | T5ForConditionalGeneration | torch.float32 | True | dynamo_fx2trt_fp32 | 0.008 | 0.318 | 1.879 | 2.038 | | DistilBertForMaskedLM | torch.float32 | True | eager | 0.006 | 0.534 | 1.000 | 1.000 | | DistilBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.003 | 0.313 | 2.080 | 1.705 | | RobertaForMaskedLM | torch.float32 | True | eager | 0.011 | 0.998 | 1.000 | 1.000 | | RobertaForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.006 | 0.563 | 1.707 | 1.773 | | GPT2LMHeadModel | torch.float32 | True | eager | 0.006 | 0.879 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.003 | 0.429 | 1.987 | 2.050 | | ElectraForMaskedLM | torch.float32 | True | eager | 0.010 | 0.902 | 1.000 | 1.000 | | ElectraForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.004 | 0.471 | 2.478 | 1.914 | | ConvBertForMaskedLM | torch.float32 | True | eager | 0.019 | 0.926 | 1.000 | 1.000 | | ConvBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.017 | 0.462 | 1.088 | 2.004 | | MobileBertForMaskedLM | torch.float32 | True | eager | 0.039 | 0.593 | 1.000 | 1.000 | | MobileBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.007 | 0.257 | 5.388 | 2.305 | | CamembertForMaskedLM | torch.float32 | True | eager | 0.011 | 0.904 | 1.000 | 1.000 | | CamembertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.006 | 0.474 | 1.833 | 1.908 | | LayoutLMForMaskedLM | torch.float32 | True | eager | 0.011 | 0.918 | 1.000 | 1.000 | | LayoutLMForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.005 | 0.483 | 2.391 | 1.900 | **Batch size = 4** | model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression | |:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:| | BertForMaskedLM | torch.float32 | True | eager | 0.015 | 2.361 | 1.000 | 1.000 | | BertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.643 | 1.334 | 3.672 | | 
AlbertForMaskedLM | torch.float32 | True | eager | 0.015 | 2.906 | 1.000 | 1.000 | | AlbertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.009 | 0.271 | 1.740 | 10.734 | | GPT2LMHeadModel | torch.float32 | True | eager | 0.016 | 3.631 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.012 | 1.012 | 1.412 | 3.589 | | T5ForConditionalGeneration | torch.float32 | True | eager | 0.017 | 1.982 | 1.000 | 1.000 | | T5ForConditionalGeneration | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.579 | 1.472 | 3.424 | | DistilBertForMaskedLM | torch.float32 | True | eager | 0.007 | 1.364 | 1.000 | 1.000 | | DistilBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.007 | 0.498 | 1.109 | 2.741 | | RobertaForMaskedLM | torch.float32 | True | eager | 0.014 | 2.568 | 1.000 | 1.000 | | RobertaForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.014 | 0.850 | 1.028 | 3.022 | | GPT2LMHeadModel | torch.float32 | True | eager | 0.009 | 2.227 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.007 | 0.777 | 1.336 | 2.866 | | ElectraForMaskedLM | torch.float32 | True | eager | 0.013 | 2.360 | 1.000 | 1.000 | | ElectraForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.648 | 1.182 | 3.640 | | ConvBertForMaskedLM | torch.float32 | True | eager | 0.020 | 2.475 | 1.000 | 1.000 | | ConvBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.019 | 0.639 | 1.054 | 3.874 | | MobileBertForMaskedLM | torch.float32 | True | eager | 0.041 | 1.783 | 1.000 | 1.000 | | MobileBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.441 | 3.610 | 4.047 | | CamembertForMaskedLM | torch.float32 | True | eager | 0.013 | 2.375 | 1.000 | 1.000 | | CamembertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.013 | 0.658 | 1.010 | 3.610 | | LayoutLMForMaskedLM | torch.float32 | True | eager | 0.013 | 2.373 | 1.000 | 1.000 | | LayoutLMForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.011 | 0.655 | 1.182 | 3.620 | **Batch size = 8** | model | dtype | is_accurate | name | time (s) | mem (GB) | speedup | mem_compression | |:---------------------------|:--------------|:--------------|:-------------------|-----------:|-----------:|----------:|------------------:| | BertForMaskedLM | torch.float32 | True | eager | 0.024 | 4.309 | 1.000 | 1.000 | | BertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.020 | 0.874 | 1.184 | 4.928 | | AlbertForMaskedLM | torch.float32 | True | eager | 0.029 | 5.679 | 1.000 | 1.000 | | AlbertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.016 | 0.500 | 1.787 | 11.367 | | GPT2LMHeadModel | torch.float32 | True | eager | 0.031 | 6.769 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float32 | True | dynamo_fx2trt_fp32 | 0.022 | 1.548 | 1.436 | 4.374 | | T5ForConditionalGeneration | torch.float32 | True | eager | 0.020 | 3.725 | 1.000 | 1.000 | | T5ForConditionalGeneration | torch.float32 | True | dynamo_fx2trt_fp32 | 0.019 | 0.924 | 1.018 | 4.033 | | DistilBertForMaskedLM | torch.float32 | True | eager | 0.014 | 2.470 | 1.000 | 1.000 | | DistilBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.012 | 0.743 | 1.115 | 3.325 | | RobertaForMaskedLM | torch.float32 | True | eager | 0.026 | 4.670 | 1.000 | 1.000 | | RobertaForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.024 | 1.236 | 1.070 | 3.779 | | GPT2LMHeadModel | torch.float32 | True | eager | 0.018 | 3.993 | 1.000 | 1.000 | | GPT2LMHeadModel | torch.float32 | True | 
dynamo_fx2trt_fp32 | 0.013 | 1.243 | 1.356 | 3.211 | | ElectraForMaskedLM | torch.float32 | True | eager | 0.024 | 4.315 | 1.000 | 1.000 | | ElectraForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.020 | 0.892 | 1.210 | 4.837 | | ConvBertForMaskedLM | torch.float32 | True | eager | 0.030 | 4.525 | 1.000 | 1.000 | | ConvBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.031 | 0.882 | 0.991 | 5.133 | | MobileBertForMaskedLM | torch.float32 | True | eager | 0.044 | 3.371 | 1.000 | 1.000 | | MobileBertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.019 | 0.685 | 2.338 | 4.921 | | CamembertForMaskedLM | torch.float32 | True | eager | 0.024 | 4.338 | 1.000 | 1.000 | | CamembertForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.022 | 0.903 | 1.098 | 4.802 | | LayoutLMForMaskedLM | torch.float32 | True | eager | 0.024 | 4.322 | 1.000 | 1.000 | | LayoutLMForMaskedLM | torch.float32 | True | dynamo_fx2trt_fp32 | 0.020 | 0.888 | 1.185 | 4.869 |
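A minimal sketch of the cosine-similarity accuracy check described in point 3 above, comparing eager and lowered outputs; the helper name and the pass threshold are assumptions for illustration:

```python
import torch

def outputs_close(eager_out: torch.Tensor, trt_out: torch.Tensor, threshold: float = 0.99) -> bool:
    # Flatten both outputs and compare direction rather than absolute values,
    # which tolerates the small magnitude drift fp16 TensorRT can introduce.
    cos = torch.nn.CosineSimilarity(dim=0)
    score = cos(eager_out.flatten().float(), trt_out.flatten().float())
    return bool(score > threshold)
```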
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17724/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17724/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17724", "html_url": "https://github.com/huggingface/transformers/pull/17724", "diff_url": "https://github.com/huggingface/transformers/pull/17724.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17724.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17723/comments
https://api.github.com/repos/huggingface/transformers/issues/17723/events
https://github.com/huggingface/transformers/pull/17723
1,272,658,501
PR_kwDOCUB6oc45uvOi
17,723
Sort the model doc Toc Alphabetically
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do? This PR sorts the model doc ToC, which had some G models in the middle of the Os.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17723/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17723", "html_url": "https://github.com/huggingface/transformers/pull/17723", "diff_url": "https://github.com/huggingface/transformers/pull/17723.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17723.patch", "merged_at": 1655323917000 }
https://api.github.com/repos/huggingface/transformers/issues/17722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17722/comments
https://api.github.com/repos/huggingface/transformers/issues/17722/events
https://github.com/huggingface/transformers/pull/17722
1,272,586,861
PR_kwDOCUB6oc45ufnM
17,722
normalize keys_to_ignore
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
CONTRIBUTOR
null
as discussed at https://github.com/huggingface/transformers/issues/16719#issuecomment-1156599137 this PR normalizes `_keys_to_ignore_on*` so that dots are not backslash-escaped unless the entry is an actual regex pattern - this is purely for consistency and easier troubleshooting, since it is sometimes unclear whether `\` is needed when different modeling files use different styles. @sgugger
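A small illustration of why the escaping rarely matters for these keys: an unescaped `.` in a regex matches any character, including a literal dot, so the unescaped pattern still matches the intended key (the only downside is possible over-matching in pathological cases). The key name below is just an example:

```python
import re

key = "bert.embeddings.position_ids"
print(bool(re.search(r"bert\.embeddings\.position_ids", key)))  # True (escaped dots)
print(bool(re.search(r"bert.embeddings.position_ids", key)))    # True ('.' matches '.' too)
```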
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17722/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17722", "html_url": "https://github.com/huggingface/transformers/pull/17722", "diff_url": "https://github.com/huggingface/transformers/pull/17722.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17722.patch", "merged_at": 1655319551000 }
https://api.github.com/repos/huggingface/transformers/issues/17721
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17721/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17721/comments
https://api.github.com/repos/huggingface/transformers/issues/17721/events
https://github.com/huggingface/transformers/pull/17721
1,272,508,890
PR_kwDOCUB6oc45uOv2
17,721
[tests] workaround for relative dataset path
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "the just released `datasets==2.3.2` fixed the bug, so closing this PR." ]
1,655
1,655
1,655
CONTRIBUTOR
null
`datasets==2.3.1` introduced a bug where it fails to load a dataset via a path that contains `..`. This PR works around it by avoiding this situation in the tests. @sgugger, @ydshieh
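A minimal sketch of the kind of workaround applied here: resolve the path before handing it to `datasets` so it never contains `..` (the directory layout below is hypothetical, not the actual test fixture):

```python
import os
from datasets import load_dataset

# realpath collapses the ".." segment that datasets==2.3.1 chokes on
data_file = os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "fixtures", "sample.json"))
dataset = load_dataset("json", data_files=data_file)
```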
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17721/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17721/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17721", "html_url": "https://github.com/huggingface/transformers/pull/17721", "diff_url": "https://github.com/huggingface/transformers/pull/17721.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17721.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17720
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17720/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17720/comments
https://api.github.com/repos/huggingface/transformers/issues/17720/events
https://github.com/huggingface/transformers/pull/17720
1,272,494,136
PR_kwDOCUB6oc45uLmc
17,720
CLI: Add flag to push TF weights directly into main
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes yes yes! Maybe users could just set a flag, like \"auto-convert my weights\" and then we could have a machine that tracks their repos, looks for weights files that have changed and auto-updates versions for the other frameworks?", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
MEMBER
null
# What does this PR do? Adds the `--push` flag to the `pt-to-tf` CLI, to enable pushing straight to main (assuming the user has the right permissions). Why am I adding this flag? A few users mentioned that they would be interested in having TF weights silently pushed straight into their repos. With this flag, I can start building a local midnight cronjob to automate the conversions (starting with direct pushes for users who requested it, then opening PRs for users interested in PRs, then one PR per user if not whitelisted), which can eventually be moved to a separate machine to continuously feed the TF ecosystem 🔥
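A hypothetical invocation once the flag lands; only `--push` is confirmed by this PR, and the model-name argument and placeholder repo id are assumptions:

```shell
transformers-cli pt-to-tf --model-name <repo_id> --push
```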
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17720/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17720", "html_url": "https://github.com/huggingface/transformers/pull/17720", "diff_url": "https://github.com/huggingface/transformers/pull/17720.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17720.patch", "merged_at": 1655317551000 }
https://api.github.com/repos/huggingface/transformers/issues/17719
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17719/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17719/comments
https://api.github.com/repos/huggingface/transformers/issues/17719/events
https://github.com/huggingface/transformers/pull/17719
1,272,487,464
PR_kwDOCUB6oc45uKLC
17,719
[example] image classification example requires newer datasets version
{ "login": "jeffra", "id": 645595, "node_id": "MDQ6VXNlcjY0NTU5NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/645595?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeffra", "html_url": "https://github.com/jeffra", "followers_url": "https://api.github.com/users/jeffra/followers", "following_url": "https://api.github.com/users/jeffra/following{/other_user}", "gists_url": "https://api.github.com/users/jeffra/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeffra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeffra/subscriptions", "organizations_url": "https://api.github.com/users/jeffra/orgs", "repos_url": "https://api.github.com/users/jeffra/repos", "events_url": "https://api.github.com/users/jeffra/events{/privacy}", "received_events_url": "https://api.github.com/users/jeffra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? The DeepSpeed + Transformers integration tests are showing issues with this example. We install `datasets >= 1.8.0` in the requirements.txt for this example. However, running the example requires `datasets.Image()`, which wasn't introduced in datasets until 1.17.0 (as far as I can tell). This PR bumps the min version up to one that works with the example. Example output showing the error: https://github.com/microsoft/DeepSpeed/runs/6869668978?check_suite_focus=true We are also seeing issues with some of the examples with the latest datasets release, but will file a different issue for that. /cc @mrwyattii ## Who can review? @stas00, @sgugger Anyone in the community is free to review the PR once the tests have passed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17719/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17719", "html_url": "https://github.com/huggingface/transformers/pull/17719", "diff_url": "https://github.com/huggingface/transformers/pull/17719.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17719.patch", "merged_at": 1655315502000 }
https://api.github.com/repos/huggingface/transformers/issues/17718
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17718/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17718/comments
https://api.github.com/repos/huggingface/transformers/issues/17718/events
https://github.com/huggingface/transformers/issues/17718
1,272,459,107
I_kwDOCUB6oc5L2C9j
17,718
`max_length` and `stopping_criteria` in generate()
{ "login": "nitaytech", "id": 56558412, "node_id": "MDQ6VXNlcjU2NTU4NDEy", "avatar_url": "https://avatars.githubusercontent.com/u/56558412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nitaytech", "html_url": "https://github.com/nitaytech", "followers_url": "https://api.github.com/users/nitaytech/followers", "following_url": "https://api.github.com/users/nitaytech/following{/other_user}", "gists_url": "https://api.github.com/users/nitaytech/gists{/gist_id}", "starred_url": "https://api.github.com/users/nitaytech/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nitaytech/subscriptions", "organizations_url": "https://api.github.com/users/nitaytech/orgs", "repos_url": "https://api.github.com/users/nitaytech/repos", "events_url": "https://api.github.com/users/nitaytech/events{/privacy}", "received_events_url": "https://api.github.com/users/nitaytech/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[ { "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }, { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }, { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": 
"https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hey @nitaytech 👋 The behavior surrounding `max_length` can't be changed, as it would not be backwards compatible. But we also don't like its behavior for the reasons you described, hence the plans to deprecate (and the warning).\r\n\r\nAs per [this recent discussion](https://github.com/huggingface/transformers/issues/17414#issuecomment-1148836312), we have decided to give preference to the `max_new_tokens` argument, as it is clearer for all types of models. We are working on updating warnings and documentations to make it clear it is the correct way to control the maximum length of the generated text :)\r\n\r\nMeanwhile, can you confirm that `max_new_tokens` works properly in your case? ", "@nitaytech [this comment](https://github.com/huggingface/transformers/pull/17196#issuecomment-1159147143) might also be relevant to your pain points.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "related: #18018", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "from transformers import Seq2SeqTrainingArguments\r\n\r\ntraining_args = Seq2SeqTrainingArguments(\r\n output_dir=\"./kirah/fcv_s2t\", # change to a repo name of your choice\r\n per_device_train_batch_size=16,\r\n gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size\r\n learning_rate=1e-5,\r\n warmup_steps=500,\r\n max_steps=7,\r\n gradient_checkpointing=True,\r\n fp16=False,\r\n evaluation_strategy=\"steps\",\r\n per_device_eval_batch_size=8,\r\n predict_with_generate=True,\r\n max_new_tokens=1000, # <======== Before max_length=1000,\r\n save_steps=1000,\r\n eval_steps=1000,\r\n logging_steps=25,\r\n report_to=[\"tensorboard\"],\r\n load_best_model_at_end=True,\r\n metric_for_best_model=\"wer\",\r\n greater_is_better=False,\r\n push_to_hub=True,\r\n)\r\n__init__() got an unexpected keyword argument 'max_new_tokens'\r\n", "Hey @jhoanmartinez -- Seq2SeqTrainingArguments doesn't have `max_new_tokens` yet" ]
1,655
1,671
1,661
NONE
null
### System Info ```shell - `transformers` version: 4.19.2 - Platform: Linux-5.13.0-44-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @patrickvonplaten Hi, I don't get the logic behind `max_length` and `stopping_criteria` in the `generate(self, ...)` function for encoder-decoder models. When you pass `max_length` you get the deprecation warning, which is ok - however, it recommends using the `StoppingCriteriaList` object with `MaxLengthCriteria`. Now the real problem happens: the `generate()` function uses the following code: ``` # 5. Prepare `max_length` depending on other stopping criteria # if `max_new_tokens` is passed, but not `max_length` -> set `max_length = max_new_tokens` if max_length is None and max_new_tokens is not None: max_length = max_new_tokens + input_ids_seq_length elif max_length is not None and max_new_tokens is not None: # Both are set, this is odd, raise a warning warnings.warn( "Both `max_length` and `max_new_tokens` have been set " f"but they serve the same purpose. `max_length` {max_length} " f"will take priority over `max_new_tokens` {max_new_tokens}.", UserWarning, ) # default to config if still None max_length = max_length if max_length is not None else self.config.max_length ``` As you can see, `max_length` is going to have a value no matter what (even if you pass `max_length=None`, the value is set to `self.config.max_length`, which is equal to 20 for T5, and this is extremely bad for users who are not aware of it... in older versions this wasn't the behavior of the `generate()` function.) Now, if you pass `stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(100)])` you get an Exception (because max_length is not None and the default StoppingCriteriaList is initialized with MaxLengthCriteria(max_length=self.config.max_length)). If you pass `max_length=100` you get the warning. If you don't pass `max_length` you still get the warning (because `max_length = max_length if max_length is not None else self.config.max_length`). So how exactly is someone supposed to set the max length? Should I change self.config.max_length? That is not good practice... Of course, I can pass `max_length` to `generate()`; however, the warning says not to do that... ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction call model.generate(max_length=...) and model.generate(stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(100)]), ...) ### Expected behavior ```shell change the warning, change the logic behind the default StoppingCriteriaList change how you infer max_length make sure users are aware of max_length=model.config.max_length when max_length is None ```
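As the maintainers note in the comments, `max_new_tokens` is the preferred way to control generation length. A minimal sketch of that option, sidestepping the `config.max_length == 20` default described above; the model checkpoint and input text are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello world", return_tensors="pt")
# Caps only the newly generated tokens, independent of input length,
# so the default config.max_length never silently truncates the output.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```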
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17718/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17717
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17717/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17717/comments
https://api.github.com/repos/huggingface/transformers/issues/17717/events
https://github.com/huggingface/transformers/pull/17717
1,272,432,678
PR_kwDOCUB6oc45t-Wo
17,717
Revert "Change push CI to run on workflow_run event"
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "merged to avoid further test failures" ]
1,655
1,655
1,655
COLLABORATOR
null
Reverts huggingface/transformers#17692

Really sorry, but `notification_service.py` has an error:

```
Traceback (most recent call last):
  File "utils/notification_service.py", line 766, in <module>
    ci_author = ci_details["author"]["login"]
KeyError: 'author'
```

The GH event is no longer coupled with a commit, so we lose some information about the commit/author. I need to change a few more things. (On my own test repo there is no `notification_service.py`, and this `workflow_run` change can only be verified once merged to `main`.)
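A hedged sketch of a defensive lookup for the `KeyError` quoted above; the fallback value is an assumption, not what the script actually does:

```python
# ci_details may lack the "author" key when the triggering event carries no commit.
ci_author = (ci_details.get("author") or {}).get("login", "unknown")
```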
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17717/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17717", "html_url": "https://github.com/huggingface/transformers/pull/17717", "diff_url": "https://github.com/huggingface/transformers/pull/17717.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17717.patch", "merged_at": 1655311363000 }
https://api.github.com/repos/huggingface/transformers/issues/17716
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17716/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17716/comments
https://api.github.com/repos/huggingface/transformers/issues/17716/events
https://github.com/huggingface/transformers/pull/17716
1,272,408,717
PR_kwDOCUB6oc45t5L4
17,716
Prepare transformers for v0.8.0 huggingface-hub release
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
MEMBER
null
Updates the staging endpoint to use `hub-ci` instead of `moon-staging`. This should be merged only once v0.8.0 is released.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17716/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17716", "html_url": "https://github.com/huggingface/transformers/pull/17716", "diff_url": "https://github.com/huggingface/transformers/pull/17716.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17716.patch", "merged_at": 1655826678000 }
https://api.github.com/repos/huggingface/transformers/issues/17715
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17715/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17715/comments
https://api.github.com/repos/huggingface/transformers/issues/17715/events
https://github.com/huggingface/transformers/pull/17715
1,272,387,505
PR_kwDOCUB6oc45t0l3
17,715
Make datasets<=2.2.2 for a quick fix for test failures
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "A full error message is\r\n\r\n```\r\n2022-06-15T03:19:39.4149310Z =================================== FAILURES ===================================\r\n2022-06-15T03:19:39.4149596Z _______________________ TestTrainerExt.test_run_seq2seq ________________________\r\n2022-06-15T03:19:39.4149773Z \r\n2022-06-15T03:19:39.4150008Z self = <test_trainer_ext.TestTrainerExt testMethod=test_run_seq2seq>\r\n2022-06-15T03:19:39.4150218Z \r\n2022-06-15T03:19:39.4150293Z @slow\r\n2022-06-15T03:19:39.4150745Z def test_run_seq2seq(self):\r\n2022-06-15T03:19:39.4151132Z > output_dir = self.run_trainer(\r\n2022-06-15T03:19:39.4151468Z eval_steps=2,\r\n2022-06-15T03:19:39.4151820Z max_len=128,\r\n2022-06-15T03:19:39.4152199Z model_name=MARIAN_MODEL,\r\n2022-06-15T03:19:39.4153137Z learning_rate=3e-4,\r\n2022-06-15T03:19:39.4153380Z num_train_epochs=10,\r\n2022-06-15T03:19:39.4154587Z distributed=False,\r\n2022-06-15T03:19:39.4155199Z )\r\n2022-06-15T03:19:39.4155329Z \r\n2022-06-15T03:19:39.4155456Z tests/extended/test_trainer_ext.py:180: \r\n2022-06-15T03:19:39.4155723Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n2022-06-15T03:19:39.4156015Z tests/extended/test_trainer_ext.py:375: in run_trainer\r\n2022-06-15T03:19:39.4156267Z main()\r\n2022-06-15T03:19:39.4156535Z examples/pytorch/translation/run_translation.py:346: in main\r\n2022-06-15T03:19:39.4156923Z raw_datasets = load_dataset(\r\n2022-06-15T03:19:39.4157383Z /usr/local/lib/python3.8/dist-packages/datasets/load.py:1656: in load_dataset\r\n2022-06-15T03:19:39.4157724Z builder_instance = load_dataset_builder(\r\n2022-06-15T03:19:39.4158171Z /usr/local/lib/python3.8/dist-packages/datasets/load.py:1439: in load_dataset_builder\r\n2022-06-15T03:19:39.4158514Z dataset_module = dataset_module_factory(\r\n2022-06-15T03:19:39.4159311Z /usr/local/lib/python3.8/dist-packages/datasets/load.py:1097: in dataset_module_factory\r\n2022-06-15T03:19:39.4159770Z return PackagedDatasetModuleFactory(\r\n2022-06-15T03:19:39.4160536Z /usr/local/lib/python3.8/dist-packages/datasets/load.py:743: in get_module\r\n2022-06-15T03:19:39.4161066Z data_files = DataFilesDict.from_local_or_remote(\r\n2022-06-15T03:19:39.4161777Z /usr/local/lib/python3.8/dist-packages/datasets/data_files.py:588: in from_local_or_remote\r\n2022-06-15T03:19:39.4162298Z DataFilesList.from_local_or_remote(\r\n2022-06-15T03:19:39.4163081Z /usr/local/lib/python3.8/dist-packages/datasets/data_files.py:556: in from_local_or_remote\r\n2022-06-15T03:19:39.4163523Z data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n2022-06-15T03:19:39.4164078Z /usr/local/lib/python3.8/dist-packages/datasets/data_files.py:194: in resolve_patterns_locally_or_by_urls\r\n2022-06-15T03:19:39.4164508Z for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n2022-06-15T03:19:39.4164839Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n2022-06-15T03:19:39.4164997Z \r\n2022-06-15T03:19:39.4165147Z base_path = '/transformers'\r\n2022-06-15T03:19:39.4165579Z pattern = '/transformers/tests/extended/../fixtures/tests_samples/wmt_en_ro/train.json'\r\n2022-06-15T03:19:39.4165899Z allowed_extensions = None\r\n2022-06-15T03:19:39.4166043Z \r\n2022-06-15T03:19:39.4166160Z def _resolve_single_pattern_locally(\r\n2022-06-15T03:19:39.4166478Z base_path: str, pattern: str, allowed_extensions: Optional[List[str]] = None\r\n2022-06-15T03:19:39.4166811Z ) -> List[Path]:\r\n2022-06-15T03:19:39.4167017Z 
\"\"\"\r\n2022-06-15T03:19:39.4167301Z Return the absolute paths to all the files that match the given patterns.\r\n2022-06-15T03:19:39.4167622Z It also supports absolute paths in patterns.\r\n2022-06-15T03:19:39.4167923Z If an URL is passed, it is returned as is.\r\n2022-06-15T03:19:39.4168165Z \"\"\"\r\n2022-06-15T03:19:39.4168412Z pattern = os.path.join(base_path, pattern)\r\n2022-06-15T03:19:39.4168678Z data_files_ignore = FILES_TO_IGNORE\r\n2022-06-15T03:19:39.4168931Z fs = LocalFileSystem()\r\n2022-06-15T03:19:39.4169275Z glob_iter = [PurePath(filepath) for filepath in fs.glob(pattern) if fs.isfile(filepath)]\r\n2022-06-15T03:19:39.4169591Z matched_paths = [\r\n2022-06-15T03:19:39.4169981Z Path(filepath).resolve()\r\n2022-06-15T03:19:39.4170218Z for filepath in glob_iter\r\n2022-06-15T03:19:39.4170568Z if filepath.name not in data_files_ignore and not any(part.startswith((\".\", \"__\")) for part in filepath.parts)\r\n2022-06-15T03:19:39.4170869Z ]\r\n2022-06-15T03:19:39.4171068Z if allowed_extensions is not None:\r\n2022-06-15T03:19:39.4171286Z out = [\r\n2022-06-15T03:19:39.4171477Z filepath\r\n2022-06-15T03:19:39.4171706Z for filepath in matched_paths\r\n2022-06-15T03:19:39.4172173Z if any(suffix[1:] in allowed_extensions for suffix in filepath.suffixes)\r\n2022-06-15T03:19:39.4172436Z ]\r\n2022-06-15T03:19:39.4172649Z if len(out) < len(matched_paths):\r\n2022-06-15T03:19:39.4173032Z invalid_matched_files = list(set(matched_paths) - set(out))\r\n2022-06-15T03:19:39.4173296Z logger.info(\r\n2022-06-15T03:19:39.4173809Z f\"Some files matched the pattern '{pattern}' at {Path(base_path).resolve()} but don't have valid data file extensions: {invalid_matched_files}\"\r\n2022-06-15T03:19:39.4174147Z )\r\n2022-06-15T03:19:39.4174332Z else:\r\n2022-06-15T03:19:39.4174527Z out = matched_paths\r\n2022-06-15T03:19:39.4174772Z if not out and not contains_wildcards(pattern):\r\n2022-06-15T03:19:39.4175185Z error_msg = f\"Unable to find '{pattern}' at {Path(base_path).resolve()}\"\r\n2022-06-15T03:19:39.4175489Z if allowed_extensions is not None:\r\n2022-06-15T03:19:39.4175807Z error_msg += f\" with any supported extension {list(allowed_extensions)}\"\r\n2022-06-15T03:19:39.4176181Z > raise FileNotFoundError(error_msg)\r\n2022-06-15T03:19:39.4176703Z E FileNotFoundError: Unable to find '/transformers/tests/extended/../fixtures/tests_samples/wmt_en_ro/train.json' at /transformers\r\n2022-06-15T03:19:39.4176964Z \r\n2022-06-15T03:19:39.4177238Z /usr/local/lib/python3.8/dist-packages/datasets/data_files.py:144: FileNotFoundError\r\n```", "This shouldn't be merged before the release branch is cut, to avoid the pin being in the release.", "@lhoestq \r\n\r\nAre you already aware of this issue (regarding `load_dataset`)? Otherwise I can try to make a simple reproducible example. Thank you :-)", "_The documentation is not available anymore as the PR was closed or merged._", "Do you think it is worth changing Dockerfile (for testing) to install datasets 2.2.2 for now? And discard this PR maybe ?", "superseded by https://github.com/huggingface/transformers/pull/17721\r\n\r\nand https://github.com/huggingface/datasets/pull/4505" ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do?

Currently, we have a lot of test failures on the scheduled CI with:

```
FileNotFoundError: Unable to find '/transformers/tests/extended/../fixtures/tests_samples/wmt_en_ro/train.json' at /transformers
```

These are caused by the release of `datasets 2.3.x`. This PR temporarily pins `datasets<=2.2.2` to avoid the test failures.
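A hedged sketch of what such a pin typically looks like; per the discussion above it could live in `setup.py` or the testing Dockerfile, and the exact location is an assumption since the diff is not shown here:

```python
# setup.py (hypothetical location): upper-bound the dependency until the 2.3.x
# regression is resolved, then lift the pin.
_deps = [
    "datasets<=2.2.2",
]
```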
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17715/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17715", "html_url": "https://github.com/huggingface/transformers/pull/17715", "diff_url": "https://github.com/huggingface/transformers/pull/17715.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17715.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17714
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17714/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17714/comments
https://api.github.com/repos/huggingface/transformers/issues/17714/events
https://github.com/huggingface/transformers/issues/17714
1,272,244,643
I_kwDOCUB6oc5L1Omj
17,714
SegFormer feature extractor `do_normalize=False`
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null }, { "id": 4235521865, "node_id": "LA_kwDOCUB6oc78dO9J", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20extractors", "name": "Feature extractors", "color": "c2e0c6", "default": false, "description": "" } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This hasn't been fixed yet ", "cc @alaradirik @amyeroberts as well :)", "Hi @johnnv1,\r\n\r\nthanks for reporting, we're aware of this issue with feature extractors (see #15055 for a detailed description) and are planning to take it into account when updating the preprocessing pipeline for our vision models.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,662
1,662
CONTRIBUTOR
null
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```

### Who can help?
@NielsRogge

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction
```python
# [...]
feature_extractor = SegformerFeatureExtractor.from_pretrained('nvidia/mit-b1', do_resize=False, do_normalize=False)
# [...]
# call the extractor
# minimal example
img = Image.fromarray(np.random.randint(0, 255, (100, 100, 3), dtype=np.uint8))
msk = Image.fromarray(np.random.randint(1, 10, (100, 100), dtype=np.uint8))
feature_extractor(images=img, segmentation_maps=msk, return_tensors="pt")
```

This will raise:

```pythonoutput
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py in convert_to_tensors(self, tensor_type)
    167                 if not is_tensor(value):
--> 168                     tensor = as_tensor(value)
    169

4 frames
/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py in as_tensor(value)
    149                     value = np.array(value)
--> 150                 return torch.tensor(value)
    151

RuntimeError: Could not infer dtype of Image

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-70-9cd4a6ca6f4b> in <module>()
      2 msk = Image.fromarray(np.array((100,100,1), dtype=np.uint8))
      3
----> 4 feature_extractor(images=img, segmentation_maps=msk, return_tensors="pt")

/usr/local/lib/python3.7/dist-packages/transformers/models/segformer/feature_extraction_segformer.py in __call__(self, images, segmentation_maps, return_tensors, **kwargs)
    208             data["labels"] = labels
    209
--> 210         encoded_inputs = BatchFeature(data=data, tensor_type=return_tensors)
    211
    212         return encoded_inputs

/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py in __init__(self, data, tensor_type)
     77     def __init__(self, data: Optional[Dict[str, Any]] = None, tensor_type: Union[None, str, TensorType] = None):
     78         super().__init__(data)
---> 79         self.convert_to_tensors(tensor_type=tensor_type)
     80
     81     def __getitem__(self, item: str) -> Union[Any]:

/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py in convert_to_tensors(self, tensor_type)
    173                     raise ValueError("Unable to create tensor returning overflowing values of different lengths. ")
    174                 raise ValueError(
--> 175                     "Unable to create tensor, you should probably activate padding "
    176                     "with 'padding=True' to have batched tensors with the same length."
    177                 )

ValueError: Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length.
```

### Expected behavior
I was expecting no error to occur in the conversion to tensors when I don't perform normalization.

This example uses the SegFormer model, but I think DETR has the same issue (#16715). A workaround is to use:

```python
feature_extractor = SegformerFeatureExtractor.from_pretrained('nvidia/mit-b1', do_resize=False, do_normalize=True,
                                                              image_mean=[0., 0., 0.], image_std=[1., 1., 1.])
```

I don't know whether `convert_to_tensors` is meant to work with `PIL.Image`; maybe the extractor just needs to add this conversion as a default step:

```python
images = [self.to_numpy_array(image) if isinstance(image, Image.Image) else image for image in images]
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17714/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17713
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17713/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17713/comments
https://api.github.com/repos/huggingface/transformers/issues/17713/events
https://github.com/huggingface/transformers/pull/17713
1,272,228,378
PR_kwDOCUB6oc45tSFI
17,713
TF Sharded
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Okay so the tfopt_for_causal_lm/tfopt_model \r\nprefix from the tfopt_for_causal_lm/model/decoder/embed_positions/weight:0 in the index json comes from the actual name of the layer (so tf side). This also creates the hack that we sometime need when some layer is shared : for OPT we have the following : 'decoder.embed_tokens/model.decoder.embed_tokens/weight:0' which thus becomes model.decoder.embed_tokens/weight:0 . Most interesting part is that the ‘decoder.embed_tokens’ comes from https://github.com/ArthurZucker/transformers/blob/e950ff48a91840e30966abaf86bdb02dc16fcdab/src/transformers/models/opt/modeling_tf_opt.py#L499-L511 (the load weight prefix hack using load_weight_prefix) I am sure that there is something to do about that so I will detail that and dig a bit further", "Looks very nice to me!\r\n\r\nOnly did a very high-level review. Defering to @gante and @sgugger here :-)" ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do?

Introduces sharding of TF models, following the PyTorch implementation. A simple working example:

```python
from transformers import TFOPTModel

save_directory = "opt-350m"
model = TFOPTModel.from_pretrained("facebook/opt-350m")
model.save_pretrained(save_directory, max_shard_size="1GB")
tf_model = TFOPTModel.from_pretrained(save_directory)
```
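A hedged way to sanity-check the result of the snippet above; the exact TF shard file names are an assumption, mirroring the PyTorch convention (`pytorch_model-00001-of-000NN.bin` plus an index JSON):

```python
import os

# List what save_pretrained wrote; with max_shard_size="1GB" the OPT-350m weights
# should be split into multiple weight files plus one index file.
print(sorted(os.listdir(save_directory)))
# e.g. ['config.json', 'tf_model-00001-of-00002.h5', 'tf_model-00002-of-00002.h5', 'tf_model.h5.index.json']
```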
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17713/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17713", "html_url": "https://github.com/huggingface/transformers/pull/17713", "diff_url": "https://github.com/huggingface/transformers/pull/17713.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17713.patch", "merged_at": 1655827269000 }
https://api.github.com/repos/huggingface/transformers/issues/17712
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17712/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17712/comments
https://api.github.com/repos/huggingface/transformers/issues/17712/events
https://github.com/huggingface/transformers/pull/17712
1,272,200,717
PR_kwDOCUB6oc45tMDf
17,712
Fix Automatic Download of Pretrained Weights in DETR
{ "login": "AnugunjNaman", "id": 42839570, "node_id": "MDQ6VXNlcjQyODM5NTcw", "avatar_url": "https://avatars.githubusercontent.com/u/42839570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnugunjNaman", "html_url": "https://github.com/AnugunjNaman", "followers_url": "https://api.github.com/users/AnugunjNaman/followers", "following_url": "https://api.github.com/users/AnugunjNaman/following{/other_user}", "gists_url": "https://api.github.com/users/AnugunjNaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnugunjNaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnugunjNaman/subscriptions", "organizations_url": "https://api.github.com/users/AnugunjNaman/orgs", "repos_url": "https://api.github.com/users/AnugunjNaman/repos", "events_url": "https://api.github.com/users/AnugunjNaman/events{/privacy}", "received_events_url": "https://api.github.com/users/AnugunjNaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@NielsRogge ", "@NielsRogge Any Update here?" ]
1,655
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? Fixes #15764
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17712/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17712", "html_url": "https://github.com/huggingface/transformers/pull/17712", "diff_url": "https://github.com/huggingface/transformers/pull/17712.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17712.patch", "merged_at": 1655822736000 }
https://api.github.com/repos/huggingface/transformers/issues/17711
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17711/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17711/comments
https://api.github.com/repos/huggingface/transformers/issues/17711/events
https://github.com/huggingface/transformers/pull/17711
1,272,093,069
PR_kwDOCUB6oc45s0pC
17,711
Make attention_mask axes dynamic when exporting onnx
{ "login": "unbuilt", "id": 1238408, "node_id": "MDQ6VXNlcjEyMzg0MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1238408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unbuilt", "html_url": "https://github.com/unbuilt", "followers_url": "https://api.github.com/users/unbuilt/followers", "following_url": "https://api.github.com/users/unbuilt/following{/other_user}", "gists_url": "https://api.github.com/users/unbuilt/gists{/gist_id}", "starred_url": "https://api.github.com/users/unbuilt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unbuilt/subscriptions", "organizations_url": "https://api.github.com/users/unbuilt/orgs", "repos_url": "https://api.github.com/users/unbuilt/repos", "events_url": "https://api.github.com/users/unbuilt/events{/privacy}", "received_events_url": "https://api.github.com/users/unbuilt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17711). All of your documentation changes will be reflected on that endpoint.", "cc @michaelbenayoun @lewtun ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey @unbuilt just checking if you were able to test that the script runs correctly with your change and the default settings? If yes, this looks good to merge IMO :)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,665
1,665
NONE
null
# What does this PR do?

Makes the `attention_mask` axes dynamic when exporting to ONNX.

## Who can review?
[@fatcat-z](https://github.com/fatcat-z)
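A hedged sketch of what "dynamic `attention_mask` axes" means for `torch.onnx.export`; the checkpoint, dummy input, and output names below are placeholders, not the script's actual values:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint for illustration.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
model.eval()

enc = tokenizer("a dummy input", return_tensors="pt")

# dynamic_axes marks batch and sequence dimensions as symbolic, so the exported
# graph accepts attention masks of any shape instead of only the tracing shape.
torch.onnx.export(
    model,
    (enc["input_ids"], enc["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},  # the axes this PR makes dynamic
        "logits": {0: "batch"},
    },
    opset_version=13,
)
```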
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17711/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17711", "html_url": "https://github.com/huggingface/transformers/pull/17711", "diff_url": "https://github.com/huggingface/transformers/pull/17711.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17711.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17710
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17710/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17710/comments
https://api.github.com/repos/huggingface/transformers/issues/17710/events
https://github.com/huggingface/transformers/pull/17710
1,272,014,911
PR_kwDOCUB6oc45sjjF
17,710
[ViTMAE] Fix docstrings and variable names
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
CONTRIBUTOR
null
# What does this PR do?

Fixes #17473
Fixes #17665

This PR improves the docstrings and variable names of the `patchify`, `unpatchify` and `forward_loss` methods of ViTMAE. This way, the number of channels is no longer hardcoded either.

cc @sayakpaul
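A hedged, pure-reshape sketch of the `patchify` operation the PR documents, with the channel count read from the input rather than hardcoded; this mirrors the idea, not the model's exact implementation:

```python
import torch

def patchify(pixel_values: torch.Tensor, patch_size: int) -> torch.Tensor:
    # (batch, channels, height, width) -> (batch, num_patches, patch_size**2 * channels)
    b, c, h, w = pixel_values.shape  # channels taken from the tensor, not fixed to 3
    ph, pw = h // patch_size, w // patch_size
    x = pixel_values.reshape(b, c, ph, patch_size, pw, patch_size)
    x = x.permute(0, 2, 4, 3, 5, 1)  # (b, ph, pw, p, p, c)
    return x.reshape(b, ph * pw, patch_size * patch_size * c)

patches = patchify(torch.randn(2, 3, 224, 224), patch_size=16)
print(patches.shape)  # torch.Size([2, 196, 768])
```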
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17710/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17710", "html_url": "https://github.com/huggingface/transformers/pull/17710", "diff_url": "https://github.com/huggingface/transformers/pull/17710.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17710.patch", "merged_at": 1655819761000 }
https://api.github.com/repos/huggingface/transformers/issues/17709
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17709/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17709/comments
https://api.github.com/repos/huggingface/transformers/issues/17709/events
https://github.com/huggingface/transformers/pull/17709
1,271,845,071
PR_kwDOCUB6oc45r_N-
17,709
[Wav2Vec2Conformer] Official release
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Wait, how was the repo-consistency check passing without this? It should have given a big red cross.", "> Wait, how was the repo-consistency check passing without this? It should have given a big red cross.\r\n\r\nWhat is the problem here exactly? ", "There is a check in the CI that shouldn't let models be present without being in the README, I was wondering why it was not failing but found the reason. `Wav2Vec2-Conformer` is whitelisted for this test. Could you remove it from `MODELS_NOT_IN_README` in `utils/check_copies.py` in this PR?", "I see - will do! " ]
1,655
1,655
1,655
MEMBER
null
# What does this PR do?

Add link to paper and improve readme.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17709/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17709", "html_url": "https://github.com/huggingface/transformers/pull/17709", "diff_url": "https://github.com/huggingface/transformers/pull/17709.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17709.patch", "merged_at": 1655310855000 }
https://api.github.com/repos/huggingface/transformers/issues/17708
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17708/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17708/comments
https://api.github.com/repos/huggingface/transformers/issues/17708/events
https://github.com/huggingface/transformers/pull/17708
1,271,732,626
PR_kwDOCUB6oc45rnkw
17,708
Fix duplicate code at T5Model
{ "login": "lkm2835", "id": 30465912, "node_id": "MDQ6VXNlcjMwNDY1OTEy", "avatar_url": "https://avatars.githubusercontent.com/u/30465912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lkm2835", "html_url": "https://github.com/lkm2835", "followers_url": "https://api.github.com/users/lkm2835/followers", "following_url": "https://api.github.com/users/lkm2835/following{/other_user}", "gists_url": "https://api.github.com/users/lkm2835/gists{/gist_id}", "starred_url": "https://api.github.com/users/lkm2835/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lkm2835/subscriptions", "organizations_url": "https://api.github.com/users/lkm2835/orgs", "repos_url": "https://api.github.com/users/lkm2835/repos", "events_url": "https://api.github.com/users/lkm2835/events{/privacy}", "received_events_url": "https://api.github.com/users/lkm2835/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hey @lkm2835,\r\n\r\ncould you give a bit more feedback on why this is not necessary? I don't exactly see why this is necessary at the moment.", "Oh, sorry @patrickvonplaten \r\n\r\n1411-1412 and 1414-1415 are same code.\r\n```\r\n1411 if self.model_parallel:\r\n1412 torch.cuda.set_device(self.decoder.first_device)\r\n1413 # Set device for model parallelism\r\n1414 if self.model_parallel:\r\n1415 torch.cuda.set_device(self.decoder.first_device)\r\n1416 hidden_states = hidden_states.to(self.decoder.first_device)\r\n ...\r\n``` \r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L1411\r\n", "In T5ForConditionalGeneration,\r\n\r\n```\r\n1619 if self.model_parallel:\r\n1620 torch.cuda.set_device(self.decoder.first_device)\r\n\r\n1622 if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None:\r\n1623 # get decoder inputs from shifting lm labels to the right\r\n1624 decoder_input_ids = self._shift_right(labels)\r\n\r\n1626 # Set device for model parallelism\r\n1627 if self.model_parallel:\r\n1628 torch.cuda.set_device(self.decoder.first_device)\r\n1629 hidden_states = hidden_states.to(self.decoder.first_device)\r\n```\r\n\r\n1619-1620 and 1627-1628 are same code. But If 1622-1624 need `set_device`, 1619-1620 is necessary.\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L1619" ]
1,655
1,655
1,655
CONTRIBUTOR
null
# What does this PR do?

Unlike `T5ForConditionalGeneration`, the duplicated device-setting block doesn't seem to be necessary in `T5Model`.

@patrickvonplaten
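A minimal sketch of the deduplication, based on the `modeling_t5.py` snippet quoted in the comments above:

```python
# Before (T5Model.forward): the same guarded call appears twice in a row.
if self.model_parallel:
    torch.cuda.set_device(self.decoder.first_device)
# Set device for model parallelism
if self.model_parallel:
    torch.cuda.set_device(self.decoder.first_device)
    hidden_states = hidden_states.to(self.decoder.first_device)

# After: one block suffices in T5Model, since no code runs between the two checks
# (unlike T5ForConditionalGeneration, where _shift_right sits in between).
if self.model_parallel:
    torch.cuda.set_device(self.decoder.first_device)
    hidden_states = hidden_states.to(self.decoder.first_device)
```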
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17708/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17708", "html_url": "https://github.com/huggingface/transformers/pull/17708", "diff_url": "https://github.com/huggingface/transformers/pull/17708.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17708.patch", "merged_at": 1655839841000 }
https://api.github.com/repos/huggingface/transformers/issues/17707
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17707/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17707/comments
https://api.github.com/repos/huggingface/transformers/issues/17707/events
https://github.com/huggingface/transformers/issues/17707
1,271,732,445
I_kwDOCUB6oc5LzRjd
17,707
DensePhrase: StopIteration: Caught StopIteration in replica 0 on device 0.
{ "login": "xixiaoyao", "id": 24541791, "node_id": "MDQ6VXNlcjI0NTQxNzkx", "avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xixiaoyao", "html_url": "https://github.com/xixiaoyao", "followers_url": "https://api.github.com/users/xixiaoyao/followers", "following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}", "gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}", "starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions", "organizations_url": "https://api.github.com/users/xixiaoyao/orgs", "repos_url": "https://api.github.com/users/xixiaoyao/repos", "events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}", "received_events_url": "https://api.github.com/users/xixiaoyao/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,658
1,658
NONE
null
### System Info
```shell
densephrase == 1.0
- `transformers` version: 2.9.0
- Platform: Linux-4.14.0_1-0-0-39-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.13
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```

### Who can help?
I installed DensePhrases exactly following the README [here](https://github.com/princeton-nlp/DensePhrases), and when I run `make draft MODEL_NAME=test` it raises the error shown below.

![image](https://user-images.githubusercontent.com/24541791/173757450-0f05b89a-ba95-4c6c-a17e-358331e44ce2.png)

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
After installing DensePhrases (https://github.com/princeton-nlp/DensePhrases), just run: `make draft MODEL_NAME=test`

### Expected behavior
```shell
no error
```
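A hedged aside, not confirmed as the cause here: "StopIteration: Caught StopIteration in replica 0" is the classic symptom of `next(module.parameters())` being called inside a module replicated by `torch.nn.DataParallel`, whose replicas expose no parameters on PyTorch >= 1.5; older `transformers` releases such as the 2.9.0 pinned here use that idiom. A defensive sketch of the pattern:

```python
import torch

def first_param_dtype(module: torch.nn.Module) -> torch.dtype:
    # Inside a DataParallel replica, module.parameters() can be empty, so a bare
    # next(...) raises StopIteration, which surfaces as the error quoted above.
    try:
        return next(module.parameters()).dtype
    except StopIteration:
        return torch.float32  # assumed fallback for the sketch
```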
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17707/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17706
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17706/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17706/comments
https://api.github.com/repos/huggingface/transformers/issues/17706/events
https://github.com/huggingface/transformers/issues/17706
1,271,659,472
I_kwDOCUB6oc5Ly_vQ
17,706
QuestionAnsweringPipeline returns full context in Japanese
{ "login": "KoichiYasuoka", "id": 15098598, "node_id": "MDQ6VXNlcjE1MDk4NTk4", "avatar_url": "https://avatars.githubusercontent.com/u/15098598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KoichiYasuoka", "html_url": "https://github.com/KoichiYasuoka", "followers_url": "https://api.github.com/users/KoichiYasuoka/followers", "following_url": "https://api.github.com/users/KoichiYasuoka/following{/other_user}", "gists_url": "https://api.github.com/users/KoichiYasuoka/gists{/gist_id}", "starred_url": "https://api.github.com/users/KoichiYasuoka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KoichiYasuoka/subscriptions", "organizations_url": "https://api.github.com/users/KoichiYasuoka/orgs", "repos_url": "https://api.github.com/users/KoichiYasuoka/repos", "events_url": "https://api.github.com/users/KoichiYasuoka/events{/privacy}", "received_events_url": "https://api.github.com/users/KoichiYasuoka/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "I suspect that \"encoding\" in Japanese models do not work at https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L452\r\nbut I'm vague how to fix.", "Hi @KoichiYasuoka 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗 \r\n\r\n(Since the issue is about the quality of the output, it's probably model-related, and not a bug per se. In any case, if you suspect it is due to a bug in `transformers`, please add more information here)", "Hi @KoichiYasuoka ,\r\n\r\nThis seems to be linked to the pipeline attempts to align on \"words\". The problem is that this japanese tokenizer does not ever cut on \"words\" so the whole context is a single word, so the realignment just forgets all about the actual answer, which is a bit sad.\r\n\r\nI created a PR to include a new parameter to disable this so it can work on your use case (I personally think it should be the default but we cannot change this because of backward compatibility)\r\n\r\n", "Thank you @Narsil for creating new PR with `align_to_words=False` option. Well, can I use the option in the `widget` of [deberta-base-japanese-aozora-ud-head](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora-ud-head) page?", "Hi, the PR is not merged yet, and it will take a few days before it lands on the API (API doesn't run master).\r\n\r\nAfterwards, while being undocumented and thus maybe deactivated at anytime (though we rarely do this), you could send `align_to_words: false` within the `parameters` part of your query to the API.\r\n\r\nUnfortunately the widget itself will not use parameters.\r\n\r\nDoes that answer your question ?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,660
1,660
CONTRIBUTOR
null
### System Info
```shell
- `transformers` version: 4.19.4
- Platform: Linux-5.10.0-13-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.1.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
```

### Who can help?
@Narsil @sgugger

### Information
- [X] The official example scripts
- [X] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction
`QuestionAnsweringPipeline` (almost always) returns the full `context` in Japanese, for example:

```py
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, QuestionAnsweringPipeline

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
model = AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
qap = QuestionAnsweringPipeline(tokenizer=tokenizer, model=model)
print(qap(question="国語", context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```

returns `{'score': 0.9999955892562866, 'start': 0, 'end': 30, 'answer': '全学年にわたって小学校の国語の教科書に挿し絵が用いられている'}`. On the other hand, directly with `torch.argmax`:

```py
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
model = AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
question = "国語"
context = "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
inputs = tokenizer(question, context, return_tensors="pt", return_offsets_mapping=True)
offsets = inputs.pop("offset_mapping").tolist()[0]
outputs = model(**inputs)
start, end = torch.argmax(outputs.start_logits), torch.argmax(outputs.end_logits)
print(context[offsets[start][0]:offsets[end][-1]])
```

the model returns the answer "教科書" correctly.

### Expected behavior
```shell
Return the right answer "教科書" instead of the full context.
```
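Per the maintainer comments above, the linked PR adds an `align_to_words` flag to the pipeline; a hedged sketch of the opt-out once it lands:

```py
# align_to_words=False skips the word-alignment step that swallows the answer for
# tokenizers that never split on whitespace (parameter name per the PR discussion above).
print(qap(question="国語",
          context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている",
          align_to_words=False))
```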
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17706/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17705
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17705/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17705/comments
https://api.github.com/repos/huggingface/transformers/issues/17705/events
https://github.com/huggingface/transformers/issues/17705
1,271,595,435
I_kwDOCUB6oc5LywGr
17,705
mBART generates random strings at the end of sentences
{ "login": "ZeguanXiao", "id": 38279341, "node_id": "MDQ6VXNlcjM4Mjc5MzQx", "avatar_url": "https://avatars.githubusercontent.com/u/38279341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZeguanXiao", "html_url": "https://github.com/ZeguanXiao", "followers_url": "https://api.github.com/users/ZeguanXiao/followers", "following_url": "https://api.github.com/users/ZeguanXiao/following{/other_user}", "gists_url": "https://api.github.com/users/ZeguanXiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZeguanXiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZeguanXiao/subscriptions", "organizations_url": "https://api.github.com/users/ZeguanXiao/orgs", "repos_url": "https://api.github.com/users/ZeguanXiao/repos", "events_url": "https://api.github.com/users/ZeguanXiao/events{/privacy}", "received_events_url": "https://api.github.com/users/ZeguanXiao/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[]
1,655
1,655
1,655
NONE
null
### System Info ```shell transformers==4.19.2 ``` ### Who can help? @patil-suraj, @patrickvonplaten, @Narsil, @gante ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Since this happens in my personal project and the code is too custom, I just paste the results and the ```generate``` call. As shown below, sometimes mBART doesn't stop generating the sentence properly and instead generates some strange tokens. ``` if "25" in self.hparams.model_name_or_path: generated_tokens = self.model.generate(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], decoder_start_token_id=self.tokenizer.lang_code_to_id[self.hparams.tgt_lang], do_sample=True, temperature=self.hparams.temperature, num_return_sequences=self.hparams.n_hypothesis, max_length=MAX_LENGTH) else: generated_tokens = self.model.generate(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], forced_bos_token_id=self.tokenizer.lang_code_to_id[self.hparams.tgt_lang], do_sample=True, temperature=self.hparams.temperature, num_return_sequences=self.hparams.n_hypothesis, max_length=MAX_LENGTH) ``` <img width="867" alt="屏幕快照 2022-06-15 上午10 40 25" src="https://user-images.githubusercontent.com/38279341/173725168-c867fc87-6f0f-4205-a76e-bf70b5a946d6.png"> <img width="412" alt="屏幕快照 2022-06-15 上午10 44 52" src="https://user-images.githubusercontent.com/38279341/173725722-271fb4d4-ccdc-47b0-be5c-31296c596fb2.png"> ### Expected behavior ```shell mBART should always generate a valid sentence. ```
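For reference, a minimal sketch of the standard mBART-50 generation setup from the documentation; beam search with an explicit `max_length` is usually less prone to trailing noise than high-temperature sampling. The checkpoint and language codes here are illustrative, not the reporter's actual setup:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX"
)

inputs = tokenizer("The head of the UN says there is no military solution in Syria.", return_tensors="pt")
generated_tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"],  # target language token
    num_beams=5,    # deterministic beam search instead of temperature sampling
    max_length=64,
)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```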
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17705/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17704
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17704/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17704/comments
https://api.github.com/repos/huggingface/transformers/issues/17704/events
https://github.com/huggingface/transformers/issues/17704
1,271,108,260
I_kwDOCUB6oc5Lw5Kk
17,704
Cannot run run_qa.py due to "ImportError: cannot import name 'send_example_telemetry'"
{ "login": "maria364", "id": 13906264, "node_id": "MDQ6VXNlcjEzOTA2MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/13906264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maria364", "html_url": "https://github.com/maria364", "followers_url": "https://api.github.com/users/maria364/followers", "following_url": "https://api.github.com/users/maria364/following{/other_user}", "gists_url": "https://api.github.com/users/maria364/gists{/gist_id}", "starred_url": "https://api.github.com/users/maria364/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maria364/subscriptions", "organizations_url": "https://api.github.com/users/maria364/orgs", "repos_url": "https://api.github.com/users/maria364/repos", "events_url": "https://api.github.com/users/maria364/events{/privacy}", "received_events_url": "https://api.github.com/users/maria364/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, did you solve this problem, I have the same problem now", "@maria364 \r\nHi, did you solve this problem, I have the same problem now TAT", "Hi, I am experiencing the same problem when trying to run \"run_mlm.py\".", "Hi, @maria364 @xueqianyi @DidiDerDenker Could you try the latest version of `transformers`? This should fix the issue I believe.\r\n\r\n", "> \r\n\r\nThanks so much!And that makes sense:\r\n`pip install git+https://github.com/huggingface/transformers`" ]
1,655
1,656
1,655
NONE
null
### System Info ```shell - `transformers` version: 4.18.0.dev0 - Platform: Linux-4.15.0-180-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.10.0+cu102 (True) - Tensorflow version (GPU?): 2.6.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am trying to run the [run_qa.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py) using the corresponding command provided in the instructions. However, I face the following issue: Traceback (most recent call last): File "run_qa.py", line 45, in <module> from transformers.utils import check_min_version, send_example_telemetry ImportError: cannot import name 'send_example_telemetry' ### Expected behavior ```shell I would expect to see the following values, as mentioned in the instructions. f1 = 88.52 exact_match = 81.22 ```
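A minimal sanity check before running the example scripts; the version cutoff below is an assumption, since `send_example_telemetry` is simply absent from the 4.18 dev build reported above:

```python
import transformers
from transformers.utils import check_min_version

print(transformers.__version__)
# The examples on `main` track the development version, so a source install
# (pip install git+https://github.com/huggingface/transformers) is the fix the
# maintainers suggest in the comments above. "4.20.0" is an assumed minimum.
check_min_version("4.20.0")
```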
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17704/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17703
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17703/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17703/comments
https://api.github.com/repos/huggingface/transformers/issues/17703/events
https://github.com/huggingface/transformers/issues/17703
1,271,015,965
I_kwDOCUB6oc5Lwiod
17,703
Add Flax implementation for BLOOM
{ "login": "haileyschoelkopf", "id": 65563625, "node_id": "MDQ6VXNlcjY1NTYzNjI1", "avatar_url": "https://avatars.githubusercontent.com/u/65563625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haileyschoelkopf", "html_url": "https://github.com/haileyschoelkopf", "followers_url": "https://api.github.com/users/haileyschoelkopf/followers", "following_url": "https://api.github.com/users/haileyschoelkopf/following{/other_user}", "gists_url": "https://api.github.com/users/haileyschoelkopf/gists{/gist_id}", "starred_url": "https://api.github.com/users/haileyschoelkopf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haileyschoelkopf/subscriptions", "organizations_url": "https://api.github.com/users/haileyschoelkopf/orgs", "repos_url": "https://api.github.com/users/haileyschoelkopf/repos", "events_url": "https://api.github.com/users/haileyschoelkopf/events{/privacy}", "received_events_url": "https://api.github.com/users/haileyschoelkopf/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi!\nThank you very much for the contribution!\nOn my side it's a green light since I am not working on it, and it is not on my plans for now. Therefore, I'll be happy to review it! Let us know if you want to work on that :)", "Thanks! I will open a WIP PR soon and tag you there once I do.", "Very cool idea - think this can also be a flagship project where we can showcase how to fine-tune BLOOM with Flax cc @patil-suraj @sanchit-gandhi ", "Awesome! Would be very happy to help with it :) ", "Great idea! Would also be interested in getting involved, this would be a super cool model addition!", "Thanks everyone for the interest! I'd love to collaborate with you all.\r\n\r\nI'm hoping to push a rough draft of modeling code by the end of the weekend (earlier if I have time), and will tag you all when I open the PR with that. Does that sound alright?", "I've opened a PR (and documented the state of the in-progress code I'm still working on) at #17761 ! We can discuss further in that PR how to collaborate / proceed." ]
1,655
1,655
null
CONTRIBUTOR
null
### Model description I'm interested in adding an implementation of BLOOM in Flax. The implementation shouldn't be too bad since the PyTorch implementation can serve as a guide and a way to check correctness. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation @younesbelkada @stas00 @patrickvonplaten If someone is already planning to work on this then no worries, but if not I will start on this as soon as I have time!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17703/reactions", "total_count": 9, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 2, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17703/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/17702
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17702/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17702/comments
https://api.github.com/repos/huggingface/transformers/issues/17702/events
https://github.com/huggingface/transformers/pull/17702
1,270,960,492
PR_kwDOCUB6oc45pEek
17,702
[LongT5] disable model parallel test
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thank you for the quick action, @patil-suraj ❤️\r\n> \r\n> Just to know: we no longer add `parallelize` to new models, right, like what @patrickvonplaten said it's outdated?\r\n\r\nYes, because now any model can be parallelized using the sharded checkpoint and accelerate utils that Sylvain added. cf https://github.com/huggingface/transformers/pull/17341" ]
1,655
1,655
1,655
MEMBER
null
# What does this PR do? LongT5 doesn't implement the old model parallel logic. This PR disables the model parallel tests for LongT5.
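For context, a minimal sketch of the newer parallelization path mentioned in the comments (sharded checkpoints plus Accelerate dispatch) that replaces the legacy `parallelize()` API; it requires the `accelerate` package, and the checkpoint name follows the renaming from #17700:

```python
from transformers import AutoModelForSeq2SeqLM

# Splits the model's layers across all visible GPUs (and CPU, if necessary)
# instead of calling the now-legacy model.parallelize().
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/long-t5-tglobal-base",
    device_map="auto",
)
```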
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17702/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17702/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17702", "html_url": "https://github.com/huggingface/transformers/pull/17702", "diff_url": "https://github.com/huggingface/transformers/pull/17702.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17702.patch", "merged_at": 1655220460000 }
https://api.github.com/repos/huggingface/transformers/issues/17701
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17701/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17701/comments
https://api.github.com/repos/huggingface/transformers/issues/17701/events
https://github.com/huggingface/transformers/pull/17701
1,270,879,599
PR_kwDOCUB6oc45ozS5
17,701
Add a TF in-graph tokenizer for BERT
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger @gante I'm happy with this PR now, so I'm ready for a final review! Also, the tests are slightly slow (~1min for the whole test suite on a single core). Should I mark some of them as `@slow`, or will they only be run nightly and when a PR affects the BERT directory anyway?" ]
1,655
1,656
1,656
MEMBER
null
Hey all! I've done some testing and the in-graph BERT tokenizer is now yielding the same outputs as our tokenizers, even for multi-part texts where we need to concatenate and get `token_type_ids` right. There are still several things left to do before this is ready, but I figured now is the time to lay it out and get some feedback! Left to do: - [x] Add input normalization - [x] Is texts_a / texts_b the right way to handle inputs? Should it just be a multidimensional tensor? - [x] Should this be a complete class rather than reading attributes from an existing tokenizer? - [x] Add imports and maybe some kind of AutoModel to make this findable by users - [x] Add dependency for tensorflow_text and import checks - [x] Do we need to change the name? Most TF users will still want the normal tokenizers - [x] Add tests, particularly one with a full model and one with saving to savedmodel, as these are main use cases - [x] Currently always pads to max length - this should be an option - [x] Add documentation - [ ] Consider adding docs on how to add other TF tokenizers, so we can see if users want to add them once we have a framework in place?
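A minimal sketch of the end-to-end use case this PR targets; the class name comes from the PR itself, and `tensorflow_text` must be installed:

```python
import tensorflow as tf
from transformers import TFAutoModel, TFBertTokenizer


class EndToEndModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Tokenization happens inside the TF graph, so string tensors go
        # straight in and the whole model can be exported as a SavedModel.
        self.tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
        self.bert = TFAutoModel.from_pretrained("bert-base-uncased")

    def call(self, texts):
        tokenized = self.tokenizer(texts)  # dict of dense int tensors
        return self.bert(**tokenized).last_hidden_state


model = EndToEndModel()
print(model(tf.constant(["The tokenizer lives in the graph!"])).shape)
```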
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17701/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17701/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17701", "html_url": "https://github.com/huggingface/transformers/pull/17701", "diff_url": "https://github.com/huggingface/transformers/pull/17701.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17701.patch", "merged_at": 1656327981000 }
https://api.github.com/repos/huggingface/transformers/issues/17700
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17700/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17700/comments
https://api.github.com/repos/huggingface/transformers/issues/17700/events
https://github.com/huggingface/transformers/pull/17700
1,270,650,698
PR_kwDOCUB6oc45oCKX
17,700
[LongT5] Rename checkpoints
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@patrickvonplaten Thanks for fixing this! I realized later it's not ideal to use any capital letters." ]
1,655
1,655
1,655
MEMBER
null
# What does this PR do? LongT5 checkpoint names didn't follow the "standard" Transformers naming. They have already been changed on the Hub and need to be changed in Transformers as well. cc @patil-suraj ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17700/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17700/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17700", "html_url": "https://github.com/huggingface/transformers/pull/17700", "diff_url": "https://github.com/huggingface/transformers/pull/17700.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17700.patch", "merged_at": 1655208651000 }
https://api.github.com/repos/huggingface/transformers/issues/17699
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17699/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17699/comments
https://api.github.com/repos/huggingface/transformers/issues/17699/events
https://github.com/huggingface/transformers/pull/17699
1,270,643,752
PR_kwDOCUB6oc45oApk
17,699
Update-longt5
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,655
1,655
1,655
MEMBER
null
# What does this PR do? Fix checkpoint names in LongT5. cc @stancld
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17699/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17699", "html_url": "https://github.com/huggingface/transformers/pull/17699", "diff_url": "https://github.com/huggingface/transformers/pull/17699.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17699.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17698
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17698/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17698/comments
https://api.github.com/repos/huggingface/transformers/issues/17698/events
https://github.com/huggingface/transformers/pull/17698
1,270,641,018
PR_kwDOCUB6oc45oADW
17,698
Italian/accelerate
{ "login": "mfumanelli", "id": 53374883, "node_id": "MDQ6VXNlcjUzMzc0ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/53374883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfumanelli", "html_url": "https://github.com/mfumanelli", "followers_url": "https://api.github.com/users/mfumanelli/followers", "following_url": "https://api.github.com/users/mfumanelli/following{/other_user}", "gists_url": "https://api.github.com/users/mfumanelli/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfumanelli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfumanelli/subscriptions", "organizations_url": "https://api.github.com/users/mfumanelli/orgs", "repos_url": "https://api.github.com/users/mfumanelli/repos", "events_url": "https://api.github.com/users/mfumanelli/events{/privacy}", "received_events_url": "https://api.github.com/users/mfumanelli/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,658
1,658
CONTRIBUTOR
null
# What does this PR do? Italian translation of accelerate.mdx See issue: https://github.com/huggingface/transformers/issues/17459 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). @omarespejel @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17698/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17698", "html_url": "https://github.com/huggingface/transformers/pull/17698", "diff_url": "https://github.com/huggingface/transformers/pull/17698.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17698.patch", "merged_at": 1658406228000 }
https://api.github.com/repos/huggingface/transformers/issues/17697
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17697/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17697/comments
https://api.github.com/repos/huggingface/transformers/issues/17697/events
https://github.com/huggingface/transformers/issues/17697
1,270,350,727
I_kwDOCUB6oc5LuAOH
17,697
Need the ability to modify PSM values for Tesseract call in LayoutLM V2/ XLM / V3 Processor
{ "login": "kelvinAI", "id": 10686779, "node_id": "MDQ6VXNlcjEwNjg2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kelvinAI", "html_url": "https://github.com/kelvinAI", "followers_url": "https://api.github.com/users/kelvinAI/followers", "following_url": "https://api.github.com/users/kelvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions", "organizations_url": "https://api.github.com/users/kelvinAI/orgs", "repos_url": "https://api.github.com/users/kelvinAI/repos", "events_url": "https://api.github.com/users/kelvinAI/events{/privacy}", "received_events_url": "https://api.github.com/users/kelvinAI/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @NielsRogge ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,659
1,659
CONTRIBUTOR
null
### Feature request There exists a need to modify PSM values while calling Tesseract during the feature extraction stage with OCR enabled. https://github.com/huggingface/transformers/blob/31ee80d55673f32c0f5d50936f371e661b74b21a/src/transformers/models/layoutlmv3/feature_extraction_layoutlmv3.py#L53 ### Motivation Changing Page Segmentation Mode (PSM) values has a significant impact on the output of all LayoutLM models, depending on the type/formatting of the input document. The default PSM value is 3, which is not optimal in every situation. It would be helpful if users could modify PSM values based on different document types. [PSM Reference](https://stackoverflow.com/questions/44619077/pytesseract-ocr-multiple-config-options) ### Your contribution I've already created a PR in a branch (https://github.com/huggingface/transformers/pull/17005) for LMV2/XLM and am currently using it for my own projects, but it would be better if the official repo had it so there is no need to keep maintaining/updating my own fork. Hoping to see this feature added to LayoutLMV3!
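Until a Tesseract-config passthrough like the linked PR lands, a minimal sketch of the usual workaround: run Tesseract yourself with a custom PSM and hand the words and (0-1000 normalized) boxes to the processor with `apply_ocr=False`. Checkpoint and file names are illustrative:

```python
import pytesseract
from PIL import Image
from transformers import LayoutLMv3Processor

processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)

image = Image.open("document.png").convert("RGB")
data = pytesseract.image_to_data(image, config="--psm 6", output_type=pytesseract.Output.DICT)

words, boxes = [], []
for text, x, y, w, h in zip(data["text"], data["left"], data["top"], data["width"], data["height"]):
    if text.strip():
        words.append(text)
        # LayoutLM expects boxes in a 0-1000 coordinate space
        boxes.append([
            int(1000 * x / image.width),
            int(1000 * y / image.height),
            int(1000 * (x + w) / image.width),
            int(1000 * (y + h) / image.height),
        ])

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
```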
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17697/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17696
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17696/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17696/comments
https://api.github.com/repos/huggingface/transformers/issues/17696/events
https://github.com/huggingface/transformers/issues/17696
1,270,350,542
I_kwDOCUB6oc5LuALO
17,696
TypeError: can't pickle _thread.lock objects
{ "login": "abs-xyz", "id": 96618009, "node_id": "U_kgDOBcJGGQ", "avatar_url": "https://avatars.githubusercontent.com/u/96618009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abs-xyz", "html_url": "https://github.com/abs-xyz", "followers_url": "https://api.github.com/users/abs-xyz/followers", "following_url": "https://api.github.com/users/abs-xyz/following{/other_user}", "gists_url": "https://api.github.com/users/abs-xyz/gists{/gist_id}", "starred_url": "https://api.github.com/users/abs-xyz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abs-xyz/subscriptions", "organizations_url": "https://api.github.com/users/abs-xyz/orgs", "repos_url": "https://api.github.com/users/abs-xyz/repos", "events_url": "https://api.github.com/users/abs-xyz/events{/privacy}", "received_events_url": "https://api.github.com/users/abs-xyz/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello! Facing the same issue with the following system info: \r\n```\r\ndatasets==2.5.1\r\nhuggingface-hub==0.10.0\r\nmultidict==6.0.2\r\nmultiprocess==0.70.13\r\nnumpy==1.23.3\r\ntokenizers==0.12.1\r\ntorch==1.9.0+cu111\r\ntorchaudio==0.9.0\r\ntqdm==4.64.1\r\ntransformers==4.22.2\r\n```\r\n\r\nThis issue is only triggered when I keep load_best_model_at_end as True (I am not doing any hyperparameter search): Training code and stack trace are: \r\n\r\n### Training Code with Trigger \r\n```\r\ntraining_args = TrainingArguments(\r\n output_dir=f'../asr/models_src_raw/{args.lang}',\r\n overwrite_output_dir = True, \r\n group_by_length=True,\r\n per_device_train_batch_size=16,\r\n gradient_accumulation_steps=2,\r\n evaluation_strategy=\"steps\",\r\n num_train_epochs=80,\r\n gradient_checkpointing=True,\r\n fp16=True,\r\n save_steps=10,\r\n eval_steps=10,\r\n logging_steps=10,\r\n learning_rate=3e-4,\r\n warmup_steps=300,\r\n save_total_limit=1,\r\n load_best_model_at_end = True, \r\n metric_for_best_model = wer_metric, \r\n skip_memory_metrics = True\r\n )\r\n\r\n trainer = Trainer(\r\n model=model,\r\n data_collator=data_collator,\r\n args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=train,\r\n eval_dataset =test,\r\n tokenizer=processor.feature_extractor,\r\n )\r\n\r\n trainer.train()\r\n\r\n```\r\n(If I remove the load_best_model_at_end, Works smoothly) ", "Hi, with transformers 4.26.1 on Sage maker I am still having this error: TypeError: cannot pickle '_thread.lock' object.\r\n\r\ndef hp_space(trial):\r\nreturn {\r\n\"learning_rate\": trial.suggest_float(\"learning_rate\", 1e-5, 1e-3, log=True),\r\n\"num_train_epochs\": trial.suggest_int(\"num_train_epochs\", 1, 10),\r\n\"seed\": trial.suggest_int(\"seed\", 1, 40),\r\n\"per_device_train_batch_size\": trial.suggest_categorical(\"per_device_train_batch_size\", [16, 32, 64]),\r\n\"weight_decay\": trial.suggest_float(\"weight_decay\", 1e-3, 1e-1, log=True),\r\n}\r\n\r\nbest_run = trainer.hyperparameter_search(n_trials=20, direction=\"minimize\", hp_space=hp_space)" ]
1,655
1,677
1,658
NONE
null
### System Info ```shell - `transformers` version: 4.19.4 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0+cu113 (False) - Tensorflow version (GPU?): 2.8.2 (False) [Google Colab] ``` ### Who can help? @amogkam ### Information - [ ] My own modified scripts - [ ] The official example scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Note: this is related to [this closed issue](https://github.com/huggingface/transformers/issues/11249). This is the code I'm using: ``` args = TrainingArguments( f"{model_name}-hyperp-{task}", evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=3, weight_decay=0.01, skip_memory_metrics=True, # https://github.com/huggingface/transformers/issues/12177 [picking error] ) trainer = Trainer( model_init=model_init, # function to initialize model (using 'from_pretrained') args=args, train_dataset = tokenized_datasets["train"], eval_dataset = tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics, ) trainer.hyperparameter_search( hp_space=lambda _: tune_config, backend="ray", n_trials=10, resources_per_trial={"cpu": 1, "gpu": 0}, scheduler=scheduler, keep_checkpoints_num=1, checkpoint_score_attr="training_iteration", progress_reporter=reporter, local_dir="/ray_results/", name="tune_transformer_pbt", log_to_file=True, ) ``` Error: ``` TypeError Traceback (most recent call last) [<ipython-input-50-3716493001d6>](https://localhost:8080/#) in <module>() 33 local_dir="/ray_results/", 34 name="tune_transformer_pbt", ---> 35 log_to_file=True, 36 ) 37 [/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs) 2083 HPSearchBackend.WANDB: run_hp_search_wandb, 2084 } -> 2085 best_run = backend_dict[backend](self, n_trials, direction, **kwargs) 2086 2087 self.hp_search_backend = None [/usr/local/lib/python3.7/dist-packages/transformers/integrations.py](https://localhost:8080/#) in run_hp_search_ray(trainer, n_trials, direction, **kwargs) 296 config=trainer.hp_space(None), 297 num_samples=n_trials, --> 298 **kwargs, 299 ) 300 best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3]) [/usr/local/lib/python3.7/dist-packages/ray/tune/tune.py](https://localhost:8080/#) in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, max_concurrent_trials, _experiment_checkpoint_dir, loggers, _remote) 363 364 if not trial_executor or isinstance(trial_executor, RayTrialExecutor): --> 365 _ray_auto_init() 366 367 if _remote: [/usr/local/lib/python3.7/dist-packages/ray/tune/tune.py](https://localhost:8080/#) in _ray_auto_init() 876 "call `ray.init(...)` before `tune.run`." 
877 ) --> 878 ray.init() [/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 103 if func.__name__ != "init" or is_client_mode_enabled_by_default: 104 return getattr(ray, func.__name__)(*args, **kwargs) --> 105 return func(*args, **kwargs) 106 107 return wrapper [/usr/local/lib/python3.7/dist-packages/ray/worker.py](https://localhost:8080/#) in init(address, num_cpus, num_gpus, resources, object_store_memory, local_mode, ignore_reinit_error, include_dashboard, dashboard_host, dashboard_port, job_config, configure_logging, logging_level, logging_format, log_to_driver, namespace, runtime_env, storage, _enable_object_reconstruction, _redis_max_memory, _plasma_directory, _node_ip_address, _driver_object_store_memory, _memory, _redis_password, _temp_dir, _metrics_export_port, _system_config, _tracing_startup_hook, _node_name, **kwargs) 1120 1121 for hook in _post_init_hooks: -> 1122 hook() 1123 1124 node_id = global_worker.core_worker.get_current_node_id() [/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py](https://localhost:8080/#) in flush(self) 230 self.references[k] = v 231 else: --> 232 self.references[k] = ray.put(v) 233 self.to_flush.clear() [/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 103 if func.__name__ != "init" or is_client_mode_enabled_by_default: 104 return getattr(ray, func.__name__)(*args, **kwargs) --> 105 return func(*args, **kwargs) 106 107 return wrapper [/usr/local/lib/python3.7/dist-packages/ray/worker.py](https://localhost:8080/#) in put(value, _owner) 1892 with profiling.profile("ray.put"): 1893 try: -> 1894 object_ref = worker.put_object(value, owner_address=serialize_owner_address) 1895 except ObjectStoreFullError: 1896 logger.info( [/usr/local/lib/python3.7/dist-packages/ray/worker.py](https://localhost:8080/#) in put_object(self, value, object_ref, owner_address) 305 ), "Local Mode does not support inserting with an ObjectRef" 306 --> 307 serialized_value = self.get_serialization_context().serialize(value) 308 # This *must* be the first place that we construct this python 309 # ObjectRef because an entry with 0 local references is created when [/usr/local/lib/python3.7/dist-packages/ray/serialization.py](https://localhost:8080/#) in serialize(self, value) 419 return RawSerializedObject(value) 420 else: --> 421 return self._serialize_to_msgpack(value) [/usr/local/lib/python3.7/dist-packages/ray/serialization.py](https://localhost:8080/#) in _serialize_to_msgpack(self, value) 398 metadata = ray_constants.OBJECT_METADATA_TYPE_PYTHON 399 pickle5_serialized_object = self._serialize_to_pickle5( --> 400 metadata, python_objects 401 ) 402 else: [/usr/local/lib/python3.7/dist-packages/ray/serialization.py](https://localhost:8080/#) in _serialize_to_pickle5(self, metadata, value) 359 except Exception as e: 360 self.get_and_clear_contained_object_refs() --> 361 raise e 362 finally: 363 self.set_out_of_band_serialization() [/usr/local/lib/python3.7/dist-packages/ray/serialization.py](https://localhost:8080/#) in _serialize_to_pickle5(self, metadata, value) 355 self.set_in_band_serialization() 356 inband = pickle.dumps( --> 357 value, protocol=5, buffer_callback=writer.buffer_callback 358 ) 359 except Exception as e: [/usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle_fast.py](https://localhost:8080/#) in dumps(obj, protocol, buffer_callback) 71 file, protocol=protocol, 
buffer_callback=buffer_callback 72 ) ---> 73 cp.dump(obj) 74 return file.getvalue() 75 [/usr/local/lib/python3.7/dist-packages/ray/cloudpickle/cloudpickle_fast.py](https://localhost:8080/#) in dump(self, obj) 618 def dump(self, obj): 619 try: --> 620 return Pickler.dump(self, obj) 621 except RuntimeError as e: 622 if "recursion" in e.args[0]: TypeError: can't pickle _thread.lock objects ``` I've already added `skip_memory_metrics=True` in the `TrainingArguments`. ### Expected behavior ```shell Expecting the hyperparameter search to run without this error. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17696/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17695
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17695/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17695/comments
https://api.github.com/repos/huggingface/transformers/issues/17695/events
https://github.com/huggingface/transformers/issues/17695
1,269,799,851
I_kwDOCUB6oc5Lr5ur
17,695
Difference in the number of data samples during deep learning
{ "login": "rurujisu", "id": 107429090, "node_id": "U_kgDOBmc84g", "avatar_url": "https://avatars.githubusercontent.com/u/107429090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rurujisu", "html_url": "https://github.com/rurujisu", "followers_url": "https://api.github.com/users/rurujisu/followers", "following_url": "https://api.github.com/users/rurujisu/following{/other_user}", "gists_url": "https://api.github.com/users/rurujisu/gists{/gist_id}", "starred_url": "https://api.github.com/users/rurujisu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rurujisu/subscriptions", "organizations_url": "https://api.github.com/users/rurujisu/orgs", "repos_url": "https://api.github.com/users/rurujisu/repos", "events_url": "https://api.github.com/users/rurujisu/events{/privacy}", "received_events_url": "https://api.github.com/users/rurujisu/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @rurujisu 👋 As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,658
1,658
NONE
null
### System Info ```shell Python 3 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import matplotlib.pyplot as plt import matplotlib.tri as tri import numpy as np import pandas as pd from keras.models import Sequential from keras.layers import Dense, Activation input = pd.read_csv('D:\\puri\\capture\\3_1\\all\\inlet.csv', sep='\t', skiprows=0) output = pd.read_csv('D:\\puri\\capture\\3_1\\all\\output.csv', sep='\t', skiprows=0) model = Sequential([ Dense(32, input_shape=(784,)), Activation('relu'), Dense(10), Activation('softmax'), ]) model = Sequential() model.add(Dense(32, input_dim=784)) model.add(Activation('relu')) ### Expected behavior ```shell Hi, I just started studying machine learning. I followed a machine learning example, but I can't apply it, so I'm asking here. I want to know the correlation between input data and output data, but the number of input samples is 500 (500 x 7) and the number of output samples is 1.8 million (1.8M x 4). In this case, what model should I study? (inlet variables: 7, output variables: 4) And I have 5 input-output cases; is that enough to find the correlation? Thanks for reading; let me know if I need to find another way. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17695/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17694/comments
https://api.github.com/repos/huggingface/transformers/issues/17694/events
https://github.com/huggingface/transformers/issues/17694
1,269,737,282
I_kwDOCUB6oc5LrqdC
17,694
In run_mlm.py the group_texts function incorrectly splits the text into lists of chars
{ "login": "simonhughes22", "id": 2167017, "node_id": "MDQ6VXNlcjIxNjcwMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2167017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonhughes22", "html_url": "https://github.com/simonhughes22", "followers_url": "https://api.github.com/users/simonhughes22/followers", "following_url": "https://api.github.com/users/simonhughes22/following{/other_user}", "gists_url": "https://api.github.com/users/simonhughes22/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonhughes22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonhughes22/subscriptions", "organizations_url": "https://api.github.com/users/simonhughes22/orgs", "repos_url": "https://api.github.com/users/simonhughes22/repos", "events_url": "https://api.github.com/users/simonhughes22/events{/privacy}", "received_events_url": "https://api.github.com/users/simonhughes22/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "opened in accident, you are deleting that column" ]
1,655
1,655
1,655
NONE
null
### System Info ```shell While testing the code for run_mlm.py - https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py - to pull it into a separate notebook, I came across an issue with how the group_texts function is being used. Right now it correctly concatenates the input_ids and other lists, but it also operates on the text column (as that is one of the keys in the examples dict()) and incorrectly slices those strings as arrays, based on the token sequence length, which does not apply to the text string length. Furthermore, it produces a list of lists of chars, not a list of strings. I am concerned this may cause issues for the subsequent DataCollatorForLanguageModeling or any downstream classes that rely on the original input text. If this text were tokenized, the function would work, but in the current code it appears to retain the original text strings. ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. if total_length >= max_seq_length: total_length = (total_length // max_seq_length) * max_seq_length # Split by chunks of max_len. result = { k: [ t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)] for k, t in concatenated_examples.items() } return result tokenized_datasets = tokenized_datasets.map( group_texts, batched=True, num_proc=N_CPU, load_from_cache_file=True, desc=f"Grouping texts in chunks of {max_seq_length}", ) ``` ### Expected behavior ```shell For it to group and concatenate the tokenized data correctly without messing up the text field. ```
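The fix hinted at in the comment above is what run_mlm.py itself does earlier in the script: drop the raw text column when tokenizing, so `group_texts` only ever sees token lists. A minimal sketch using the names the script defines (`raw_datasets` and `tokenize_function`):

```python
column_names = raw_datasets["train"].column_names  # e.g. ["text"]

tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    num_proc=N_CPU,
    remove_columns=column_names,  # the "text" column is gone before group_texts runs
    desc="Tokenizing dataset",
)
```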
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17694/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17693/comments
https://api.github.com/repos/huggingface/transformers/issues/17693/events
https://github.com/huggingface/transformers/pull/17693
1,269,646,980
PR_kwDOCUB6oc45krye
17,693
Swin main layer
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do? Refactor the Swin model to have a `MainLayer` which is called by all models to get the Swin outputs (pre-head). c.f. [relevant comment](https://github.com/huggingface/transformers/pull/17427#discussion_r895278064) from @sayakpaul on ResNet port The following script was run to check weights could still be successfully loaded into the TF models: ``` from transformers import AutoFeatureExtractor, TFSwinForImageClassification, TFSwinForMaskedImageModeling checkpoint = "microsoft/swin-tiny-patch4-window7-224" # relative_position_index isn't updated during training. In TF set as instance param print("\nTFSwinForImageClassification - from PyTorch checkpoint") tf_model = TFSwinForImageClassification.from_pretrained(checkpoint, from_pt=True) print("\nTFSwinForImageClassification - from TF checkpoint") tf_model = TFSwinForImageClassification.from_pretrained(checkpoint) # relative_position_index isn't updated during training. In TF set as instance param # We don't have a masked image modeling checkpoint - use image classification checkpoint # Some weights will not be used (classifier head) # Some weights newly initialised (decoder, mask token) print("\nTFSwinForMaskedImageModeling - from PyTorch checkpoint") tf_model = TFSwinForMaskedImageModeling.from_pretrained(checkpoint, from_pt=True) print("\nTFSwinForMaskedImageModeling - from TF checkpoint") tf_model = TFSwinForMaskedImageModeling.from_pretrained(checkpoint) ``` Produced the outputs: ``` TFSwinForImageClassification - from PyTorch checkpoint Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFSwinForImageClassification: ['swin.encoder.layers.1.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.1.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.4.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.3.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.5.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.2.attention.self.relative_position_index'] - This IS expected if you are initializing TFSwinForImageClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFSwinForImageClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model). All the weights of TFSwinForImageClassification were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFSwinForImageClassification for predictions without further training. TFSwinForImageClassification - from TF checkpoint All model checkpoint layers were used when initializing TFSwinForImageClassification. All the layers of TFSwinForImageClassification were initialized from the model checkpoint at microsoft/swin-tiny-patch4-window7-224. 
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFSwinForImageClassification for predictions without further training. TFSwinForMaskedImageModeling - from PyTorch checkpoint Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFSwinForMaskedImageModeling: ['classifier.weight', 'swin.encoder.layers.1.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.1.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.0.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.3.blocks.1.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.4.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.3.attention.self.relative_position_index', 'classifier.bias', 'swin.encoder.layers.2.blocks.5.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.0.attention.self.relative_position_index', 'swin.encoder.layers.2.blocks.2.attention.self.relative_position_index'] - This IS expected if you are initializing TFSwinForMaskedImageModeling from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFSwinForMaskedImageModeling from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model). Some weights or buffers of the TF 2.0 model TFSwinForMaskedImageModeling were not initialized from the PyTorch model and are newly initialized: ['swin.embeddings.mask_token', 'decoder.0.weight', 'decoder.0.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. TFSwinForMaskedImageModeling - from TF checkpoint Some layers from the model checkpoint at microsoft/swin-tiny-patch4-window7-224 were not used when initializing TFSwinForMaskedImageModeling: ['classifier'] - This IS expected if you are initializing TFSwinForMaskedImageModeling from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFSwinForMaskedImageModeling from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some layers of TFSwinForMaskedImageModeling were not initialized from the model checkpoint at microsoft/swin-tiny-patch4-window7-224 and are newly initialized: ['decoder', 'swin/embeddings/mask_token:0'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. 
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
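The `MainLayer` pattern described above, as a schematic sketch - the class names follow the PR description, but the shapes and the stubbed backbone are illustrative assumptions, not lines from the actual diff:

```python
import tensorflow as tf


class TFSwinMainLayer(tf.keras.layers.Layer):
    """Stand-in for the shared backbone; the real layer runs patch embeddings plus the Swin encoder."""

    def __init__(self, hidden_size=768, **kwargs):
        super().__init__(**kwargs)
        self.proj = tf.keras.layers.Dense(hidden_size)  # stub that keeps this sketch runnable

    def call(self, pixel_values, training=False):
        pooled = tf.reduce_mean(pixel_values, axis=[1, 2])  # (batch, channels)
        return self.proj(pooled)  # pre-head hidden state


class TFSwinForImageClassification(tf.keras.Model):
    def __init__(self, num_labels=1000):
        super().__init__()
        self.swin = TFSwinMainLayer(name="swin")  # the layer every head model shares
        self.classifier = tf.keras.layers.Dense(num_labels, name="classifier")

    def call(self, pixel_values, training=False):
        return self.classifier(self.swin(pixel_values, training=training))


model = TFSwinForImageClassification()
print(model(tf.random.uniform((1, 224, 224, 3))).shape)  # (1, 1000)
```

Because every head routes through the same `swin` sublayer, checkpoint weight names stay aligned across `TFSwinForImageClassification` and `TFSwinForMaskedImageModeling`, which is what the loading script above exercises.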
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17693/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17693", "html_url": "https://github.com/huggingface/transformers/pull/17693", "diff_url": "https://github.com/huggingface/transformers/pull/17693.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17693.patch", "merged_at": 1655213292000 }
https://api.github.com/repos/huggingface/transformers/issues/17692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17692/comments
https://api.github.com/repos/huggingface/transformers/issues/17692/events
https://github.com/huggingface/transformers/pull/17692
1,269,622,718
PR_kwDOCUB6oc45kmn2
17,692
Change push CI to run on workflow_run event
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger I merged this PR, you can check on the commit history page\r\n\r\n[Change push CI to run on workflow_run event](https://github.com/huggingface/transformers/commits/main)\r\n\r\nHope you ❤️ it!", "Amazing, thanks a lot!", "I am sorry to bother you again ..." ]
1,655
1,655
1,655
COLLABORATOR
null
# What does this PR do? The attempt in #17369 (to make commit history status checks less noisy) unfortunately has no effect. After a discussion in [this comment](https://github.com/huggingface/transformers/pull/17369#issuecomment-1153846717), this PR changes push CI to be triggered by an `on: workflow_run` event. Note the change only takes effect once this PR is merged into `main`, as mentioned in the documentation of [workflow_run](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run). The result would be like in [accelerate](https://github.com/huggingface/accelerate), where the jobs in `on-merge.yml` won't be shown, and the workflow run page looks like [this](https://github.com/huggingface/accelerate/actions/workflows/on-merge.yml).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17692/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17692", "html_url": "https://github.com/huggingface/transformers/pull/17692", "diff_url": "https://github.com/huggingface/transformers/pull/17692.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17692.patch", "merged_at": 1655307811000 }
https://api.github.com/repos/huggingface/transformers/issues/17691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17691/comments
https://api.github.com/repos/huggingface/transformers/issues/17691/events
https://github.com/huggingface/transformers/issues/17691
1,269,586,043
I_kwDOCUB6oc5LrFh7
17,691
"comet-ml not installed" error in Trainer (despite comet-ml being installed)
{ "login": "przecze", "id": 25085890, "node_id": "MDQ6VXNlcjI1MDg1ODkw", "avatar_url": "https://avatars.githubusercontent.com/u/25085890?v=4", "gravatar_id": "", "url": "https://api.github.com/users/przecze", "html_url": "https://github.com/przecze", "followers_url": "https://api.github.com/users/przecze/followers", "following_url": "https://api.github.com/users/przecze/following{/other_user}", "gists_url": "https://api.github.com/users/przecze/gists{/gist_id}", "starred_url": "https://api.github.com/users/przecze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/przecze/subscriptions", "organizations_url": "https://api.github.com/users/przecze/orgs", "repos_url": "https://api.github.com/users/przecze/repos", "events_url": "https://api.github.com/users/przecze/events{/privacy}", "received_events_url": "https://api.github.com/users/przecze/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "cc @sgugger ", "As the error message indicates, you need to have cometml installed to use it `report_to=\"comet_ml\"`\r\n```\r\nRuntimeError: CometCallback requires comet-ml to be installed. Run `pip install comet-ml`.\r\n```\r\nIt also tells you exactly which command to run to fix this: `pip install comet-ml`.", "Hey,\r\nThe issue here is that error appears despite cometml being installed (with pip).\r\n\r\nEDIT: Edited the issue title to make it more clear.\r\n\r\nOn Mon, Jul 4, 2022, 14:33 Sylvain Gugger ***@***.***> wrote:\r\n\r\n> As the error message indicates, you need to have cometml installed to use\r\n> it report_to=\"comet_ml\"\r\n>\r\n> RuntimeError: CometCallback requires comet-ml to be installed. Run `pip install comet-ml`.\r\n>\r\n> It also tells you exactly which command to run to fix this: pip install\r\n> comet-ml.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/17691#issuecomment-1173767326>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AF7MPQSGKFHH4UZWW3JTEWLVSLKYRANCNFSM5YURU4KQ>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Did you properly initialize it with your API key then?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@sgugger How to do it? In [this](https://huggingface.co/docs/transformers/main_classes/callback) doc, there's no mentioning about API key in comet callback. I tried set up COMET_API_KEY, COMET_MODE, COMET_PROJECT_NAME inside function that runs on spawn, but no luck so far. Also downgraded comet-ml till 3.1.17.\r\n\r\n`os.environ[\"COMET_API_KEY\"] = \"<api-key>\"`\r\n`os.environ[\"COMET_MODE\"] = \"ONLINE\"`\r\n`os.environ[\"COMET_PROJECT_NAME\"] = \"<project-name>\"`", "Maybe open an issue with them? We did not write this integration with comet-ml and we don't maintain it. It was written by the Comet team :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This is still an issue", "This is still an issue, please re-open or address it -- none of the suggested methods of integrating with Comet ML are working for me -- neither the report_to=\"comet_ml\" approach or the manual compute_metrics approach from this tutorial (https://www.comet.com/docs/v2/integrations/ml-frameworks/huggingface/).", "This is still an issue right now, please kindly consider reopen and resolve this problem ! Thank you.", "Still happens to me with comet_ml installed with both conda and pip.", "@ngctnnnn @guyshur @frmccann97 @maximejkb As mentioned above, we didn't add and don't maintain the comet-ml integration. Have you raised an issue in their [relevant repo](https://github.com/comet-ml)? " ]
1,655
1,703
1,662
NONE
null
### System Info ```shell - `transformers` version: 4.19.4 - Platform: Linux-4.19.0-17-amd64-x86_64-with-glibc2.31 - Python version: 3.9.6 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu) - Jax version: 0.3.4 - JaxLib version: 0.3.2 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help? @sg ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Install comet-ml (in my case comet-ml==3.31.3) 2. Create `TrainingArguments` with `report_to='comet_ml'` 3. Try to instantiate the Trainer This can be reproduced by adding `report_to='comet_ml'` to the training arguments in this notebook: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb The following error happens when creating the Trainer: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /tmp/ipykernel_5296/3132099784.py in <module> ----> 1 trainer = Trainer( 2 model, 3 args, 4 train_dataset=encoded_dataset["train"], 5 eval_dataset=encoded_dataset["validation"], /opt/conda/lib/python3.9/site-packages/transformers/trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics) 444 default_callbacks = DEFAULT_CALLBACKS + get_reporting_integration_callbacks(self.args.report_to) 445 callbacks = default_callbacks if callbacks is None else default_callbacks + callbacks --> 446 self.callback_handler = CallbackHandler( 447 callbacks, self.model, self.tokenizer, self.optimizer, self.lr_scheduler 448 ) /opt/conda/lib/python3.9/site-packages/transformers/trainer_callback.py in __init__(self, callbacks, model, tokenizer, optimizer, lr_scheduler) 288 self.callbacks = [] 289 for cb in callbacks: --> 290 self.add_callback(cb) 291 self.model = model 292 self.tokenizer = tokenizer /opt/conda/lib/python3.9/site-packages/transformers/trainer_callback.py in add_callback(self, callback) 305 306 def add_callback(self, callback): --> 307 cb = callback() if isinstance(callback, type) else callback 308 cb_class = callback if isinstance(callback, type) else callback.__class__ 309 if cb_class in [c.__class__ for c in self.callbacks]: /opt/conda/lib/python3.9/site-packages/transformers/integrations.py in __init__(self) 667 def __init__(self): 668 if not _has_comet: --> 669 raise RuntimeError("CometCallback requires comet-ml to be installed. Run `pip install comet-ml`.") 670 self._initialized = False 671 self._log_assets = False RuntimeError: CometCallback requires comet-ml to be installed. Run `pip install comet-ml`. ``` ### Expected behavior ```shell A Trainer is successfully created with the comet-ml callback enabled. ```
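One thing worth checking before blaming the install, sketched below. My reading of this version's `integrations.py` - treat it as an assumption to verify against your installed release - is that `_has_comet` is only set when `comet_ml` imports cleanly *and* an API key is already configured at the moment `transformers` is first imported. The key value is a placeholder:

```python
import os

# Configure Comet *before* importing transformers, since _has_comet is
# evaluated when the integrations module is first imported.
os.environ["COMET_API_KEY"] = "<your-comet-api-key>"  # hypothetical placeholder
os.environ["COMET_MODE"] = "ONLINE"  # COMET_MODE=DISABLED would also turn the callback off

from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", report_to="comet_ml")
# Instantiating Trainer(...) with these args should now attach CometCallback
# instead of raising the RuntimeError shown in the traceback above.
```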
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17691/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17690/comments
https://api.github.com/repos/huggingface/transformers/issues/17690/events
https://github.com/huggingface/transformers/issues/17690
1,269,532,792
I_kwDOCUB6oc5Lq4h4
17,690
GPT-2 based models generation breaks when adding new special tokens
{ "login": "NtaylorOX", "id": 49034323, "node_id": "MDQ6VXNlcjQ5MDM0MzIz", "avatar_url": "https://avatars.githubusercontent.com/u/49034323?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NtaylorOX", "html_url": "https://github.com/NtaylorOX", "followers_url": "https://api.github.com/users/NtaylorOX/followers", "following_url": "https://api.github.com/users/NtaylorOX/following{/other_user}", "gists_url": "https://api.github.com/users/NtaylorOX/gists{/gist_id}", "starred_url": "https://api.github.com/users/NtaylorOX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NtaylorOX/subscriptions", "organizations_url": "https://api.github.com/users/NtaylorOX/orgs", "repos_url": "https://api.github.com/users/NtaylorOX/repos", "events_url": "https://api.github.com/users/NtaylorOX/events{/privacy}", "received_events_url": "https://api.github.com/users/NtaylorOX/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @NtaylorOX,\r\n\r\nSorry I'm not following a 100% here what the problem is here. I can run all of the above samples without a problem and I don't see exactly what the bug is here. Could you maybe copy-paste a single code snippet here that shows the error and then explain what the output should be? :-) \r\n\r\nFrom what I understand, there is a problem when adding the `<pad_token>` to GPT2's tokenizer? Why is OPT used in the example here?", "Hi! Thanks for the reply @patrickvonplaten \r\n\r\nSo there was actually a bug in my issue! The output was meant to be fully of <pad> tokens or whatever additioanl special tokens had been added - but it seems markdown was showing/compiling these. I've updated comment now. \r\n\r\nSo what happens is that when you update the GPT2 tokenizer via add_special_tokens - the generate function ends up just predicting those new additional tokens repeatedly. You can see the output in full in the colab notebook. \r\n\r\nI believe my issue has the appropriate code snippets with output - although I may have made it a bit messy. \r\n\r\nThe point here is that the using the prompt:\r\n\"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: \"\r\n\r\nThe untouched gpt model generates: \r\n\"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: Spain. Not just the capital of a country; the capital of Europe. \"\r\n\r\nBut when you add any special token, such as <pad> token using add_special_tokens and resize the embeddings of the model. You get\r\n\"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: pad pad pad pad pad or whatever special token you added.\"\r\n\r\nI am 99% sure that adding special tokens should not be intefering with the ability of the model to generate in this way.\r\n\r\nThe reason for using OPT is because it essentially uses same tokenizer class and the problem doesn't occur for it. But it has occured for all gpt2 variants I've tried.\r\n\r\nHas this cleared it up at all?\r\n\r\nAgain, I think its clearer in the colab notebook\r\n", "Hey @NtaylorOX,\r\n\r\nSo I guess you're referring to this code snippet here:\r\n\r\n```python\r\n# Declare special tokens for padding and separating the context from the slogan:\r\nSPECIAL_TOKENS_DICT = {\r\n 'pad_token': '<pad>', \r\n}\r\n\r\n# # Add these special tokens to the vocabulary and resize model's embeddings:\r\ntokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n\r\n# Show the full list of special tokens:\r\nprint(tokenizer.special_tokens_map)\r\n# run same prompt\r\nprompt = \"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is\"\r\n\r\ninput_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\r\n\r\ngenerated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)\r\n\r\ntokenizer.batch_decode(generated_ids, skip_special_tokens=False)\r\n```\r\n\r\nwhich then generates the `<pad>` token as an output (but isn't this expected since you set ` skip_special_tokens=False`?\r\n\r\nSorry I'm still not 100% certain I understand what you mean. Could you please post a single code snippet that I can just copy-paste and run and that shows me an output **and** a message what the output should have been instead? 
\r\n\r\nThis would be super nice - sorry I'm a bit lost here", "Hi @patrickvonplaten,\r\n\r\nThanks for persisting with my confusing post :D. \r\n\r\nYes The following snippest is the main concern:\r\n```\r\n# Declare special tokens for padding and separating the context from the slogan:\r\nSPECIAL_TOKENS_DICT = {\r\n 'pad_token': '<pad>', \r\n}\r\n\r\n# # Add these special tokens to the vocabulary and resize model's embeddings:\r\ntokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n\r\n# Show the full list of special tokens:\r\nprint(tokenizer.special_tokens_map)\r\n# run same prompt\r\nprompt = \"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is\"\r\n\r\ninput_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\r\n\r\ngenerated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)\r\n\r\ntokenizer.batch_decode(generated_ids, skip_special_tokens=False)\r\n```\r\n\r\nThe expected output for gpt2-medium would be the same as the output **before** adding the special tokens, which would be:\r\n\r\n\"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: Basel. Capital of Liechtenstein is: Liechtenstein. Capital of Mexico is: Mexico City. Capital of South Africa is: Cape Town....\"\r\n\r\nSo nice and sensible output. To my understanding, and the way it works with non-gpt2 models, is that adding special tokens should not lead to a different output, but it does. \r\n\r\nAgain, after adding special tokens as desribed above, the output becomes:\r\n\r\n\"'Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is pad pad pad pad ...\".\r\n\r\nTo me this seems wrong? The output should be the same as it was originally, but its unable to produce anything other than pad tokens when generating now. And if you inspect the input ids etc, there is no pad token encoded by the tokenizer, nor is there any padding as its a single sample. \r\n\r\nHas this made anything clearer?\r\n\r\n", "Haha we'll get there @NtaylorOX :-) \r\n\r\nRight now when running [your last code snippet](https://github.com/huggingface/transformers/issues/17690#issuecomment-1162746071), I get:\r\n\r\n```\r\nNameError Traceback (most recent call last)\r\n<ipython-input-1-d3e787aeade6> in <module>\r\n 5\r\n 6 # # Add these special tokens to the vocabulary and resize model's embeddings:\r\n----> 7 tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)\r\n 8 model.resize_token_embeddings(len(tokenizer))\r\n 9\r\n\r\nNameError: name 'tokenizer' is not defined\r\n\r\n```\r\n\r\nCould you fix the code snippet so that I can run it in a Python shell to see the output expected by you?", "Now I'm confused. In your comment did you mean to put the code snippest after \"I get:\"? 
\r\n\r\nI read this as you would be posting the output from running the code?\r\n\r\nTo get what I believe to produce the \"incorrect output\", run this:\r\n```\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\nfrom transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoModelForMaskedLM, AutoTokenizer, set_seed\r\nimport os \r\nimport torch\r\nimport csv\r\n\r\nimport torch\r\nfrom torch.utils.data import Dataset\r\n\r\ncuda_device = torch.device('cuda:0')\r\n# now set the default gpu to this one\r\ntorch.cuda.set_device(cuda_device)\r\n\r\n# set model name and load in using transformers automodel/autotokenizer classes\r\n# use smallest gpt2 type model but can use others\r\nMODEL_NAME = 'distilgpt2' #'distilgpt2' 'gpt2-medium' 'gpt2\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(MODEL_NAME)\r\n\r\n# Declare special tokens for padding and separating the context from the slogan:\r\nSPECIAL_TOKENS_DICT = {\r\n 'pad_token': '<pad>', \r\n}\r\n\r\n# # Add these special tokens to the vocabulary and resize model's embeddings:\r\ntokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n\r\n# Show the full list of special tokens:\r\nprint(tokenizer.special_tokens_map)\r\n# run same prompt\r\nprompt = \"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is\"\r\n\r\ninput_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\r\n\r\ngenerated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)\r\n\r\ntokenizer.batch_decode(generated_ids, skip_special_tokens=False)\r\n\r\n```\r\n\r\nTo get what the output should be and normally is without special tokens:\r\n\r\n```\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\nfrom transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoModelForMaskedLM, AutoTokenizer, set_seed\r\nimport os \r\nimport torch\r\nimport csv\r\n\r\nimport torch\r\nfrom torch.utils.data import Dataset\r\n\r\ncuda_device = torch.device('cuda:0')\r\n# now set the default gpu to this one\r\ntorch.cuda.set_device(cuda_device)\r\n\r\n# set model name and load in using transformers automodel/autotokenizer classes\r\n# use smallest gpt2 type model but can use others\r\nMODEL_NAME = 'distilgpt2' #'distilgpt2' 'gpt2-medium' 'gpt2\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(MODEL_NAME)\r\n\r\n# run same prompt\r\nprompt = \"Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is\"\r\n\r\ninput_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\r\n\r\ngenerated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200)\r\n\r\ntokenizer.batch_decode(generated_ids, skip_special_tokens=False)\r\n```\r\n\r\n\r\nDoes this help?", "Hey @NtaylorOX,\r\n\r\nSorry just corrected my comment above. Ok I think I see what the problem is. You've added a token and now this token is predominantly generated. IMO this is not because it's called a `<pad>` token, it's simply due to the pretrained weights of `distilgpt2`.\r\n\r\nAlso see this issue: https://github.com/huggingface/transformers/issues/8472 ", "Hi @patrickvonplaten ,\r\n\r\nYes - I did not mean it was only affecting <pad> tokens. 
But it seems I did not find that previous issue which seems to address the problem.\r\n\r\nAlso, as I mentioned, it does not only affect distilgpt - it affects all GPT2 models I tried. But does not happen to OPT model which I was i found it odd?", "Also - on that other issue: https://github.com/huggingface/transformers/issues/8472 \r\n\r\nWhen using your nicely supplied possible fix: \r\n```\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')\r\ntokenizer.add_special_tokens(\r\n\t{'additional_special_tokens': ['<USER>', '<SYSTEM>']}\r\n)\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained('distilgpt2')\r\nmodel.resize_token_embeddings(len(tokenizer))\r\ninp_tok_ids = tokenizer.encode('I want a pepperoni pizza with mushroom')\r\ninp_tensor = torch.LongTensor(inp_tok_ids).unsqueeze(0)\r\nmodel.eval()\r\n\r\nmodel.lm_head.weight[-2, :] = (torch.zeros((768,)) - 10000.0) \r\nmodel.lm_head.weight[-1, :] = (torch.zeros((768,)) - 10000.0) \r\n\r\nwith torch.no_grad():\r\n\tfor i in range(10):\r\n\t\toutputs = model(inp_tensor)\r\n\t\tlogits = outputs[0][:, -1, :]\r\n\t\tprobs = F.softmax(logits, dim=-1)\r\n\t\tnext_token = torch.multinomial(probs, num_samples=1).squeeze(1)\r\n\t\tinp_tensor = torch.cat([inp_tensor, next_token.unsqueeze(-1)], dim=-1)\r\n\r\nprint(tokenizer.decode(inp_tensor[0]))\r\n```\r\n\r\nI am getting an error:\r\n\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n\r\nmodel.eval()\r\n----> 3 model.lm_head.weight[-2, :] = (torch.zeros((768,)) - 10000.0) \r\n 4 model.lm_head.weight[-1, :] = (torch.zeros((768,)) - 10000.0) \r\n 6 with torch.no_grad():\r\n\r\nRuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.\r\n```\r\n\r\nSorry if I shouldn't be crossing wires so much! Just wanted to highlight that this example doesn't seem to work, at least with my transformers version etc." ]
1,655
1,657
1,657
NONE
null
### System Info ```shell - `transformers` version: 4.19.4 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: (True) - Using distributed or parallel set-up in script?: (False) ``` ### Who can help? @patrickvonplaten ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The problem occurs when using GPT2 based models with the transformers library, and specifically when using the model.generate() after adding new special tokens, or <pad> tokens. I have put together a colab for this issue here: https://colab.research.google.com/gist/NtaylorOX/56c3578c1bfe6d6f5ec35ed0641c5e98/hf_gpt2_generate_bug.ipynb. Steps to reproduce: 1.) Load in libraries and instantiate a GPT2 based model ``` from transformers import GPT2Tokenizer, GPT2LMHeadModel from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoModelForMaskedLM, AutoTokenizer, set_seed import os import torch import csv import torch from torch.utils.data import Dataset cuda_device = torch.device('cuda:0') # now set the default gpu to this one torch.cuda.set_device(cuda_device) # set model name and load in using transformers automodel/autotokenizer classes # use smallest gpt2 type model but can use others MODEL_NAME = 'distilgpt2' #'distilgpt2' 'gpt2-medium' 'gpt2 tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) model = AutoModelForCausalLM.from_pretrained(MODEL_NAME) ``` 2.) Sanity check ``` # test its ability with few easy examples prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is" input_ids = tokenizer(prompt, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200) tokenizer.batch_decode(generated_ids, skip_special_tokens=False) ``` Outputs: Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. ['Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: Spain. Not just the capital of a country; the capital of Europe....] 3.) Add additional special tokens such as \<pad> ``` # Declare special tokens for padding and separating the context from the slogan: SPECIAL_TOKENS_DICT = { 'pad_token': '<pad>', } # # Add these special tokens to the vocabulary and resize model's embeddings: tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT) model.resize_token_embeddings(len(tokenizer)) # Show the full list of special tokens: print(tokenizer.special_tokens_map) ``` Outputs: {'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '<pad>'} 4.) Now run through the generate process again ``` # run same prompt prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is" input_ids = tokenizer(prompt, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200) tokenizer.batch_decode(generated_ids, skip_special_tokens=False) ``` output: 'Capital of England is: London. 
Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is pad pad pad' This <pad> token issue can be fixed by instead setting pad_token_id to eos_token_id via: ``` tokenizer.pad_token = tokenizer.eos_token ``` But with other special tokens the problem persists. Please see the colab notebook for more detailed examples. ### Expected behavior ```shell Adding new special tokens and subsequently resizing the model embeddings should leave a model performing in its original pre-trained state when given known tokens. For example, this problem does not occur with a similar autoregressive model, "facebook/opt". MODEL_NAME = "facebook/opt-350m" # reload model and tokenizer from its original pre-trained state model = AutoModelForCausalLM.from_pretrained(MODEL_NAME) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) # Declare special tokens for padding and separating the context from the slogan: SPECIAL_TOKENS_DICT = { 'additional_special_tokens': ['<context>', '<slogan>'] } # OPT already has a <pad> token so add other special tokens to the vocabulary and resize model's embeddings: tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT) model.resize_token_embeddings(len(tokenizer)) # run same single prompt as before prompt = "Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is" input_ids = tokenizer(prompt, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=200) tokenizer.batch_decode(generated_ids, skip_special_tokens=False) ``` output: "</s>Capital of England is: London. Capital of France is: Paris. Capital of Spain is: Madrid. Capital of Switzerland is: Switzerland. Capital of Italy is: Naples. Capital of France is: Rome. Capital of Spain is: Madrid. " This output is as it should be - but when using GPT2 based models, something goes wrong. If this is not a bug but expected behaviour based on something I've missed, please let me know! ```
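A hedged workaround sketch for readers of this thread - not the explanation reached in the comment thread above, which attributes the behaviour to the randomly initialised embedding of the new token: ban the untrained token id at generation time so sampling cannot collapse onto it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

tokenizer.add_special_tokens({"pad_token": "<pad>"})
model.resize_token_embeddings(len(tokenizer))

prompt = "Capital of England is: London. Capital of France is:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The fresh <pad> row of the embedding/LM-head matrix is untrained; forbid it
# during sampling until the model has been fine-tuned with it.
generated = model.generate(
    input_ids,
    do_sample=True,
    max_length=60,
    bad_words_ids=[[tokenizer.pad_token_id]],
)
print(tokenizer.decode(generated[0], skip_special_tokens=False))
```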
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17690/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17689/comments
https://api.github.com/repos/huggingface/transformers/issues/17689/events
https://github.com/huggingface/transformers/pull/17689
1,269,389,148
PR_kwDOCUB6oc45jz4_
17,689
Include a comment to reflect Amy's contributions
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sgugger forgot tagging you.", "Hey @sayakpaul, that's really admirable, thank you for that. I personally don't think it needs to be in a comment, as the code isn't where attribution lies, and would clutter the code with non-technical details. The attribution lives in git, and that's where we should do something if you want to add a mention of Amy for those lines of code.\r\n\r\nHow about doing something as simple as switching a if/else statement (or any other kind of no-op change) and having Amy as author/co-author?", "@LysandreJik see if it's okay now.", "Let's see if this works, thanks a lot!" ]
1,655
1,655
1,655
MEMBER
null
This PR adds a note to `src/transformers/modeling_tf_pytorch_utils.py` to reflect @amyeroberts's contributions suggested in https://github.com/huggingface/transformers/pull/17571. It was an oversight on my end that I forgot to mention this in the first place. I hope it's viewed as a mistake and not as a plagiarism attempt.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17689/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17689/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17689", "html_url": "https://github.com/huggingface/transformers/pull/17689", "diff_url": "https://github.com/huggingface/transformers/pull/17689.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17689.patch", "merged_at": 1655212539000 }
https://api.github.com/repos/huggingface/transformers/issues/17688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17688/comments
https://api.github.com/repos/huggingface/transformers/issues/17688/events
https://github.com/huggingface/transformers/issues/17688
1,269,286,813
I_kwDOCUB6oc5Lp8ed
17,688
clm example training script uses larger train/eval data than it should
{ "login": "mayankjobanputra", "id": 10355927, "node_id": "MDQ6VXNlcjEwMzU1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/10355927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayankjobanputra", "html_url": "https://github.com/mayankjobanputra", "followers_url": "https://api.github.com/users/mayankjobanputra/followers", "following_url": "https://api.github.com/users/mayankjobanputra/following{/other_user}", "gists_url": "https://api.github.com/users/mayankjobanputra/gists{/gist_id}", "starred_url": "https://api.github.com/users/mayankjobanputra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayankjobanputra/subscriptions", "organizations_url": "https://api.github.com/users/mayankjobanputra/orgs", "repos_url": "https://api.github.com/users/mayankjobanputra/repos", "events_url": "https://api.github.com/users/mayankjobanputra/events{/privacy}", "received_events_url": "https://api.github.com/users/mayankjobanputra/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "If you approve that this is a legitimate bug, then please let me know I will open the PR.", "This is completely intended and not a bug. Sample/example is meant as one processed training/evaluation example, which is what is done here.", "Yeah maybe for my purposes I needed it to filter the number of samples before grouping and I thought it would be same for others. \r\n\r\nThanks for the quick response. Closing the issue :)" ]
1,655
1,655
1,655
NONE
null
### System Info ```shell - `transformers` version: 4.12.3 - Platform: Linux-4.15.0-180-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.9.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: (True) - Using distributed or parallel set-up in script?: (True) ``` ### Who can help? @sgugger @patil-suraj ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Just run the `examples/pytorch/language-modeling/run_clm.py` script with the `max_train_samples` parameter set to 1; it'll still group the first 1024 tokens regardless. This bug is also present in the other framework variants (TensorFlow, Flax) and likewise affects the `max_eval_samples` parameter. ### Expected behavior ```shell The script should only use the max number of samples specified from the dataset. This happens because the grouping takes place before selecting the number of samples. ``` Grouping: (https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/examples/pytorch/language-modeling/run_clm.py#L447) Dataset selection: (https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/examples/pytorch/language-modeling/run_clm.py#L460)
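For reference, the reordering the reporter has in mind looks like the sketch below (the maintainers reply in the thread above that the shipped order is intentional). The tiny `group_texts` here mimics the script's chunking with a block size of 2 instead of 1024, so the snippet is self-contained:

```python
from datasets import Dataset

block_size = 2  # stand-in for the script's 1024

def group_texts(examples):
    # concatenate then chunk, like the grouping step in run_clm.py
    ids = sum(examples["input_ids"], [])
    total = (len(ids) // block_size) * block_size
    return {"input_ids": [ids[i : i + block_size] for i in range(0, total, block_size)]}

raw = Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]})

max_train_samples = 1
# select the sample budget *before* grouping ...
limited = raw.select(range(min(max_train_samples, len(raw))))
# ... so grouping only ever sees the selected examples
train_dataset = limited.map(group_texts, batched=True)
print(train_dataset["input_ids"])  # [[1, 2]] rather than blocks drawn from all rows
```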
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17688/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17687/comments
https://api.github.com/repos/huggingface/transformers/issues/17687/events
https://github.com/huggingface/transformers/issues/17687
1,269,183,855
I_kwDOCUB6oc5LpjVv
17,687
how can I use emformer checkpoint?
{ "login": "dykim3", "id": 100189969, "node_id": "U_kgDOBfjHEQ", "avatar_url": "https://avatars.githubusercontent.com/u/100189969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dykim3", "html_url": "https://github.com/dykim3", "followers_url": "https://api.github.com/users/dykim3/followers", "following_url": "https://api.github.com/users/dykim3/following{/other_user}", "gists_url": "https://api.github.com/users/dykim3/gists{/gist_id}", "starred_url": "https://api.github.com/users/dykim3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dykim3/subscriptions", "organizations_url": "https://api.github.com/users/dykim3/orgs", "repos_url": "https://api.github.com/users/dykim3/repos", "events_url": "https://api.github.com/users/dykim3/events{/privacy}", "received_events_url": "https://api.github.com/users/dykim3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nEmformer hasn't been added to the library yet.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,655
1,657
1,657
NONE
null
### Feature request ImportError Traceback (most recent call last) <ipython-input-1-7a40e4bd817f> in <module> ----> 1 from transformers import EmformerForRNNT 2 3 model = EmformerForRNNT.from_pretrained("anton-l/emformer-base-librispeech") ImportError: cannot import name 'EmformerForRNNT' from 'transformers' ( .local/lib/python3.8/site-packages/transformers/__init__.py) ### Motivation . ### Your contribution .
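While waiting, a possible stopgap is torchaudio's Emformer implementation; the constructor and forward signature below follow my reading of the torchaudio (>= 0.11) API and should be verified against your installed version:

```python
import torch
from torchaudio.models import Emformer  # assumption: torchaudio >= 0.11 is installed

model = Emformer(
    input_dim=80,       # e.g. 80-dim log-mel features
    num_heads=4,
    ffn_dim=1024,
    num_layers=4,
    segment_length=16,
)
inputs = torch.rand(2, 48, 80)    # (batch, frames, features)
lengths = torch.tensor([48, 48])  # valid frames per utterance
output, output_lengths = model(inputs, lengths)
print(output.shape)  # expected: torch.Size([2, 48, 80])
```

Note this is only the encoder; the RNN-T checkpoint mentioned above would also need a predictor and joiner around it.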
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17687/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17686/comments
https://api.github.com/repos/huggingface/transformers/issues/17686/events
https://github.com/huggingface/transformers/pull/17686
1,268,897,208
PR_kwDOCUB6oc45iK49
17,686
Save huggingface checkpoint as artifact in mlflow callback
{ "login": "swethmandava", "id": 17828952, "node_id": "MDQ6VXNlcjE3ODI4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/17828952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/swethmandava", "html_url": "https://github.com/swethmandava", "followers_url": "https://api.github.com/users/swethmandava/followers", "following_url": "https://api.github.com/users/swethmandava/following{/other_user}", "gists_url": "https://api.github.com/users/swethmandava/gists{/gist_id}", "starred_url": "https://api.github.com/users/swethmandava/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/swethmandava/subscriptions", "organizations_url": "https://api.github.com/users/swethmandava/orgs", "repos_url": "https://api.github.com/users/swethmandava/repos", "events_url": "https://api.github.com/users/swethmandava/events{/privacy}", "received_events_url": "https://api.github.com/users/swethmandava/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks again!", "Hi there! @swethmandava Thanks for adding this functionality. Quick question: because the artifact logging was removed, wouldn't the intermediate checkpoints not be tracked? Only the latest checkpoint would be logged as a model, right?", "> Hi there! @swethmandava Thanks for adding this functionality. Quick question: because the artifact logging was removed, wouldn't the intermediate checkpoints not be tracked? Only the latest checkpoint would be logged as a model, right?\r\n\r\nIt should now save all the checkpoints. every time on_save is called" ]
1,655
1,660
1,655
CONTRIBUTOR
null
# What does this PR do? 1. Store model checkpoints, including the tokenizers needed to reload the model from MLflow, as artifacts 2. Allow the model to be registerable (it is not if `log_artifacts` is used to log the model) Fixes # (issue) https://github.com/huggingface/transformers/issues/15495 https://github.com/huggingface/transformers/issues/10881 https://github.com/huggingface/transformers/issues/7698 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
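The shape of the change, as a standalone sketch rather than the merged diff (the class and its wiring are illustrative; the PR itself modifies the built-in MLflow callback):

```python
import os

import mlflow
from transformers import TrainerCallback


class CheckpointArtifactCallback(TrainerCallback):
    """Illustrative sketch: upload each saved checkpoint to the active MLflow run."""

    def on_save(self, args, state, control, **kwargs):
        ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        if os.path.isdir(ckpt_dir):
            # The checkpoint directory holds weights, config and tokenizer files,
            # i.e. everything needed to reload the model from MLflow later.
            mlflow.log_artifacts(ckpt_dir, artifact_path=os.path.basename(ckpt_dir))
```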
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17686/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17686", "html_url": "https://github.com/huggingface/transformers/pull/17686", "diff_url": "https://github.com/huggingface/transformers/pull/17686.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17686.patch", "merged_at": 1655489644000 }
https://api.github.com/repos/huggingface/transformers/issues/17685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17685/comments
https://api.github.com/repos/huggingface/transformers/issues/17685/events
https://github.com/huggingface/transformers/issues/17685
1,268,714,421
I_kwDOCUB6oc5Lnwu1
17,685
Disregard
{ "login": "schlopp96", "id": 71921821, "node_id": "MDQ6VXNlcjcxOTIxODIx", "avatar_url": "https://avatars.githubusercontent.com/u/71921821?v=4", "gravatar_id": "", "url": "https://api.github.com/users/schlopp96", "html_url": "https://github.com/schlopp96", "followers_url": "https://api.github.com/users/schlopp96/followers", "following_url": "https://api.github.com/users/schlopp96/following{/other_user}", "gists_url": "https://api.github.com/users/schlopp96/gists{/gist_id}", "starred_url": "https://api.github.com/users/schlopp96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schlopp96/subscriptions", "organizations_url": "https://api.github.com/users/schlopp96/orgs", "repos_url": "https://api.github.com/users/schlopp96/repos", "events_url": "https://api.github.com/users/schlopp96/events{/privacy}", "received_events_url": "https://api.github.com/users/schlopp96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,655
1,655
1,655
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17685/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17684/comments
https://api.github.com/repos/huggingface/transformers/issues/17684/events
https://github.com/huggingface/transformers/pull/17684
1,268,574,114
PR_kwDOCUB6oc45hKkv
17,684
[Pipeline] avoid importing tensorflow if not used
{ "login": "NouamaneTazi", "id": 29777165, "node_id": "MDQ6VXNlcjI5Nzc3MTY1", "avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NouamaneTazi", "html_url": "https://github.com/NouamaneTazi", "followers_url": "https://api.github.com/users/NouamaneTazi/followers", "following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}", "gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}", "starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions", "organizations_url": "https://api.github.com/users/NouamaneTazi/orgs", "repos_url": "https://api.github.com/users/NouamaneTazi/repos", "events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}", "received_events_url": "https://api.github.com/users/NouamaneTazi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Was this issue resolved in another PR already? @sgugger ", "The fact that TensorFlow takes all GPU memory has been fixed in #18044" ]
1,655
1,657
1,657
MEMBER
null
# What does this PR do? Avoids loading unnecessary modules in `pipelines.base.infer_framework_load_model()`, which could create some unexpected behaviour like TensorFlow allocating all GPU memory. @sgugger @LysandreJik Before: ```python from transformers import pipeline pipeline("text-classification") # This would try importing `TFDistilBertForSequenceClassification` if both tensorflow and pytorch # are available, and tensorflow would allocate all GPU memory, even if we expect to use # the pytorch model ``` After: ```python from transformers import pipeline pipeline("text-classification") # Only `DistilBertForSequenceClassification` is imported, and tensorflow is not called ```
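In the meantime, a user-side mitigation (assuming the PyTorch weights are the ones wanted) is to pin the framework so the TensorFlow class is never resolved; whether this avoids every TensorFlow import path is worth verifying, but it skips the TF model instantiation described above:

```python
from transformers import pipeline

# framework="pt" restricts model resolution to the PyTorch class, so
# TFDistilBertForSequenceClassification is never tried.
classifier = pipeline("text-classification", framework="pt")
print(classifier("This library keeps getting better."))
```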
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17684/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/17684/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17684", "html_url": "https://github.com/huggingface/transformers/pull/17684", "diff_url": "https://github.com/huggingface/transformers/pull/17684.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17684.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17683/comments
https://api.github.com/repos/huggingface/transformers/issues/17683/events
https://github.com/huggingface/transformers/pull/17683
1,268,480,686
PR_kwDOCUB6oc45g491
17,683
Update eli5_app.py
{ "login": "cyai", "id": 83634399, "node_id": "MDQ6VXNlcjgzNjM0Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/83634399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyai", "html_url": "https://github.com/cyai", "followers_url": "https://api.github.com/users/cyai/followers", "following_url": "https://api.github.com/users/cyai/following{/other_user}", "gists_url": "https://api.github.com/users/cyai/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyai/subscriptions", "organizations_url": "https://api.github.com/users/cyai/orgs", "repos_url": "https://api.github.com/users/cyai/repos", "events_url": "https://api.github.com/users/cyai/events{/privacy}", "received_events_url": "https://api.github.com/users/cyai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17683). All of your documentation changes will be reflected on that endpoint." ]
1,655
1,655
1,655
NONE
null
# What does this PR do? Fixes # (issue) Updated the string format style for a cleaner understanding of the code. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
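For context, the kind of change the summary describes, as an illustrative example rather than a line from the diff:

```python
question = "why is the sky blue"

# before: positional str.format makes the template harder to scan
header = "--- ELI5: {} ---".format(question)

# after: the equivalent f-string keeps the variable inline
header = f"--- ELI5: {question} ---"
print(header)
```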
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17683/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17683", "html_url": "https://github.com/huggingface/transformers/pull/17683", "diff_url": "https://github.com/huggingface/transformers/pull/17683.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17683.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17682/comments
https://api.github.com/repos/huggingface/transformers/issues/17682/events
https://github.com/huggingface/transformers/issues/17682
1,268,414,529
I_kwDOCUB6oc5LmnhB
17,682
Truncation + max_length not working for GPT2TokenizerFast
{ "login": "teetone", "id": 16793796, "node_id": "MDQ6VXNlcjE2NzkzNzk2", "avatar_url": "https://avatars.githubusercontent.com/u/16793796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/teetone", "html_url": "https://github.com/teetone", "followers_url": "https://api.github.com/users/teetone/followers", "following_url": "https://api.github.com/users/teetone/following{/other_user}", "gists_url": "https://api.github.com/users/teetone/gists{/gist_id}", "starred_url": "https://api.github.com/users/teetone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/teetone/subscriptions", "organizations_url": "https://api.github.com/users/teetone/orgs", "repos_url": "https://api.github.com/users/teetone/repos", "events_url": "https://api.github.com/users/teetone/events{/privacy}", "received_events_url": "https://api.github.com/users/teetone/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also gently pinging @SaulLu @mishig25 @thomasw21 here", "Hi @teetone,\r\n\r\nThank you for your detailed outcome. While investigating, I noticed that the portion of text that produces this behaviour is `\"the simplicity of their 'studio' is the reason why\"`\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\n\r\nDESIRED_TOKEN_LENGTH = 1949\r\nTEXT=\"their 'studio'\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\nencoding_1 = tokenizer.encode(TEXT, truncation=True, max_length=DESIRED_TOKEN_LENGTH)\r\nprint(f\"Encoding 1st time is of length {len(encoding_1)} and corresponds to {tokenizer.convert_ids_to_tokens(encoding_1)}\")\r\n\r\ndecoded_encoding_1 = tokenizer.decode(encoding_1)\r\n# \r\nencoding_2 = tokenizer.encode(decoded_encoding_1)\r\nprint(f\"Encoding 2nd time is of length {len(encoding_2)} and corresponds to {tokenizer.convert_ids_to_tokens(encoding_2)}\")\r\nprint(f\"Decoded sequence of ids is \\\"{decoded_encoding_1}\\\"\")\r\n# Encoding 1st time is of length 5 and corresponds to ['their', \"Ġ'\", 'stud', 'io', \"'\"]\r\n# Encoding 2nd time is of length 6 and corresponds to ['their', \"'s\", 't', 'ud', 'io', \"'\"]\r\n# Decoded sequence of ids is \"their'studio'\"\r\n```\r\nThe reason is that by default the `clean_up_tokenization_spaces` argument is set to true and has the effect of removing the space between `their` and `'studio'`. By specifying this argument to False you get:\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\n\r\nDESIRED_TOKEN_LENGTH = 1949\r\nTEXT=\"their 'studio'\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\nencoding_1 = tokenizer.encode(TEXT, truncation=True, max_length=DESIRED_TOKEN_LENGTH)\r\nprint(f\"Encoding 1st time is of length {len(encoding_1)} and corresponds to {tokenizer.convert_ids_to_tokens(encoding_1)}\")\r\n\r\ndecoded_encoding_1 = tokenizer.decode(encoding_1, clean_up_tokenization_spaces=False)\r\nencoding_2 = tokenizer.encode(decoded_encoding_1)\r\nprint(f\"Encoding 2nd time is of length {len(encoding_2)} and corresponds to {tokenizer.convert_ids_to_tokens(encoding_2)}\")\r\nprint(f\"Decoded sequence of ids \\\"{decoded_encoding_1}\\\"\")\r\n# Encoding 1st time is of length 5 and corresponds to ['their', \"Ġ'\", 'stud', 'io', \"'\"]\r\n# Encoding 2nd time is of length 5 and corresponds to ['their', \"Ġ'\", 'stud', 'io', \"'\"]\r\n# Decoded sequence of ids \"their 'studio'\"\r\n```\r\nI hope this answers your problem!\r\n", "> \r\n\r\n@SaulLu, thanks for your help! I'm using this `encode` + `decode` logic to truncate text to fit in a given context window. Is it safe to say that if I want to truncate while preserving the original text, I should pass in `clean_up_tokenization_spaces=False` when calling `decode`?", "Generally it is not promised that 1-1 matching is possible. But in the particular case of GPT-2 (without added tokens or special tokens present in the sentence to be tokenized) I think it should work with `clean_up_tokenization_spaces=False` in the `decode` method!", "> Generally it is not promised that 1-1 matching is possible. But in the particular case of GPT-2 (without added tokens or special tokens present in the sentence to be tokenized) I think it should work with `clean_up_tokenization_spaces=False` in the `decode` method!\r\n\r\nThat makes sense. Thank you again! I will close this issue as resolved." ]
1,654
1,655
1,655
NONE
null
### System Info ```shell - `transformers` version: 4.13.0 - Platform: Linux-4.15.0-29-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyTorch version (GPU?): 1.10.2+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @patil-suraj, @patrickvonplaten, @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run the following Python code: ```Python from typing import List from transformers import GPT2TokenizerFast DESIRED_TOKEN_LENGTH: int = 1949 TEXT = 'Title: Hilltop Hoods\n\nBackground: Hilltop Hoods are an Australian hip hop group that formed in 1994 in Blackwood, Adelaide, South Australia. The group was founded by Suffa (Matthew David Lambert) and MC Pressure (Daniel Howe Smith), who were joined by DJ Debris (Barry John M. Francis) after fellow founder, DJ Next (Ben John Hare), left in 1999. The group released its first extended play, Back\n\nSection: 2007-2009: The Hard Road Restrung and State of the Art\nPassage: Two of Hilltop Hoods\' founders first met in 1987 when MC Suffa (aka Matthew David Lambert) and MC Pressure (Daniel Howe Smith) attended Blackwood High School in Eden Hills - a suburb of Adelaide. In 1991 they joined up with DJ Next (Ben John Hare) through a mutual friend and formed an Australian hip hop group. Their name was supplied by fellow local MC Flak (from Cross Bred Mongrels) - the suburb of Blackwood is known by locals as the Hilltop. The band\'s influences include American hip hop artists: Notorious B.I.G., KRS-One, Gang Starr, Wu-Tang Clan and Public Enemy. At live shows Next was the group\'s DJ, for recording he contributed audio engineering and all the scratching/turntablism on their early works. He regularly competed in the local DMC World DJ Championships (DMC) tournaments, winning the South Australian DMC championships multiple times. Hilltop Hoods recorded a demo, Highlanders, which was released on cassette tape only. As well as Pressure and Suffa on vocals, the group included MC Summit aka DJ Sum-1, but he did not appear on later Hilltop Hoods work. The group\'s first official release, in 1997, was a vinyl-only, seven-track extended play, Back Once Again. Production was handled by DJ Debris (Barry John M Francis), turntablism and audio engineering by Next, vocals by Pressure and Suffa. The third track, "Shades of Grey", features Debris with a verse, and was co-written by Francis, Hare, Lambert and Smith. Fifth track, "Mankind Must Suffa" also features a guest verse from Quromystix (aka Quro, Andrew Michael Bradley) - a member of Finger Lickin\' Good and later the Fuglemen. "Mankind Must Suffa" is credited to Lambert, Smith, Francis and Bradley. Back Once Again is out of print and unavailable for retail purchase. The group\'s debut studio album, A Matter of Time, was released in 1999 on CD only. As with Back Once Again, it is now unavailable for retail purchase. All scratching/turntablism is performed by Next, a track, "Let Me Show You", has no vocals - solely showcasing his turntable skills. American MC Bukue One (Tion Torrence) appears for a guest verse on "Deaf Can Hear". 
The track is credited to Lambert, Smith, Francis, Hare and Torrence. The album was released independently but with financial assistance from Arts SA - the band were inspired, in 2005, to set up their own Hilltop Hoods Initiative, to help local artists. After the album appeared, Next left the group and moved to Melbourne. In 2004 he moved to London. In 1999 Debris, who was also a member of the Cross Bred Mongrels, replaced Next and became the Hilltop Hoods\' full-time DJ. Hilltop Hoods founded the Certified Wise Crew - a hip hop collaborative - with local groups Terra Firma, Cross Bred Mongrels and After Hours. Certified Wise Crew has since expanded to include MCs Trauma, Blockade, Kolaps, Flea, with Vents and Funkoars joining in later years. Hilltop Hoods received two nominations for the Hip Hop Act of the Year Award at the Australian Dance Music Awards and again at the 3D World Music Awards in 2001 and 2002. In 2001 the group\'s second album, Left Foot, Right Foot, was released with Lambert, Francis and M. Veraquth producing. On 22 September 2003, Hilltop Hoods released their third album, The Calling, which became a commercial breakthrough. In an interview after the release of their fourth album, Suffa revealed that The Calling was recorded on his mother\'s computer and the simplicity of their \'studio\' is the reason why some of the music on the album is in monaural (\'mono\') sound. The Calling entered the ARIA Albums Chart in March 2004 and reached No. 53 before exiting the top 100 in September of the same year. By December 2006 it was certified platinum for shipment of 70,000 units, becoming the first Australian hip hop album to achieve platinum status. In March 2012, it re-entered the chart and peaked at No. 50 - eight-and-a-half years after its first release. It featured two singles, "The Nosebleed Section" and "Dumb Enough", which were listed in the Triple J Hottest 100, 2003. "The Nosebleed Section" was ranked No. 17 in the Triple J Hottest 100 of All Time in 2009. Hilltop Hoods\' chart and commercial success was a turning point in the Australian Hip Hop scene because it demonstrated widespread support for the genre that reached beyond an underground fan base. On 1 April 2006, the group followed with their fourth album, The Hard Road, which peaked at number one. It was the first Australian hip hop album to do so. It was certified gold within a week of being released. Its lead single, "Clown Prince", reached the top 30 on the related ARIA Singles Chart. It featured guest verses from New York rapper, Omni, and British MCs, Mystro and Braintax. The Hilltop Hoods received the inaugural Australian Independent Record (AIR) Award for Independent Artist of the Year and Best Performing Independent Album for The Hard Road in 2006. The track, "The Blue Blooded", is a collaboration with Australian MCs: Funkoars, Hau from Koolism, Mortar, Vents, Drapht, Muph & Plutonic, Pegz and Robby Balboa. On 27 April of the same year, Hilltop Hoods performed at the Bass in the Grass music festival in Darwin alongside fellow hip hop group, The Herd. That same day they issued a second single, the title track from the album. Its video includes fellow members from the Certified Wise Crew - Cross Bred Mongrels, Terrafirma and Funkoars. Following the success of The Hard Road Tour in early 2006, the Hilltop Hoods began their second national tour for the year, The Stopping All Stations Tour, which visited more regional areas of Australia as well as the capital cities. They were supported by Koolism and Mystro. 
Late that year, Hilltop Hoods released their third single from the album, "What a Great Night". The video shows the group at a club with camera shots panning up and down to reveal a new location. It used special effects and is one of the most expensive video clips for an Australian hip hop group, mirroring the group\'s rise in success and popularity. Also late in the year the band won the J Award for best album of the year from Triple J. They performed the Homebake Festival and Falls Festival before the end of the year. The Hard Road received the AIR Award for Best Independent Hip Hop/Urban Release in 2007. On 12 May 2007, Hilltop Hoods released their next album The Hard Road: Restrung which is a remix of their previous studio album, The Hard Road, featuring the Adelaide Symphony Orchestra and Okwerdz. It peaked at No. 8 on the ARIA Albums Chart. Like its predecessor The Hard Road, it took out "Best Urban Release" at the ARIA Awards of 2007, with the group going back-to-back in the category. The lead single from the album "Recapturing the Vibe Restrung", its video clip was on high rotation on rage & jtv. That year the group performed at the Southbound Festival (WA), The Great Escape at Newington Armory over Easter, and embarked on a UK tour with a Sydney-based string quartet. They finished the year by headlining the Pyramid Rock Festival on Victoria\'s Phillip Island over New Year\'s Eve 2007. In 2008 they performed at the Big Day Out festivals, at Glastonbury Festival and Islington Academy in London. In December their DVD, The City of Light, was released and was nominated as \'Best Music DVD\' at the 2008 ARIA Awards. Hilltop Hoods left their longtime home of Obese Records to start their own label, Golden Era Records, to release their future material. In November 2008 Pressure announced on Triple J\'s breakfast program that the next studio album, State of the Art, would be recorded with session musicians: "We realised with this one after doing Restrung and having an orchestra that we were a bit less limited. So we\'re going to have some session musos come in on this one and stuff like that". The album was released on 12 June, with the lead single, "Chase That Feeling", issued as a digital download on 8 May, and featured a return guest appearance by a quartet from the Adelaide Symphony Orchestra. The album debuted at number one on the albums chart while "Chase That Feeling" peaked at No. 8 on the related singles chart. By 2010 State of the Art was certified 2x platinum for shipment of 140,000 units. In early 2009 the Hilltop Hoods performed at the Groovin the Moo festival in Townsville, Maitland and Bendigo. They also performed at Triple J\'s One Night Stand in Sale, Victoria on 30 May, and at Fat as Butter festival in Newcastle on 25 October where they played several of the tracks from the album. To promote its release the band started a national tour starting on 18 July and performed at most major cities including state capitals. The second national tour that year followed on 11 November with support provided by Vents.\n\nQuestion: What is significant about this time?\nAnswer: On 1 April 2006, the group followed with their fourth album, The Hard Road,\n\nQuestion: How did this album do?\nAnswer: which peaked at number one. 
It was the first Australian hip hop album to do so.\n\nQuestion: Are there any other interesting aspects about this article?\nAnswer:' tokenizer = GPT2TokenizerFast.from_pretrained("gpt2") tokens: List[int] = tokenizer.encode(TEXT, truncation=True, max_length=DESIRED_TOKEN_LENGTH) assert len(tokens) == DESIRED_TOKEN_LENGTH # 1949, no problem here result: str = tokenizer.decode(tokens) print(len(tokenizer.tokenize(result))) # 1950 when it should be 1949 print(len(tokenizer.encode(result))) # 1950 when it should be 1949 assert len(tokenizer.tokenize(result)) == DESIRED_TOKEN_LENGTH # Fails here! ``` It should fail at the last assertion: ``` Traceback (most recent call last): File "/Users/tonyhlee/research/mercury/benchmarking/gpt2_tokenizer_bug.py", line 16, in <module> assert len(tokenizer.tokenize(result)) == DESIRED_TOKEN_LENGTH # Fails here! AssertionError ``` ### Expected behavior Since I passed in `truncation=True` and `max_length=1949` to `encode`, I would expect the resulting text to be 1949 tokens long after decoding. It's 1950 tokens long instead.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17682/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17681/comments
https://api.github.com/repos/huggingface/transformers/issues/17681/events
https://github.com/huggingface/transformers/issues/17681
1,268,363,487
I_kwDOCUB6oc5LmbDf
17,681
trainer fails when fsdp = full_shard auto_wrap
{ "login": "chijames", "id": 11708477, "node_id": "MDQ6VXNlcjExNzA4NDc3", "avatar_url": "https://avatars.githubusercontent.com/u/11708477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chijames", "html_url": "https://github.com/chijames", "followers_url": "https://api.github.com/users/chijames/followers", "following_url": "https://api.github.com/users/chijames/following{/other_user}", "gists_url": "https://api.github.com/users/chijames/gists{/gist_id}", "starred_url": "https://api.github.com/users/chijames/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chijames/subscriptions", "organizations_url": "https://api.github.com/users/chijames/orgs", "repos_url": "https://api.github.com/users/chijames/repos", "events_url": "https://api.github.com/users/chijames/events{/privacy}", "received_events_url": "https://api.github.com/users/chijames/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "cc @pacman100 ", "@pacman100 Thanks for looking into this! Would love to provide any additional information!", "Hello @chijames, thanks for letting us know that `default_auto_wrap_policy` is no more, will be fixing it shortly. Regarding the subsequent error, it is unrelated to the integration and I have opened issue in PyTorch repo for the same and mentioned this issue in that issue as seen above. \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Hello @chijames, thanks for letting us know that `default_auto_wrap_policy` is no more, will be fixing it shortly. Regarding the subsequent error, it is unrelated to the integration and I have opened issue in PyTorch repo for the same and mentioned this issue in that issue as seen above.\r\n\r\nOne way to get the **default_auto_wrap_policy** is to get Nvidia's docker nvcr.io/nvidia/pytorch:**22.05-py3**\r\n\r\nThe definition of **default_auto_wrap_policy** is in /opt/pytorch/pytorch/torch/distributed/fsdp/wrap.py:31\r\n\r\n\r\n" ]
1,654
1,665
1,658
NONE
null
### System Info ```shell - `transformers` version: 4.20.0.dev0 - Platform: Linux-5.4.0-1072-aws-x86_64-with-debian-buster-sid - Python version: 3.7.12 - Huggingface_hub version: 0.7.0 - PyTorch version (GPU?): 1.13.0.dev20220610 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ``` ### Who can help? @sgugger @patric ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```bash torchrun --nproc_per_node=4 \ run_summarization.py \ --model_name_or_path google/pegasus-large \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=1 \ --per_device_eval_batch_size=1 \ --overwrite_output_dir \ --predict_with_generate \ --fsdp "full_shard auto_wrap" \ --fsdp_min_num_params 20000 ``` Running the above script will generate the following error: `File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1242, in _wrap_model from torch.distributed.fsdp.wrap import default_auto_wrap_policy ImportError: cannot import name 'default_auto_wrap_policy' from 'torch.distributed.fsdp.wrap'` A little bit digging into the torch/distributed/fsdp/wrap.py shows default_auto_wrap_policy is no longer in the file. I tried to change it to size_based_auto_wrap_policy as it seems to have the same function signature. Unfortunately, another error pops up: `File "run_summarization.py", line 734, in <module> main() File "run_summarization.py", line 653, in main ignore_keys_for_eval=ignore_keys_for_eval, File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1610, in _inner_training_loop train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1372, in train tr_loss_step = self.training_step(model, inputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 2301, in training_step ignore_keys_for_eval=ignore_keys_for_eval, File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1610, in _inner_training_loop loss = self.compute_loss(model, inputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 2333, in compute_loss tr_loss_step = self.training_step(model, inputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 2301, in training_step outputs = model(**inputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl loss = self.compute_loss(model, inputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 2333, in compute_loss return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2303, in forward outputs = model(**inputs) File 
"/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl outputs = self._fsdp_wrapped_module(*args, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2303, in forward return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward return self.module(*inputs, **kwinputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl outputs = self._fsdp_wrapped_module(*args, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1414, in forward return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward return self.module(*inputs, **kwinputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return_dict=return_dict, File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1414, in forward return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward return self.module(*inputs, **kwinputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return_dict=return_dict, File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1414, in forward return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1245, in forward return_dict=return_dict, File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return_dict=return_dict, File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs)return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1245, in forward File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2303, in forward return_dict=return_dict, File 
"/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl outputs = self._fsdp_wrapped_module(*args, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 2303, in forward return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward return self.module(*inputs, **kwinputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl outputs = self._fsdp_wrapped_module(*args, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 761, in forward return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/distributed/fsdp/flatten_params_wrapper.py", line 476, in forward inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return self.module(*inputs, **kwinputs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/functional.py", line 2156, in embedding return forward_call(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 761, in forward inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale File "/home/ubuntu/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Output 0 of ViewBackward0 is a view and its base or another view of its base has been modified inplace. This view is the output of a function that retur ns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.` This time I have no idea how the problem should be solved. Any help is greatly appreciated! Thanks. ### Expected behavior ```shell The script should run without errors when fsdp is enabled. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17681/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17680/comments
https://api.github.com/repos/huggingface/transformers/issues/17680/events
https://github.com/huggingface/transformers/pull/17680
1,268,361,281
PR_kwDOCUB6oc45gh8u
17,680
Save huggingface checkpoint as artifact in mlflow callback
{ "login": "swethmandava", "id": 17828952, "node_id": "MDQ6VXNlcjE3ODI4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/17828952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/swethmandava", "html_url": "https://github.com/swethmandava", "followers_url": "https://api.github.com/users/swethmandava/followers", "following_url": "https://api.github.com/users/swethmandava/following{/other_user}", "gists_url": "https://api.github.com/users/swethmandava/gists{/gist_id}", "starred_url": "https://api.github.com/users/swethmandava/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/swethmandava/subscriptions", "organizations_url": "https://api.github.com/users/swethmandava/orgs", "repos_url": "https://api.github.com/users/swethmandava/repos", "events_url": "https://api.github.com/users/swethmandava/events{/privacy}", "received_events_url": "https://api.github.com/users/swethmandava/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Opened #17686 from branch. closing this", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? 1. Store model checkpoints, including the tokenizer files needed to reload the model from mlflow, as artifacts. 2. Make the logged model registerable (it is not when log_artifacts is used to log the model). Fixes # (issue) https://github.com/huggingface/transformers/issues/15495 https://github.com/huggingface/transformers/issues/10881 https://github.com/huggingface/transformers/issues/7698 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
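A minimal sketch of the idea using the public `TrainerCallback` hooks; `MLflowCheckpointCallback` is a hypothetical name and the artifact layout is an assumption, not the implementation in this PR:

```python
import os

import mlflow
from transformers import TrainerCallback


class MLflowCheckpointCallback(TrainerCallback):
    """Hypothetical callback: log each saved checkpoint directory (model
    weights, config, and tokenizer files) to mlflow as run artifacts."""

    def on_save(self, args, state, control, **kwargs):
        # Trainer saves checkpoints as <output_dir>/checkpoint-<global_step>.
        checkpoint_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        if state.is_world_process_zero and os.path.isdir(checkpoint_dir):
            mlflow.log_artifacts(checkpoint_dir, artifact_path=os.path.basename(checkpoint_dir))
        return control
```

Such a callback would then be registered via `Trainer(..., callbacks=[MLflowCheckpointCallback()])`.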
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17680/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17680", "html_url": "https://github.com/huggingface/transformers/pull/17680", "diff_url": "https://github.com/huggingface/transformers/pull/17680.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17680.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17679/comments
https://api.github.com/repos/huggingface/transformers/issues/17679/events
https://github.com/huggingface/transformers/pull/17679
1,268,341,643
PR_kwDOCUB6oc45geNl
17,679
Fix typo in adding_a_new_model README
{ "login": "ayushtues", "id": 43698245, "node_id": "MDQ6VXNlcjQzNjk4MjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43698245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushtues", "html_url": "https://github.com/ayushtues", "followers_url": "https://api.github.com/users/ayushtues/followers", "following_url": "https://api.github.com/users/ayushtues/following{/other_user}", "gists_url": "https://api.github.com/users/ayushtues/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayushtues/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushtues/subscriptions", "organizations_url": "https://api.github.com/users/ayushtues/orgs", "repos_url": "https://api.github.com/users/ayushtues/repos", "events_url": "https://api.github.com/users/ayushtues/events{/privacy}", "received_events_url": "https://api.github.com/users/ayushtues/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,654
1,655
1,655
CONTRIBUTOR
null
# What does this PR do? Fixes #17678 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Documentation: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17679", "html_url": "https://github.com/huggingface/transformers/pull/17679", "diff_url": "https://github.com/huggingface/transformers/pull/17679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17679.patch", "merged_at": 1655104928000 }
https://api.github.com/repos/huggingface/transformers/issues/17678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17678/comments
https://api.github.com/repos/huggingface/transformers/issues/17678/events
https://github.com/huggingface/transformers/issues/17678
1,268,341,097
I_kwDOCUB6oc5LmVlp
17,678
Typo in adding_a_new_model README
{ "login": "ayushtues", "id": 43698245, "node_id": "MDQ6VXNlcjQzNjk4MjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43698245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushtues", "html_url": "https://github.com/ayushtues", "followers_url": "https://api.github.com/users/ayushtues/followers", "following_url": "https://api.github.com/users/ayushtues/following{/other_user}", "gists_url": "https://api.github.com/users/ayushtues/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayushtues/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushtues/subscriptions", "organizations_url": "https://api.github.com/users/ayushtues/orgs", "repos_url": "https://api.github.com/users/ayushtues/repos", "events_url": "https://api.github.com/users/ayushtues/events{/privacy}", "received_events_url": "https://api.github.com/users/ayushtues/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sent a PR to fix this [here](https://github.com/huggingface/transformers/pull/17679)" ]
1,654
1,655
1,655
CONTRIBUTOR
null
There's a typo in the adding_a_new_model README [file](https://github.com/huggingface/transformers/blob/main/templates/adding_a_new_model/README.md): it should be `make fix-copies`, not `maxke fix-copies`, here: ![image](https://user-images.githubusercontent.com/43698245/173199900-0489726e-3373-4567-ac41-96240c41b230.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17678/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17677/comments
https://api.github.com/repos/huggingface/transformers/issues/17677/events
https://github.com/huggingface/transformers/pull/17677
1,268,335,172
PR_kwDOCUB6oc45gc9M
17,677
Add missing tokenizer tests - Longformer
{ "login": "tgadeliya", "id": 32731151, "node_id": "MDQ6VXNlcjMyNzMxMTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32731151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tgadeliya", "html_url": "https://github.com/tgadeliya", "followers_url": "https://api.github.com/users/tgadeliya/followers", "following_url": "https://api.github.com/users/tgadeliya/following{/other_user}", "gists_url": "https://api.github.com/users/tgadeliya/gists{/gist_id}", "starred_url": "https://api.github.com/users/tgadeliya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tgadeliya/subscriptions", "organizations_url": "https://api.github.com/users/tgadeliya/orgs", "repos_url": "https://api.github.com/users/tgadeliya/repos", "events_url": "https://api.github.com/users/tgadeliya/events{/privacy}", "received_events_url": "https://api.github.com/users/tgadeliya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I read discussion in merged tokenizers' tests PRs and post [~~Don't~~ Repeat Yourself*](https://huggingface.co/blog/transformers-design-philosophy) on HF blog and I manually add \"the copying mechanism\". But I don't understand how it is work, so I tried not to change copied test code from Roberta tokenizer tests. If code modification is not a problem, I would like to add some minor changes, e.g. delete commented code and split big test into smaller one.\r\nCould describe \"copying mechanism\" works in more details?", "Thanks a lot for working on this @tgadeliya!! \r\n\r\nAs far as I know, there are no identified \"practices\" for this case (cc @LysandreJik in case you have another opinion). Nevertheless, if changes are relevant, they are obviously welcome. For example, it is possible to indicate the changes made as here: \r\nhttps://github.com/huggingface/transformers/blob/d95a32cc60e5d92b4bf08cd805c6b0db7b4100cc/src/transformers/models/deberta/modeling_deberta.py#L308-L309\r\nIf the differences are too long to list perhaps the message can just explain why it diverged from the originally copied and pasted code.\r\n\r\nDoes this help you?", "@SaulLu, Sorry for the late reply. Summer is ending :) \r\n\r\nThanks for your comment. Now it is clear for me. Actually, I came to the conclusion, that code cleaning not so necessary considering all pros and cons. So this PR can be reviewed and merged ", "@SaulLu I refreshed this PR, so now it is ready to merge", "Thanks @tgadeliya :hugs: " ]
1,654
1,661
1,661
CONTRIBUTOR
null
# What does this PR do? This PR adds tests for the Longformer tokenizer by copying the tests from the Roberta tokenizer's test suite, because those tokenizers are identical. Fixes #16627 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @SaulLu @LysandreJik
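For context, a sketch of the "copying mechanism" discussed in the comments above: a `# Copied from ...` marker lets `make fix-copies` keep duplicated code in sync with its source. The module path and class names below are illustrative, not the actual test code of this PR:

```python
import unittest


# The "# Copied from" marker tells the repo-consistency check
# (`make fix-copies`) to keep this class identical to the named source,
# applying the Roberta->Longformer renames. Paths here are illustrative.
# Copied from tests.models.roberta.test_tokenization_roberta.RobertaTokenizationTest with Roberta->Longformer
class LongformerTokenizationTest(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)
```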
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17677/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17677", "html_url": "https://github.com/huggingface/transformers/pull/17677", "diff_url": "https://github.com/huggingface/transformers/pull/17677.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17677.patch", "merged_at": 1661163201000 }
https://api.github.com/repos/huggingface/transformers/issues/17676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17676/comments
https://api.github.com/repos/huggingface/transformers/issues/17676/events
https://github.com/huggingface/transformers/issues/17676
1,268,300,599
I_kwDOCUB6oc5LmLs3
17,676
Problems when producing distilBERT
{ "login": "SSFWL496", "id": 66315801, "node_id": "MDQ6VXNlcjY2MzE1ODAx", "avatar_url": "https://avatars.githubusercontent.com/u/66315801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SSFWL496", "html_url": "https://github.com/SSFWL496", "followers_url": "https://api.github.com/users/SSFWL496/followers", "following_url": "https://api.github.com/users/SSFWL496/following{/other_user}", "gists_url": "https://api.github.com/users/SSFWL496/gists{/gist_id}", "starred_url": "https://api.github.com/users/SSFWL496/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SSFWL496/subscriptions", "organizations_url": "https://api.github.com/users/SSFWL496/orgs", "repos_url": "https://api.github.com/users/SSFWL496/repos", "events_url": "https://api.github.com/users/SSFWL496/events{/privacy}", "received_events_url": "https://api.github.com/users/SSFWL496/received_events", "type": "User", "site_admin": false }
[ { "id": 3817266200, "node_id": "MDU6TGFiZWwzODE3MjY2MjAw", "url": "https://api.github.com/repos/huggingface/transformers/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,654
1,658
1,658
NONE
null
### System Info ```shell Hello! I am training a DistilBERT from scratch following the scripts under examples/distillation. ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I got an error when running the script below: ``` python scripts/token_counts.py \ --data_file data/binarized_text.bert-base-uncased.pickle \ --token_counts_dump data/token_counts.bert-base-uncased.pickle \ --vocab_size 30522 ``` ### Expected behavior ```shell The error I got is: Traceback (most recent call last): File "scripts/token_counts.py", line 44, in <module> data = pickle.load(fp) EOFError: Ran out of input ``` Could you help me resolve this?
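`EOFError: Ran out of input` from `pickle.load` usually means the pickle file is empty or was only partially written, e.g. the binarization step failed or was interrupted. A quick sanity check, reusing the path from the command above:

```python
import os
import pickle

path = "data/binarized_text.bert-base-uncased.pickle"

# An empty or truncated dump is the usual cause of "Ran out of input".
print(f"size on disk: {os.path.getsize(path)} bytes")

with open(path, "rb") as fp:
    data = pickle.load(fp)  # re-raises EOFError if the file is incomplete
print(f"loaded {len(data)} sequences")
```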
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17676/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/17675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17675/comments
https://api.github.com/repos/huggingface/transformers/issues/17675/events
https://github.com/huggingface/transformers/issues/17675
1,268,293,216
I_kwDOCUB6oc5LmJ5g
17,675
AutoTokenizer fails to do_lower_case
{ "login": "pratyushmaini", "id": 29012981, "node_id": "MDQ6VXNlcjI5MDEyOTgx", "avatar_url": "https://avatars.githubusercontent.com/u/29012981?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pratyushmaini", "html_url": "https://github.com/pratyushmaini", "followers_url": "https://api.github.com/users/pratyushmaini/followers", "following_url": "https://api.github.com/users/pratyushmaini/following{/other_user}", "gists_url": "https://api.github.com/users/pratyushmaini/gists{/gist_id}", "starred_url": "https://api.github.com/users/pratyushmaini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pratyushmaini/subscriptions", "organizations_url": "https://api.github.com/users/pratyushmaini/orgs", "repos_url": "https://api.github.com/users/pratyushmaini/repos", "events_url": "https://api.github.com/users/pratyushmaini/events{/privacy}", "received_events_url": "https://api.github.com/users/pratyushmaini/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ ">>> print(tokenizer.tokenize(\"Huggingface\"))\r\n['Hug', 'ging', 'face']", "Hey @pratyushmaini 👋 Following our bug submission template yields better outcomes -- we have many issues and requests coming in, and we need the help of the community to maximize our usefulness :) One of the fields of the template is `Who can help?`, where you can find the right person to tag on your issue.\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,654
1,658
1,658
NONE
null
If we use the `AutoTokenizer` class, lowercasing still does not work: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("roberta-base", do_lower_case=True) tokenizer.do_lower_case = True print(tokenizer.tokenize("Huggingface")) ``` _Originally posted by @pratyushmaini in https://github.com/huggingface/transformers/issues/9122#issuecomment-1152939838_
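A workaround sketch, under the assumption that roberta-base's case-sensitive byte-level BPE simply exposes no lowercasing switch, so the text has to be lowercased before tokenization; whether that is acceptable depends on the task:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# The vocabulary is case-sensitive, so normalize the input text yourself.
text = "Huggingface"
print(tokenizer.tokenize(text))          # ['Hug', 'ging', 'face'], as in the comment above
print(tokenizer.tokenize(text.lower()))  # lowercased pieces; the exact split depends on the BPE merges
```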
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17675/timeline
completed
null
null