| url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/19983
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19983/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19983/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19983/events
|
https://github.com/huggingface/transformers/issues/19983
| 1,429,900,827
|
I_kwDOCUB6oc5VOo4b
| 19,983
|
Cannot export Donut models to ONNX
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @mht-sharma would you mind taking a look at this? It might be related to some of the subtleties you noticed with Whisper and passing encoder outputs through the model vs using the getters",
"```python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx --atol 1e-2```\r\n\r\nwith ```--atol 1e-2``` it works, but value of atol is low.\r\n\r\nI think it is better to convert the model separately:\r\n- Encoder\r\n- Decoder\r\n- Decoder with past value.\r\n\r\nAnd pipeline it together.\r\n\r\n",
"@BakingBrains I mentioned this here #19401",
"Update: \r\nThe error occurs only in the encoder part of the model i.e `Donut`. Updated the model inputs to actual inputs from dataset, however the still still persisted.\r\n\r\nThe issue starts happening from the following [modeling_donut_swin.py#L501 ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/donut/modeling_donut_swin.py#L501 ) layer activation in the `DonutSwinLayer`. The `GeluActivation` causes the outputs to diverge between original and onnx models. After removing the activation or using `relu` the model works till 1e-4 atol.",
"> Update: The error occurs only in the encoder part of the model i.e `Donut`. Updated the model inputs to actual inputs from dataset, however the still still persisted.\r\n> \r\n> The issue starts happening from the following [modeling_donut_swin.py#L501 ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/donut/modeling_donut_swin.py#L501) layer activation in the `DonutSwinLayer`. The `GeluActivation` causes the outputs to diverge between original and onnx models. After removing the activation or using `relu` the model works till 1e-4 atol.\r\n\r\nThe original SwinModel is also using this: https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k/raw/main/config.json\r\n\r\nIf you try to convert it, you don't get this issue\r\n",
"Any updates on it @mht-sharma ?",
"Hi, @lewtun & @mht-sharma any updates?",
"Hi @WaterKnight1998 , apologies for late response. I was not able to work actively on the issue past few weeks. However, I have seen similar issues with other models and it was mainly because of the sensitivity to the inputs. This model also gave similar behaviour when trying different inputs during validation. However, the error was still around 0.001X.\r\n\r\nSince the model architecture of `SwinModel` and its `Donut` Encoder is same, it's highly likely that the issue is with the used inputs. But I will validate this once and get back to you in few days.",
"> Hi @WaterKnight1998 , apologies for late response. I was not able to work actively on the issue past few weeks. However, I have seen similar issues with other models and it was mainly because of the sensitivity to the inputs. This model also gave similar behaviour when trying different inputs during validation. However, the error was still around 0.001X.\r\n> \r\n> Since the model architecture of `SwinModel` and its `Donut` Encoder is same, it's highly likely that the issue is with the used inputs. But I will validate this once and get back to you in few days.\r\n\r\nThank you for the explanation. I am looking forward for your fix :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @WaterKnight1998 @mht-sharma ,\r\n\r\nDo you have inference script for Donut document parsing model using encoder and decoder onnx models? Similar to this [TrOCR gist](https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,675
| 1,675
|
MEMBER
| null |
### System Info
- `transformers` version: 4.24.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It seems that the default tolerance of 1e-5 in the ONNX configuration for vision-encoder-decoder models is too small for Donut checkpoints (currently an atol of 5e-3 to 9e-3 appears to be needed). As a result, many (all?) Donut checkpoints can't be exported using the default values in the CLI.
Having said that, the relatively large discrepancy in the exported models suggests there is a deeper issue involved with tracing these models and it would be great to eliminate this potential source of error before increasing the default value for `atol`.
Steps to reproduce:
1. Pick one of the Donut checkpoints from the [`naver-clova-ix`](https://huggingface.co/naver-clova-ix) org on the Hub
2. Export the model using the ONNX CLI, e.g.
```
python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-docvqa --feature=vision2seq-lm onnx/
```
3. The above gives the following error:
```
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.0091094970703125 for [ -0.6990948 -49.217014 3.7758636 ... 3.2241364 2.7353969
-51.43289 ] vs [ -0.6989002 -49.215897 3.7760048 ... 3.223978 2.7355423
-51.433964 ]
```
<details><summary>Full stack trace</summary>
<p>
```
Framework not requested. Using torch to export to ONNX.
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 4.74k/4.74k [00:00<00:00, 791kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 803M/803M [00:09<00:00, 81.2MB/s]
/Users/lewtun/miniconda3/envs/transformers/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:2895.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 363/363 [00:00<00:00, 85.0kB/s]
Using framework PyTorch: 1.12.1
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_channels != self.num_channels:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if width % self.patch_size[1] != 0:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if height % self.patch_size[0] != 0:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:536: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if min(input_resolution) <= self.window_size:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:136: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
batch_size, height // window_size, window_size, width // window_size, window_size, num_channels
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:148: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, num_channels)
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:622: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
was_padded = pad_values[3] > 0 or pad_values[5] > 0
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:623: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if was_padded:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:411: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
batch_size // mask_shape, mask_shape, self.num_attention_heads, dim, dim
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:682: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
height_downsampled, width_downsampled = (height + 1) // 2, (width + 1) // 2
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:266: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
should_pad = (height % 2 == 1) or (width % 2 == 1)
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:267: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if should_pad:
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
[... the same prim::Constant warning repeated 39 more times ...]
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'last_hidden_state'})
- Validating ONNX Model output "last_hidden_state":
-[✓] (3, 4800, 1024) matches (3, 4800, 1024)
-[x] values not close enough (atol: 0.0001)
Traceback (most recent call last):
File "/Users/lewtun/miniconda3/envs/transformers/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/lewtun/miniconda3/envs/transformers/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/lewtun/git/hf/transformers/src/transformers/onnx/__main__.py", line 180, in <module>
main()
File "/Users/lewtun/git/hf/transformers/src/transformers/onnx/__main__.py", line 107, in main
validate_model_outputs(
File "/Users/lewtun/git/hf/transformers/src/transformers/onnx/convert.py", line 455, in validate_model_outputs
raise ValueError(
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.0091094970703125 for [ -0.6990948 -49.217014 3.7758636 ... 3.2241364 2.7353969
-51.43289 ] vs [ -0.6989002 -49.215897 3.7760048 ... 3.223978 2.7355423
-51.433964 ]
```
</p>
</details>
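For reference, the export appears to complete when a looser tolerance is passed explicitly (a sketch of the same CLI call; the `1e-2` value just clears the largest discrepancy I've seen and may need tuning per checkpoint):
```
python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-docvqa --feature=vision2seq-lm onnx/ --atol 1e-2
```
This only works around the validation failure and doesn't address the underlying divergence.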
### Expected behavior
Donut checkpoints can be exported to ONNX using either a good default value for `atol` or changes to the modeling code that enable much better agreement between the original and exported models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19983/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19983/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19982
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19982/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19982/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19982/events
|
https://github.com/huggingface/transformers/issues/19982
| 1,429,898,600
|
I_kwDOCUB6oc5VOoVo
| 19,982
|
Add MEGA
|
{
"login": "mnaylor5",
"id": 20518095,
"node_id": "MDQ6VXNlcjIwNTE4MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/20518095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mnaylor5",
"html_url": "https://github.com/mnaylor5",
"followers_url": "https://api.github.com/users/mnaylor5/followers",
"following_url": "https://api.github.com/users/mnaylor5/following{/other_user}",
"gists_url": "https://api.github.com/users/mnaylor5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mnaylor5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mnaylor5/subscriptions",
"organizations_url": "https://api.github.com/users/mnaylor5/orgs",
"repos_url": "https://api.github.com/users/mnaylor5/repos",
"events_url": "https://api.github.com/users/mnaylor5/events{/privacy}",
"received_events_url": "https://api.github.com/users/mnaylor5/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"> I have seen really promising results from my own experiments with MEGA on long documents\r\n\r\nCool! Could you elaborate?\r\n\r\nIt would be very useful indeed, especially if there would be pre-trained weights for longer sequence tasks, like summarization of very long texts, or classifying multiple images (this has been asked a lot for LayoutLM-like models, which only operate on single document images).\r\n\r\nHowever I'm not seeing very useful pre-trained weights at the moment, would be very useful to have a BERT-like, Wav2Vec2 or ViT-like checkpoint for operating on long sequences",
"Thanks for the quick response @NielsRogge!\r\n \r\nI have only experimented with MEGA in a long-document classification setting so far, and I trained the full architecture from scratch without using the pre-trained weights. I used the authors' implementation and set up a BERT-style document classification class, using similar architectural details as in the `Text` task for LRA (Appendix D), but with `encoder_chunk_size=16`.\r\n\r\nFor performance details in my initial experiment: I used roughly 7k documents in training and 2k in validation, with up to ~3k tokens in a document. Using a single T4 GPU, each epoch (train + eval) averaged ~22 seconds. This is quite a bit faster than I've seen with other linear-complexity attention mechanisms, and I suspect it's largely due to the significant decrease in model size (4-6 layers with a single attention head in each). It's hard to compare model performance since I trained fully from scratch, but MEGA certainly seemed to reach competitive performance for my task.\r\n\r\nI agree that the currently available model weights aren't the most generally useful, and that a BERT-like encoder would be great. I'm not sure if the authors intend to release something like that, but if not, hopefully the speed gains reduce the barrier for community LM contributions.",
"Hey @mnaylor5! Apologies if this is implied, but are you working on contributing this model or just indicating it would be great to have? I'd be happy to help implement in the transformers repo if the latter (or in either case if you'd be interested!). I have some 3090s to throw at this, though perhaps this isn't enough compute? \r\n\r\nIn any case, excited to see if I can help & to see this get added to HF! ",
"Hi @MarkRich - no worries, I definitely could have been clearer. At this point, I am mainly just saying that it would be great to have available in the Hugging Face ecosystem. I'd love to contribute, but I doubt I can realistically commit the time over the next few weeks at least. I put up the issue in case anyone from the HF team or community got excited about implementing it 😄 ",
"Sweet, I can take a crack at it. @NielsRogge is there any chance I can get added to a slack channel or something similar so that I can ask questions? My email address is mark.rich388@gmail.com ",
"Sure, I'll create a channel and send you an invite.",
"My research involves the MEGA model. Is there any way that I can contribute to this? Happy to make it available on HuggingFace!",
"Hi,\r\n\r\nThat'd be great. Could you provide your email address? I'll add you to the Slack channel",
"Thank you! My email is lingjzhu at umich.edu",
"@NielsRogge Hi, this is a gentle follow-up about adding MEGA. Could I start to work on it now? ",
"@NielsRogge Nevermind. I have joined. Thank you!",
"Hi there! I was able to set aside some time to pretrain a very basic Mega model using BERT-style masked language modeling. I know this was something that @NielsRogge mentioned as being more useful, so I hope these pretrained weights will be helpful for getting Mega into `transformers`! \r\n\r\nI used the official Mega implementation (specifically the `MegaEncoderLayer` class) and pretrained on wikitext-103 - nothing earth-shattering, but hopefully helpful. :smile: The model specs and code I used for training are in [this Colab notebook](https://colab.research.google.com/drive/1qfUO6o5HRdxBblWlw058HVyvaEPhPpH8?usp=sharing) along with code for loading classes and weights; and the weights and tokenizer are saved in [this repo on the HF model hub](https://huggingface.co/mnaylor/mega-wikitext-103). ",
"Hi there @lingjzhu @MarkRich @NielsRogge - any update on how this is going? I've been using the Mega architecture (from the original implementation) more in my own experiments, and I am super excited about using it more within the HF ecosystem. \r\n\r\nI might have some time to help with the implementation of Mega into Transformers over the next few weeks, so I would be happy to contribute to any ongoing efforts or take a stab at contributing it myself.",
"> Hi there @lingjzhu @MarkRich @NielsRogge - any update on how this is going? I've been using the Mega architecture (from the original implementation) more in my own experiments, and I am super excited about using it more within the HF ecosystem.\r\n> \r\n> I might have some time to help with the implementation of Mega into Transformers over the next few weeks, so I would be happy to contribute to any ongoing efforts or take a stab at contributing it myself.\r\n\r\n@mnaylor5 That would be nice. I have been working on the text version and have an initial WIP codebase. However, due to interruptions by some life events, I haven't completed it yet. I will upload it to my github this weekend and maybe we can work together to complete it. ",
"@lingjzhu cool, no worries! I'll get started and look forward to checking out your code 😄 ",
"@NielsRogge - apologies if there's a better place to ask this, or if I'm missing some documentation that explains this. The Mega paper includes experiments on encoder-only tasks (text and image classification) as well as seq2seq (machine translation, language modeling with encoder-decoder). Is there a preference from the HF team on how to structure these separate approaches? My own work with Mega has been within encoder-only settings (pre-training with masked LM and fine-tuning on sequence or token classification), so I'm inclined to start by implementing it similarly to BERT, but I wasn't sure if this would be a problem.",
"@mnaylor5 My WIP code is [here](https://github.com/lingjzhu/transformers/tree/main). The code is in the `src/transformers/models/src` but it still could not run at the moment. \r\n\r\nI have started by copying the code for T5 model and using mega as a drop-in replacement for the attention module. That said, I have moved all mega-related code from the official repo to `modeling_mega.py` and am now fusing them together with the `pretrained_model` class. Given that T5 has both an encoder and a decoder, it would be great to implement them all in one. I think most of the existing code can be reused. Maybe we could coordinate and finish the rest of the work? \r\n\r\nOnce the implementation is ready, I can pretrain an encoder, a decoder, and an encoder-decoder model on a medium size dataset and push them to the hub. ",
"Thanks @lingjzhu! I ended up doing a similar pure PyTorch reimplementation of the original Mega code - after doing that and reading through the Hugging Face documentation, I think I have a solid understanding for how to proceed. Even though a large part of the Mega architecture is the EMA-based attention, it probably makes sense to implement the full Mega blocks that they propose (including the normalized feed-forward layer) rather than dropping in the EMA portion into another architecture like T5. This approach will keep the implementation in line with what the Mega paper introduces, and using T5 as a base would also make it more difficult to work within encoder-only settings like document classification.\r\n\r\nWith this in mind and in response to my own question above, I think it makes the most sense to approach the Mega implementation similarly to BigBird, which is conceptually similar to the improvements offered by Mega - efficiency improvements over standard self-attention which can be used in encoder-only, decoder-only, and seq2seq settings. The [BigBird implementation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/modeling_big_bird.py) follows the approach of BERT, which sets things up in a way that allows `BigBirdModel` to be used as either an encoder or decoder based on the provided config. If my understanding is correct, the extension to seq2seq is then handled by Hugging Face's [`EncoderDecoderModel` class](https://huggingface.co/docs/transformers/model_doc/encoder-decoder). \r\n\r\nI have gotten started by using the `add-new-model-like` command and starting from RoBERTa (since I used a RoBERTa tokenizer in the MLM pretraining in my earlier comment), and I'm working through the implementation now. \r\n\r\n**One question for @NielsRogge / the Hugging Face team**: the original implementation of Mega does not include token type embeddings - it does not preclude their usage, but their tasks did not use token type embeddings. I'm afraid that tasks like QA would be difficult to implement without these embeddings, but including them would introduce a divergence from any of the model checkpoints currently available from the original repo (including the ones I linked above from the BERT-style encoder). Do you have a recommended way of approaching this?",
"Hi,\r\n\r\nSome models like DistilBERT also don't support token_type_ids and they work just fine (thanks to the SEP token). But feel free to add support for token type ids, it can't hurt using them :)",
"@NielsRogge thanks for the quick response. That makes sense, and I'll add support for them 😄 ",
"@mnaylor5 You are a saint for posting that Colab! I have been looking to train Mega too. @NielsRogge How is it coming, integrating MEGA into Huggingface?",
"@mnaylor5 I am getting this error on your colab:\r\n5 frames\r\n\r\n[/content/./mega/fairseq/modules/moving_average_gated_attention.py](https://localhost:8080/#) in forward(self, x, padding_mask, incremental_state, need_weights, attn_mask, before_attn_fn)\r\n 303 # B x L x S -> B x K x C x S\r\n 304 nc = seq_len // self.chunk_size\r\n--> 305 q = q.reshape(bsz, nc, self.chunk_size, self.zdim)\r\n 306 \r\n 307 if ctx_len < self.chunk_size:\r\n\r\nRuntimeError: shape '[32, 621, 2, 64]' is invalid for input of size 2545664\r\n\r\nDo I need to add some padding and the padding mask?",
"Hi,\r\n\r\nMEGA is now available here: https://huggingface.co/docs/transformers/main/model_doc/mega",
"@Tylersuard Yep, you can use MEGA in the `main` branch of Transformers - that PR was merged just a couple of weeks ago. \r\n\r\nI haven't dug into your specific error, but I'd guess that you're using chunking and need to pad inputs to a multiple of your chunk size"
] | 1,667
| 1,680
| 1,679
|
CONTRIBUTOR
| null |
### Model description
MEGA introduces a new attention method which incorporates gating and exponential moving averages to create strong local dependencies, reducing the need for full softmax attention. MEGA set a new SOTA on Long Range Arena, and MEGA-chunk performs nearly as well while achieving linear complexity WRT sequence length. I have seen really promising results from my own experiments with MEGA on long documents -- both in efficiency and model performance. It would be awesome to have MEGA (+ MEGA-chunk) available in the Hugging Face ecosystem!
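(Note that the discussion above indicates MEGA has since been merged into `transformers`.) A minimal masked-LM usage sketch might look like the following; the checkpoint name is assumed from the wikitext-103 encoder mentioned in the thread, and the snippet is untested:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# assumed checkpoint: the MLM-pretrained Mega encoder discussed in this thread
ckpt = "mnaylor/mega-base-wikitext"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForMaskedLM.from_pretrained(ckpt)

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# decode the highest-scoring token at the mask position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(int(logits[0, mask_pos].argmax(-1))))
```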
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
* [Paper](https://arxiv.org/abs/2209.10655)
* [Official implementation](https://github.com/facebookresearch/mega)
* [Links to pretrained weights](https://github.com/facebookresearch/mega#models-checkpoints)
* I'm only aware of @violet-zct through the official MEGA repo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19982/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19982/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19981
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19981/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19981/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19981/events
|
https://github.com/huggingface/transformers/pull/19981
| 1,429,844,040
|
PR_kwDOCUB6oc5B4lDx
| 19,981
|
Add Audio Spectrogram Transformer
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey, how can I use it already? I installed the branch, but unsure how to load the model. I'm new with huggingface :D",
"Hey @FrankFundel - hoping @NielsRogge adds a nice example as part of this PR documenting just that 🤞 In the mean time, you can try adapting the example from https://huggingface.co/docs/transformers/tasks/audio_classification\r\n\r\nYou'll need to change the repo names from `facebook/wav2vec2-base` to the appropriate Audio Spectrogram Transformer repo name. You'll also need to change the preprocess function (https://huggingface.co/docs/transformers/tasks/audio_classification#preprocess) to something like:\r\n\r\n```python\r\ndef preprocess_function(examples):\r\n audio_arrays = [x[\"array\"] for x in examples[\"audio\"]]\r\n input_features = feature_extractor(audio_array, sampling_rate=feature_extractor.sampling_rate)\r\n return input_features\r\n```\r\nThis is all currently untested, so might require some playing around to make it work.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19981). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19981). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19981). All of your documentation changes will be reflected on that endpoint."
] | 1,667
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #16383
This PR adds the [Audio Spectrogram Transformer (AST)](https://arxiv.org/abs/2104.01778) model from MIT.
Similar to Whisper (actually prior to Whisper), the model treats audio as an image and applies a Vision Transformer to it.
The model gets SOTA results on audio classification benchmarks.
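For anyone wanting to try the model once merged, a minimal audio-classification sketch might look like this (the checkpoint name is an assumption based on the AudioSet-finetuned AST weights; untested):
```python
import numpy as np
from transformers import ASTFeatureExtractor, ASTForAudioClassification

# assumed checkpoint name for the AudioSet-finetuned AST weights
ckpt = "MIT/ast-finetuned-audioset-10-10-0.4593"
feature_extractor = ASTFeatureExtractor.from_pretrained(ckpt)
model = ASTForAudioClassification.from_pretrained(ckpt)

# one second of dummy mono audio at the 16 kHz rate AST expects
audio = np.random.randn(16000)
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")

logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```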
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19981/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19981",
"html_url": "https://github.com/huggingface/transformers/pull/19981",
"diff_url": "https://github.com/huggingface/transformers/pull/19981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19981.patch",
"merged_at": 1669053534000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19980
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19980/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19980/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19980/events
|
https://github.com/huggingface/transformers/pull/19980
| 1,429,805,143
|
PR_kwDOCUB6oc5B4crO
| 19,980
|
Update Special Language Tokens for PLBART
|
{
"login": "jordiclive",
"id": 44066010,
"node_id": "MDQ6VXNlcjQ0MDY2MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/44066010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jordiclive",
"html_url": "https://github.com/jordiclive",
"followers_url": "https://api.github.com/users/jordiclive/followers",
"following_url": "https://api.github.com/users/jordiclive/following{/other_user}",
"gists_url": "https://api.github.com/users/jordiclive/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jordiclive/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jordiclive/subscriptions",
"organizations_url": "https://api.github.com/users/jordiclive/orgs",
"repos_url": "https://api.github.com/users/jordiclive/repos",
"events_url": "https://api.github.com/users/jordiclive/events{/privacy}",
"received_events_url": "https://api.github.com/users/jordiclive/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @ArthurZucker ",
"Hey! Great work 👍 \nWe should make sure the CI tests are all green, and could you add a new test like `test_special_code_tokenization` where we make sure that the expected behavior of #19505 works",
"@ArthurZucker ok, tests are green. I added to the `test_full_multi_tokenizer,` `test_full_base_tokenizer` tests to check for this behaviour as the test tokenizer is already loaded in these tests. ",
"@ArthurZucker bumping this. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19980). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19980). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19980). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @ArthurZucker! I made some changes to the tests with intermediate inputs and accepted several changes. \r\n\r\n> calling `_convert_lang_code_special_format` in the `src_lang.setter` to avoid calling it everywhere / avoid one liner functions\r\n\r\nWith this, wouldn't the `src_lang would.setter` have to be called everywhere instead? \r\nI think its quite difficult to make it backward compat and simpler as you suggest, as there are lots of places the user provides the src_lang, tgt_lang and there is also `self._src_lang` as well as` self.src_lang`. This at least preserves the functionality as before and the mapping as under-the-hood.",
"@ArthurZucker made that readability change. can we merge?"
] | 1,667
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the special tokens for PLBartTokenizer, raised in Issue #19505. Previously, the tokenizer treated java, python, etc. as special tokens and removed them when decoding was performed.
@LysandreJik
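To make the change concrete, here is a minimal sketch of the behaviour being fixed (the `uclanlp/plbart-base` checkpoint name is an assumption; untested):
```python
from transformers import PLBartTokenizer

# "java" and "python" are PLBART language codes and were previously
# registered as special tokens on the tokenizer
tokenizer = PLBartTokenizer.from_pretrained(
    "uclanlp/plbart-base", src_lang="java", tgt_lang="python"
)

# "java" also appears here as an ordinary word in the text
ids = tokenizer("convert this java code to python").input_ids
print(tokenizer.decode(ids, skip_special_tokens=True))
# before this PR, the words "java" and "python" were stripped from the
# decoded output along with the real special tokens; now they are kept
```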
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19980/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19980",
"html_url": "https://github.com/huggingface/transformers/pull/19980",
"diff_url": "https://github.com/huggingface/transformers/pull/19980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19980.patch",
"merged_at": 1669049589000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19979
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19979/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19979/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19979/events
|
https://github.com/huggingface/transformers/pull/19979
| 1,429,792,059
|
PR_kwDOCUB6oc5B4Z3H
| 19,979
|
Run shellcheck on all *.sh scripts and attempt to fix errors
|
{
"login": "tripleee",
"id": 2160915,
"node_id": "MDQ6VXNlcjIxNjA5MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2160915?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tripleee",
"html_url": "https://github.com/tripleee",
"followers_url": "https://api.github.com/users/tripleee/followers",
"following_url": "https://api.github.com/users/tripleee/following{/other_user}",
"gists_url": "https://api.github.com/users/tripleee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tripleee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tripleee/subscriptions",
"organizations_url": "https://api.github.com/users/tripleee/orgs",
"repos_url": "https://api.github.com/users/tripleee/repos",
"events_url": "https://api.github.com/users/tripleee/events{/privacy}",
"received_events_url": "https://api.github.com/users/tripleee/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19979). All of your documentation changes will be reflected on that endpoint.",
"Thanks. I'm hoping it could be accepted simply to give users of the code base better examples to copy/paste from; the changes are mainly mechanical.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
NONE
| null |
Also, refactor a few repetitive code patterns
# What does this PR do?
Attempt to fix shell scripting errors in examples etc.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [n/a] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [n/a] Did you write any new necessary tests?
## Who can review?
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19979/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19979",
"html_url": "https://github.com/huggingface/transformers/pull/19979",
"diff_url": "https://github.com/huggingface/transformers/pull/19979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19979.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19978
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19978/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19978/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19978/events
|
https://github.com/huggingface/transformers/issues/19978
| 1,429,704,158
|
I_kwDOCUB6oc5VN43e
| 19,978
|
LayoutLMv3 Processor - subword does not get assigned -100 with unusual words
|
{
"login": "a-ozbek",
"id": 14084682,
"node_id": "MDQ6VXNlcjE0MDg0Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14084682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/a-ozbek",
"html_url": "https://github.com/a-ozbek",
"followers_url": "https://api.github.com/users/a-ozbek/followers",
"following_url": "https://api.github.com/users/a-ozbek/following{/other_user}",
"gists_url": "https://api.github.com/users/a-ozbek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/a-ozbek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/a-ozbek/subscriptions",
"organizations_url": "https://api.github.com/users/a-ozbek/orgs",
"repos_url": "https://api.github.com/users/a-ozbek/repos",
"events_url": "https://api.github.com/users/a-ozbek/events{/privacy}",
"received_events_url": "https://api.github.com/users/a-ozbek/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@NielsRogge @sgugger Could this be a problem which can also affect other users as well or am I doing something wrong? (`word_ids()` works fine in this case by the way)\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I've seen other people reporting wrong behaviour with unusual characters as well.\r\n\r\nThe logic to go from word-level labels to token-level labels is [here](https://github.com/huggingface/transformers/blob/3b309818e794cf6ff7fa79f34ea3e7b2386156da/src/transformers/models/layoutlmv3/tokenization_layoutlmv3_fast.py#L635-L660), might be worth looking at this more in depth. \r\n\r\nI'll mark this issue as a good first issue as I currently don't have the bandwidth to look into it.\r\n",
"The problem appears to be that for certain words (like \"0000000000000000\"), the first word piece is the character \"Ġ\", which is not being counted as part of the word. As a result, the offset for the following word piece is 0, causing both words to receive a label. Apparently the issue originates from `encode_batch` and from there to `encode_char_offsets` (which is in Rust). \r\n\r\nThis is my first attempt to contribute here, so I may be completely wrong...what can I do from here to help? @NielsRogge ",
"Hello, may I ask you if there is anything left for me and my friends to contribute for this issue?",
"The same problem arises with all BPE based tokenizers. Example with LayoutXLM:\r\n\r\n```\r\nimport numpy as np\r\nfrom transformers import LayoutXLMTokenizerFast\r\n\r\nprocessor = LayoutXLMTokenizerFast.from_pretrained(\r\n \"microsoft/layoutxlm-base\", apply_ocr=False\r\n)\r\nwords = [\"pencil\", \"0000000000000000\", \"phone\"]\r\nboxes = [[1, 2, 3, 4], [10, 11, 12, 13], [20, 21, 22, 23]]\r\nword_labels = [1, 2, 3]\r\n\r\nencoding = processor(\r\n text=words, boxes=boxes, word_labels=word_labels, return_tensors=\"pt\"\r\n)\r\n\r\nprint(encoding[\"input_ids\"])\r\nprint(processor.convert_ids_to_tokens(encoding[\"input_ids\"].flatten()))\r\nprint(encoding[\"labels\"])\r\n\r\n# Output:\r\n# tensor([[ 0, 5551, 13003, 6, 28568, 197094, 197094, 24089, 2]])\r\n# ['<s>', '▁pen', 'cil', '▁', '0000', '000000', '000000', '▁phone', '</s>']\r\n# tensor([[-100, 1, -100, 2, 2, -100, -100, 3, -100]])\r\n```\r\n\r\nThe main issue is BPE can produce \"empty\" token at the beginning of a word with `offset_mapping = (0, 0)`. Which leads to the following non empty token (which is the continuation of the word) having an `offset_mapping = (0, X)`.\r\n\r\nDirty solution is to check where @NielsRogge indicated and add a guard if previous token was empty. The problem is that it needs to be done for all BPE based tokenizers. Only checking if the `offset_mapping` starts with 0 is not sufficient when an empty token exists.\r\n\r\nThe other solution is to fix BPE (should it even be able to produce empty tokens?) in the Rust source.\r\n\r\nThe problem is NOT present in the NOT fast tokenizer provided by `sentencepiece` because [it operates at word level instead of token level](https://github.com/huggingface/transformers/blob/3b309818e794cf6ff7fa79f34ea3e7b2386156da/src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py#L1124-L1135).",
"Hi! First time open sourcing! Is this still an issue? I can try to take a crack at it! @a-ozbek ",
"Hi, thanks for replying, this issue was fixed so I'll close it. Feel free to take a look at other good first issues."
] | 1,667
| 1,702
| 1,702
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.10
- Python version: 3.8.12
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.1+cu113 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
image = (np.random.rand(100, 100, 3) * 255).astype(np.uint8) # dummy image
words = ['pencil', '0000000000000000', 'phone']
boxes = [[1, 2, 3, 4], [10, 11, 12, 13], [20, 21, 22, 23]]
word_labels = [0, 0, 0]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding['input_ids'])
print(processor.tokenizer.convert_ids_to_tokens(encoding['input_ids'].flatten()))
print(encoding['labels'])
# Output:
# tensor([[ 0, 21451, 1437, 49393, 1028, 2]])
# ['<s>', 'Ġpencil', 'Ġ', '0000000000000000', 'Ġphone', '</s>']
# tensor([[-100, 0, 0, 0, 0, -100]])
```
### Expected behavior
Since we are passing only 3 words `words = ['pencil', '0000000000000000', 'phone']`, I am expecting `encoding['labels']` to have only 3 labels that are not -100 (`(encoding['labels'] != -100).sum() == 3`).
However, the output is `tensor([[-100, 0, 0, 0, 0, -100]])`, which contains 4 labels that are not -100. So there is a mismatch between the input words and the labels after processing. The same thing happens with the word '**********' and probably with other unusual "words".
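As a possible workaround while this is being fixed, since `word_ids()` behaves correctly here, the token-level labels can be rebuilt from it instead of relying on the offsets. A minimal sketch (continuing the snippet above; untested):
```python
# Hedged workaround sketch: derive token-level labels from word_ids(),
# which is unaffected by the offset issue described above.
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
labels = []
previous_word_id = None
for word_id in encoding.word_ids(batch_index=0):
    if word_id is None or word_id == previous_word_id:
        labels.append(-100)  # special tokens and word continuations
    else:
        labels.append(word_labels[word_id])
    previous_word_id = word_id
print(labels)  # expected: [-100, 0, 0, -100, 0, -100] -> exactly 3 real labels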
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19978/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19977
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19977/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19977/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19977/events
|
https://github.com/huggingface/transformers/pull/19977
| 1,429,686,708
|
PR_kwDOCUB6oc5B4C6O
| 19,977
|
Add ESMFold
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging for now, there are still a few improvements needed (example in a docstring for instance) but they can go in their own PRs :-)"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
cc @sgugger @LysandreJik @tomsercu @rmrao @nikitos9000
Opening a draft PR because deadlines are getting tight and I'd like to get everyone on the same page!
What's done:
- [X] Create a minimal port of `openfold`
- [X] Port ESMFold as `EsmForProteinFolding`
- [X] Update weight conversion scripts to port ESMFold weights from original repo
- [X] Update config formats to support ESMFold models
TODO:
- [x] Resolve small output discrepancies in ESM-2 stem that cause differences in final protein predictions
- [x] Add documentation
- [x] Add testing
- [x] Ensure everything is importable from the `transformers` root
- [x] ~Add an auto class for protein folding?~
- [x] Ensure non-folding ESM classes can be loaded with AutoModel
- [x] Remove some `openfold` functions/methods that aren't being called
- [x] Clean up the `openfold` port into a single dir/file
- [x] Ensure all `openfold` code is correctly licensed
- [x] ~Add auxiliary method(s) to convert the outputs into bio file formats like `pdb`~
- [ ] Reupload ESM checkpoints with the new formats
- [x] Upload ESMFold_v1 checkpoint
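As a rough preview, loading the ported model might look like this once everything above lands (a sketch: the checkpoint id and exact call pattern are assumptions at this stage):
```python
from transformers import AutoTokenizer, EsmForProteinFolding

tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")  # assumed hub id
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")

# ESMFold takes a raw protein sequence without special tokens
inputs = tokenizer(["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"], return_tensors="pt", add_special_tokens=False)
outputs = model(**inputs)
print(outputs.positions.shape)  # predicted atom positions
```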
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19977/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19977",
"html_url": "https://github.com/huggingface/transformers/pull/19977",
"diff_url": "https://github.com/huggingface/transformers/pull/19977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19977.patch",
"merged_at": 1667266379000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19976
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19976/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19976/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19976/events
|
https://github.com/huggingface/transformers/pull/19976
| 1,429,588,214
|
PR_kwDOCUB6oc5B3tiX
| 19,976
|
Speed up TF token classification postprocessing by converting complete tensors to numpy
|
{
"login": "deutschmn",
"id": 37573274,
"node_id": "MDQ6VXNlcjM3NTczMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37573274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deutschmn",
"html_url": "https://github.com/deutschmn",
"followers_url": "https://api.github.com/users/deutschmn/followers",
"following_url": "https://api.github.com/users/deutschmn/following{/other_user}",
"gists_url": "https://api.github.com/users/deutschmn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deutschmn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deutschmn/subscriptions",
"organizations_url": "https://api.github.com/users/deutschmn/orgs",
"repos_url": "https://api.github.com/users/deutschmn/repos",
"events_url": "https://api.github.com/users/deutschmn/events{/privacy}",
"received_events_url": "https://api.github.com/users/deutschmn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @Rocketknight1 ",
"This looks like a great improvement, thank you! I didn't realize how inefficient the postprocessing was there.\r\n\r\nThe PR is failing style checks, but I can fix that here, and will merge once that's done. Thank you!",
"Update: I believe the failing checks are caused by issues unrelated to this PR - you just happened to fork at a bad time. I'll merge and watch tests to make sure nothing goes too terribly wrong. Thanks again!",
"Great! Thanks for the quick review and merge 😊 "
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
The postprocessing of the token classification pipeline when using TensorFlow is not as fast as it could be. We discovered this through experiments and profiling, which showed that, in some settings, most of the pipeline's time is spent in `gather_pre_entities`:
Before:
<img width="500" alt="before" src="https://user-images.githubusercontent.com/37573274/198983453-8a34e7a8-6f67-4010-9509-359b4fb9bdf7.png">
After:
<img width="500" alt="after" src="https://user-images.githubusercontent.com/37573274/198983760-1f27b4b3-6d55-4137-a22c-707992a11637.png">
This PR speeds it up by converting `input_ids` and `offset_mapping` to numpy before passing them to `gather_pre_entities`. This way, each tensor is moved to the appropriate device only once. It is also in line with the type annotation of `gather_pre_entities`.
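For illustration, here is a minimal sketch of the underlying pattern (dummy tensor; not the actual pipeline code):
```python
import tensorflow as tf

input_ids = tf.constant([[101, 7592, 102]])  # dummy batch of token ids

# Before: element-by-element indexing, each access touching the TF tensor
slow_tokens = [int(input_ids[0, i]) for i in range(input_ids.shape[1])]

# After: one conversion of the complete tensor, then cheap numpy indexing
input_ids_np = input_ids[0].numpy()
fast_tokens = [int(t) for t in input_ids_np]

assert slow_tokens == fast_tokens == [101, 7592, 102]
```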
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: **n/a**
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation): **n/a**
- [ ] Did you write any new necessary tests? **n/a**
## Who can review?
Could you please review, @LysandreJik 😊
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19976/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19976",
"html_url": "https://github.com/huggingface/transformers/pull/19976",
"diff_url": "https://github.com/huggingface/transformers/pull/19976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19976.patch",
"merged_at": 1667494583000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19975
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19975/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19975/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19975/events
|
https://github.com/huggingface/transformers/pull/19975
| 1,429,482,605
|
PR_kwDOCUB6oc5B3W8Q
| 19,975
|
Give `modeling_t5.py` a `_prune_heads`
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Great work! Fell free to ping me for a review 👍",
"Hi @ArthurZucker , please pay more attention to the `position_bias`, I think I change it too sharply otherwise the shape will not be the same to the `score`",
"Hi @ArthurZucker , I upload a new commits, it seems better deal with `position_bias`. And if we do not add `head_mask` and `decoder_head_mask` in the model `forward`, the code can run. But we just ignore this line's problem.\r\n\r\nhttps://github.com/huggingface/transformers/blob/c3a93d8d821bc1df3601ba858e7385eada8db3a5/src/transformers/models/t5/modeling_t5.py#L547\r\n\r\n\r\n\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19975). All of your documentation changes will be reflected on that endpoint.",
"Okay! That's way better, the least code we change the better. I remember seeing a similar fix, let me link that and I think we will be able to merge if slow tests pass 👍",
"Great @ArthurZucker ! What other tests should be done? ",
"gently pin @patrickvonplaten Thanks",
"Hey, the tests are simply the CircleCI tests! Try running `make fixup` and `make fix-copies`. The integrations tests have to pass to be able to merge the PR! ",
"Hi @ArthurZucker I have try to pull new request after `make fixup` and `make fix-copies`, but it still can not pass CircleCI :(",
"Hey! You didn't have to open a new PR! I will have a look and help you fix the tests ☺️",
"OK then ! Thanks! It seems my flat can not work on the `make fixup`",
"Okay, it seems that #19097 also wanted to adress part of this issue. Since it has not really progressed, we can do everything here",
"Great! Anything that I can help?\r\n",
"OKay, try running : \r\n- `make style` to pass the `check_code_quality` \r\n- `git pull upstream main` to merge the changes from the `huggingface/transformers/main` branch. \r\nThen we will try to debug the tests that are failing. \r\nYou can also do this by running `RUN_SLOW=1 pytest tests/models/t5/test_modeling_t5.py`. ",
"Hi @ArthurZucker ,I try to `make style`, it reports error ! And this commits seems unsuccessful\r\n```\r\nAll done! ✨ 🍰 ✨\r\n597 files reformatted, 1299 files left unchanged.\r\nisort examples tests src utils\r\nSkipped 1 files\r\n/Library/Developer/CommandLineTools/usr/bin/make autogenerate_code\r\nrunning deps_table_update\r\nupdating src/transformers/dependency_versions_table.py\r\n/Library/Developer/CommandLineTools/usr/bin/make extra_style_checks\r\npython utils/custom_init_isort.py\r\npython utils/sort_auto_mappings.py\r\ndoc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source\r\nmake[1]: doc-builder: No such file or directory\r\nmake[1]: *** [extra_style_checks] Error 1\r\nmake: *** [style] Error 2\r\n\r\n```",
"Maybe it is my machine's problem of `make style`? @ArthurZucker Macbook pro with m1 maybe? I could not install `doc-builder`\r\n",
"No, the error is from the missing `huggingface_doc` package ! Don't worry. Try installing it. \nThe files should still have been formatted ",
"OK! I install it and rerun `make style` and `python utils/check_copies.py --fix_and_overwrite`\r\nHope it will work this time",
"Hi @ArthurZucker it seems that `make style` can run well on linux but cannot run well in macOS system. :) Maybe it is better to find a new method for Apple M1 :)",
"Hi @ArthurZucker , could you please give me a review? Many thanks!",
"Hi @ArthurZucker \r\n",
"Hey, let's try to rebase to `be59316681fca13483da0ac2eac341f7df090e35`, since a loooot of files were modified by make style ( and this is not normal!). The issue most probably comes from your version of `black. `pip install hf-doc-builder` or upgrading it sould solve this! \r\nI will review once that's clean! I can also help with make style if you are still unable to have the expected result! 🤗 ",
"Hi @ArthurZucker , what is rebase? BTW? Could you please help me make style? Since my machine's version is too tricky",
"Let me re-fork the link it seems too dirty at this point"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19960 (not perfectly).
I refer to a Colab script in the issue; it can prune, but with forward problems.
You can see it here: https://colab.research.google.com/drive/1b9mHjtn2UxuHU_Sb_RXts12rDzbebBX0#scrollTo=hUSe4a1oOp6D
I use `opendelta` to visualize the pruning process.
But there still seems to be a forward problem; a sketch of the intended usage is below.
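For context, this is a sketch of how head pruning is invoked on models that already implement `_prune_heads` (the behavior this PR aims to bring to T5; the T5 call itself is untested):
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
# heads_to_prune maps a layer index to the list of head indices to prune
model.prune_heads({0: [0, 1]})  # prune heads 0 and 1 of layer 0
```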
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19975/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19975",
"html_url": "https://github.com/huggingface/transformers/pull/19975",
"diff_url": "https://github.com/huggingface/transformers/pull/19975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19975.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19974
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19974/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19974/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19974/events
|
https://github.com/huggingface/transformers/issues/19974
| 1,429,474,610
|
I_kwDOCUB6oc5VNA0y
| 19,974
|
Potential bug in modeling_utils.py
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Will have a look this morning. Sounds like a bug indeed!",
"@sgugger thanks for fixing, however I'm still encountering an issue that is probably related to this. \r\n\r\nSpecifically, the parameter whose name is the same between a base model and a head model (`self.layernorm` in my case) makes the [test_save_load_fast_init_from_base](https://github.com/huggingface/transformers/blob/243439a8271137aa290d7546e5704feeaa0cd1e5/tests/test_modeling_common.py#L313) test fail.\r\n\r\nIt can be reproduced as follows:\r\n\r\n```\r\nRUN_SLOW=yes pytest tests/models/audio_spectrogram_transformer/test_modeling_audio_spectrogram_transformer.py::AudioSpectrogramTransformerModelTest::test_save_load_fast_init_from_base\r\n```\r\nThis might also be related to the fast init mechanism itself, which doesn't seem to support parameters which have the same name between the base and head model. Should I just skip the test?\r\n",
"No, as this means parameters won't be properly initialized. I won't have any bandwidth to fix this in the near future so someone else will have to fix it.",
"I had a very quick look into it and I don't see any easy fix -> so for now I would advise to use a different name for the weights in the head (like `final_layernorm` maybe?)",
"Ok, I'll do that, thanks for looking into it"
] | 1,667
| 1,668
| 1,667
|
CONTRIBUTOR
| null |
### System Info
Transformers main branch.
### Who can help?
@LysandreJik @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
There currently seems to be a potential bug in modeling_utils.py, revealed by the failure of 2 tests defined in `test_modeling_common.py`, namely [test_correct_missing_keys](https://github.com/huggingface/transformers/blob/243439a8271137aa290d7546e5704feeaa0cd1e5/tests/test_modeling_common.py#L1448) and [test_save_load_fast_init_from_base](https://github.com/huggingface/transformers/blob/243439a8271137aa290d7546e5704feeaa0cd1e5/tests/test_modeling_common.py#L313).
The issue occurs when a head model (like `xxxForSequenceClassification`) defines a parameter that has the same name as one in the base model (`xxxModel`). Let's say the base model defines a `self.layernorm` attribute/parameter, and the head model also defines a `self.layernorm`.
You can reproduce the error by cloning [this branch](https://github.com/NielsRogge/transformers/tree/add_ast_bug) of mine, then run the following tests:
```
pytest tests/models/audio_spectogram_transformer/test_modeling_audio_spectogram_transformer.py::AudioSpectogramTransformerModelTest
```
In that case, both tests fail with the following error:
```
(...)
        # Some models may have keys that are not in the state by design, removing them before needlessly warning
        # the user.
        if cls._keys_to_ignore_on_load_missing is not None:
            for pat in cls._keys_to_ignore_on_load_missing:
                missing_keys = [k for k in missing_keys if re.search(pat, k) is None]

        if cls._keys_to_ignore_on_load_unexpected is not None:
            for pat in cls._keys_to_ignore_on_load_unexpected:
                unexpected_keys = [k for k in unexpected_keys if re.search(pat, k) is None]

        # retrieve weights on meta device and put them back on CPU.
        # This is not ideal in terms of memory, but if we don't do that now, we can't initialize them in the next step
        if low_cpu_mem_usage:
            for key in missing_keys:
                if key.startswith(prefix):
                    key = ".".join(key.split(".")[1:])
                param = model_state_dict[key]
                if param.device == torch.device("meta"):
                    if not load_in_8bit:
                        set_module_tensor_to_device(model, key, "cpu", torch.empty(*param.size(), dtype=dtype))
                    else:
                        set_module_8bit_tensor_to_device(model, key, "cpu", torch.empty(*param.size(), dtype=dtype))

        # retrieve uninitialized modules and initialize before maybe overriding that with the pretrained weights.
        if _fast_init:
            uninitialized_modules = model.retrieve_modules_from_names(
                missing_keys, add_prefix=add_prefix_to_model, remove_prefix=remove_prefix_from_model
            )
            for module in uninitialized_modules:
                model._init_weights(module)

        # Make sure we are able to load base models as well as derived models (with heads)
        start_prefix = ""
        model_to_load = model
        if len(cls.base_model_prefix) > 0 and not hasattr(model, cls.base_model_prefix) and has_prefix_module:
            start_prefix = cls.base_model_prefix + "."
        if len(cls.base_model_prefix) > 0 and hasattr(model, cls.base_model_prefix) and not has_prefix_module:
            model_to_load = getattr(model, cls.base_model_prefix)
            if any(key in expected_keys_not_prefixed for key in loaded_keys):
>               raise ValueError(
                    "The state dictionary of the model you are trying to load is corrupted. Are you sure it was "
                    "properly saved?"
                )
E               ValueError: The state dictionary of the model you are trying to load is corrupted. Are you sure it was properly saved?
```
However, when simply renaming `self.layernorm` to `self.layer_norm` in the head model, both tests pass.
### Expected behavior
Normally, this should work without any error. I think the reason we haven't encountered this issue before is simply that the case where a head model defines a parameter with the same name as one in the base model is quite rare. However, it should still work as expected, since in this case, for instance, the layernorm of the base Transformer is named `audio_spectogram_transformer.layernorm`, while the layernorm of the head model is simply named `layernorm`.
Unless I'm missing something here ;) happy to discuss.
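For reference, here is a stripped-down sketch of the colliding parameter names (hypothetical classes in plain PyTorch, for illustration only):
```python
import torch.nn as nn

class DummyBase(nn.Module):  # plays the role of xxxModel
    def __init__(self):
        super().__init__()
        self.layernorm = nn.LayerNorm(4)

class DummyWithHead(nn.Module):  # plays the role of xxxForSequenceClassification
    def __init__(self):
        super().__init__()
        self.dummy = DummyBase()          # base_model_prefix would be "dummy"
        self.layernorm = nn.LayerNorm(4)  # same attribute name as in the base model

print(list(DummyWithHead().state_dict().keys()))
# ['dummy.layernorm.weight', 'dummy.layernorm.bias', 'layernorm.weight', 'layernorm.bias']
# The bare "layernorm.*" keys look like non-prefixed base-model keys, which is
# presumably what trips the corrupted-state-dict check when loading.
```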
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19974/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19973
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19973/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19973/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19973/events
|
https://github.com/huggingface/transformers/issues/19973
| 1,429,181,069
|
I_kwDOCUB6oc5VL5KN
| 19,973
|
issue with --jit_mode_eval enabled in trainer commandline
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"current behavior\r\nerror log:\r\n[INFO|trainer.py:557] 2022-10-30 20:10:11,309 >> Using cuda_amp half precision backend\r\n10/30/2022 20:10:11 - INFO - __main__ - *** Evaluate ***\r\n[INFO|trainer.py:725] 2022-10-30 20:10:11,309 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence2, sentence1. If idx, sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\nTraceback (most recent call last):\r\n File \"/home/wangyi/project/hugface/transformers/examples/pytorch/text-classification/run_glue.py\", line 622, in <module>\r\n main()\r\n File \"/home/wangyi/project/hugface/transformers/examples/pytorch/text-classification/run_glue.py\", line 560, in main\r\n metrics = trainer.evaluate(eval_dataset=eval_dataset)\r\n File \"/home/wangyi/project/hugface/transformers/src/transformers/trainer.py\", line 2792, in evaluate\r\n output = eval_loop(\r\n File \"/home/wangyi/project/hugface/transformers/src/transformers/trainer.py\", line 2913, in evaluation_loop\r\n model = self._wrap_model(self.model, training=False, dataloader=dataloader)\r\n File \"/home/wangyi/project/hugface/transformers/src/transformers/trainer.py\", line 1299, in _wrap_model\r\n model = self.torch_jit_model_eval(model, dataloader, training)\r\n File \"/home/wangyi/project/hugface/transformers/src/transformers/trainer.py\", line 1263, in torch_jit_model_eval\r\n jit_model = torch.jit.trace(jit_model, jit_inputs, strict=False)\r\n File \"/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/jit/_trace.py\", line 750, in trace\r\n return trace_module(\r\n File \"/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/jit/_trace.py\", line 967, in trace_module\r\n module._c._create_method_from_trace(\r\n File \"/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1118, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py\", line 1552, in forward\r\n outputs = self.bert(\r\n File \"/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1118, in _slow_forward\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py\", line 968, in forward\r\n batch_size, seq_length = input_shape\r\nValueError: not enough values to unpack (expected 2, got 1)\r\n",
"case2: if we run only predict. command like\r\npython3 examples/pytorch/text-classification/run_glue.py --model_name_or_path /skyrex01/wangyi/output/mrpc/ --task_name mrpc --do_predict --max_seq_length 128 --output_dir /skyrex01/wangyi/output/mrpc/inference1/ --overwrite_output_dir True --fp16 --jit_mode_eval\r\n\r\nerror changing, since inputdata does not contain \"labels\" in this case\r\njit failure as \"failed to use PyTorch jit mode due to: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select).\"\r\njit error:\r\n[INFO|modeling_utils.py:2616] 2022-10-30 20:13:22,055 >> All the weights of BertForSequenceClassification were initialized from the model checkpoint at /skyrex01/wangyi/output/mrpc/.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.\r\n10/30/2022 20:13:22 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /skyrex01/wangyi/.cache/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-1928edc8ebbd0881.arrow\r\n10/30/2022 20:13:22 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /skyrex01/wangyi/.cache/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-3447598ff1e90f2d.arrow\r\nRunning tokenizer on dataset: 0%| | 0/2 [00:00<?, ?ba/s]\r\n10/30/2022 20:13:22 - INFO - datasets.arrow_dataset - Caching processed dataset at /skyrex01/wangyi/.cache/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-64cad0d6db19155f.arrow\r\nRunning tokenizer on dataset: 50%|████████████████████████████████████████████████████████████████▌ | 1/2 [00:00<00:00, 7.20ba/s]\r\n[INFO|trainer.py:557] 2022-10-30 20:13:25,460 >> Using cuda_amp half precision backend\r\n10/30/2022 20:13:25 - INFO - __main__ - *** Predict ***\r\n[INFO|trainer.py:725] 2022-10-30 20:13:25,462 >> The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2. If idx, sentence1, sentence2 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.\r\n[WARNING|trainer.py:1268] 2022-10-30 20:13:25,714 >> failed to use PyTorch jit mode due to: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select).\r\n[INFO|trainer.py:2925] 2022-10-30 20:13:25,715 >> ***** Running Prediction *****\r\n[INFO|trainer.py:2927] 2022-10-30 20:13:25,715 >> Num examples = 1725\r\n[INFO|trainer.py:2930] 2022-10-30 20:13:25,715 >> Batch size = 8\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 216/216 [00:02<00:00, 87.92it/s]\r\n10/30/2022 20:13:28 - INFO - __main__ - ***** Predict results mrpc *****\r\n[INFO|modelcard.py:444] 2022-10-30 20:13:28,370 >> Dropping the following result as it does not have all the necessary fields:\r\n{'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'dataset': {'name': 'GLUE MRPC', 'type': 'glue', 'args': 'mrpc'}}\r\n\r\n",
"case3: if we run only predict on cpu. command like\r\npython3 examples/pytorch/text-classification/run_glue.py --model_name_or_path /skyrex01/wangyi/output/mrpc/ --task_name mrpc --do_predict --max_seq_length 128 --output_dir /skyrex01/wangyi/output/mrpc/inference1/ --overwrite_output_dir True --bf16 --jit_mode_eval --no_cuda\r\nerror pop like\r\nERROR: Tensor-valued Constant nodes differed in value across invocations. This often indicates that the tracer has encountered untraceable code.\r\n Node:\r\n %13 : Tensor = prim::Constant[value={8}](), scope: __module.bert/__module.bert.encoder/__module.bert.encoder.layer.0/__module.bert.encoder.layer.0.attention/__module.bert.encoder.layer.0.attention.self # /home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py:341:0\r\n Source Location:\r\n /home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(341): forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl\r\n /home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(419): forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl\r\n /home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(489): forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl\r\n /home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(603): forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl\r\n /home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(1014): forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl\r\n /home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(1552): forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/jit/_trace.py(967): trace_module\r\n /skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/jit/_trace.py(750): trace\r\n /home/wangyi/project/hugface/transformers/src/transformers/trainer.py(1263): torch_jit_model_eval\r\n /home/wangyi/project/hugface/transformers/src/transformers/trainer.py(1299): _wrap_model\r\n /home/wangyi/project/hugface/transformers/src/transformers/trainer.py(2913): 
evaluation_loop\r\n /home/wangyi/project/hugface/transformers/src/transformers/trainer.py(2866): predict\r\n /home/wangyi/project/hugface/transformers/examples/pytorch/text-classification/run_glue.py(588): main\r\n /home/wangyi/project/hugface/transformers/examples/pytorch/text-classification/run_glue.py(622): <module>\r\n Comparison exception: The values for attribute 'shape' do not match: torch.Size([]) != torch.Size([768, 768]).\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Trainer: @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python3 examples/pytorch/text-classification/run_glue.py --model_name_or_path /skyrex01/wangyi/output/mrpc/ --task_name mrpc --do_eval --max_seq_length 128 --output_dir /skyrex01/wangyi/output/mrpc/inference1/ --overwrite_output_dir True --fp16 --jit_mode_eval
### Expected behavior
JIT tracing should succeed, and the traced model should run.
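For reference, a minimal sketch of what `--jit_mode_eval` attempts under the hood (dummy model; simplified relative to the actual Trainer code):
```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, input_ids, attention_mask):
        # stand-in for a real forward pass
        return input_ids * attention_mask

model = TinyModel().eval()
example_inputs = (
    torch.ones(1, 8, dtype=torch.long),
    torch.ones(1, 8, dtype=torch.long),
)
with torch.no_grad():
    # the Trainer similarly calls torch.jit.trace(model, inputs, strict=False)
    traced = torch.jit.trace(model, example_inputs, strict=False)
print(traced(*example_inputs).shape)  # torch.Size([1, 8])
```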
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19973/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19972
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19972/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19972/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19972/events
|
https://github.com/huggingface/transformers/issues/19972
| 1,429,061,495
|
I_kwDOCUB6oc5VLb93
| 19,972
|
DebertaV2 Modeling for SQuAD v2.0
|
{
"login": "yazdanbakhsh",
"id": 7105134,
"node_id": "MDQ6VXNlcjcxMDUxMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7105134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yazdanbakhsh",
"html_url": "https://github.com/yazdanbakhsh",
"followers_url": "https://api.github.com/users/yazdanbakhsh/followers",
"following_url": "https://api.github.com/users/yazdanbakhsh/following{/other_user}",
"gists_url": "https://api.github.com/users/yazdanbakhsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yazdanbakhsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yazdanbakhsh/subscriptions",
"organizations_url": "https://api.github.com/users/yazdanbakhsh/orgs",
"repos_url": "https://api.github.com/users/yazdanbakhsh/repos",
"events_url": "https://api.github.com/users/yazdanbakhsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/yazdanbakhsh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Deberta is not a Seq2Seq model, you can't make a quick fix to enable its use with `run_seq2seq_qa`, as you have experienced it. Deberta has a model qith a QA head, so you will be able to use it with the regular `run_qa` script.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.4.0-1087-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python3 transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py \
--model_name_or_path microsoft/deberta-v3-large \
--dataset_name squad_v2 \
--context_column context \
--question_column question \
--answer_column answers \
--do_train \
--do_eval \
--max_seq_length 512 \
--doc_stride 128 \
--warmup_ratio 0.2 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 8 \
--learning_rate 7e-6 \
--num_train_epochs 3 \
--version_2_with_negative \
--label_names "start_positions", "end_positions" \
--predict_with_generate \
--load_best_model_at_end \
--eval_steps ${eval_steps} \
--save_steps ${eval_steps} \
--evaluation_strategy steps \
--logging_steps ${eval_steps} \
--logging_strategy steps \
--save_total_limit 5 \
--metric_for_best_model "f1" \
--greater_is_better true \
--overwrite_output_dir \
--output_dir ${ckpt_path} 2>&1 | tee ~/${ckpt_path}/finetune_run_$(date +"%Y_%m_%d_%I_%M_%p").log
### Expected behavior
I have used a similar script for SQuAD v2 on other models (RoBERTa), but it seems this model is not registered properly in HF, hence the following error:
```
Traceback (most recent call last):
File "transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py", line 716, in <module>
main()
File "transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py", line 380, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/home/ayazdan/.local/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 467, in from_pretrained
f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
ValueError: Unrecognized configuration class <class 'transformers.models.deberta_v2.configuration_deberta_v2.DebertaV2Config'> for this kind of AutoModel: AutoModelForSeq2SeqLM.
Model type should be one of BartConfig, BigBirdPegasusConfig, BlenderbotConfig, BlenderbotSmallConfig, EncoderDecoderConfig, FSMTConfig, LEDConfig, LongT5Config, M2M100Config, MarianConfig, MBartConfig, MT5Config, MvpConfig, PegasusConfig, PegasusXConfig, PLBartConfig, ProphetNetConfig, T5Config, XLMProphetNetConfig.
```
I made a quick fix to register the model; however, another issue still exists regarding the model itself.
```
Traceback (most recent call last):
File "transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py", line 716, in <module>
main()
File "transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py", line 383, in main
model.resize_token_embeddings(len(tokenizer))
File "/home/ayazdan/.local/lib/python3.7/site-packages/transformers/configuration_utils.py", line 254, in __getattribute__
return super().__getattribute__(key)
AttributeError: 'DebertaV2Config' object has no attribute 'resize_token_embeddings'
```
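For reference, the extractive-QA path suggested for encoder-only models like DeBERTa would look roughly like this (a sketch with a placeholder output path; hyperparameters are illustrative, not verified):
```
python3 examples/pytorch/question-answering/run_qa.py \
  --model_name_or_path microsoft/deberta-v3-large \
  --dataset_name squad_v2 \
  --version_2_with_negative \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --doc_stride 128 \
  --output_dir ${ckpt_path}
```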
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19972/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19971
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19971/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19971/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19971/events
|
https://github.com/huggingface/transformers/issues/19971
| 1,428,909,198
|
I_kwDOCUB6oc5VK2yO
| 19,971
|
Add SpA-Former
|
{
"login": "shivance",
"id": 51750587,
"node_id": "MDQ6VXNlcjUxNzUwNTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/51750587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shivance",
"html_url": "https://github.com/shivance",
"followers_url": "https://api.github.com/users/shivance/followers",
"following_url": "https://api.github.com/users/shivance/following{/other_user}",
"gists_url": "https://api.github.com/users/shivance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shivance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shivance/subscriptions",
"organizations_url": "https://api.github.com/users/shivance/orgs",
"repos_url": "https://api.github.com/users/shivance/repos",
"events_url": "https://api.github.com/users/shivance/events{/privacy}",
"received_events_url": "https://api.github.com/users/shivance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"@NielsRogge Would it be a valuable contribution to HuggingFace?",
"Sure this would be valuable! Let me know if you need any help"
] | 1,667
| 1,673
| 1,673
|
NONE
| null |
### Model description
I would like to add the [SpA-Former](https://arxiv.org/abs/2206.10910) model to Transformers.
It is an end-to-end transformer that recovers a shadow-free image from a single shadowed image. Unlike traditional methods that require two steps (shadow detection and then shadow removal), SpA-Former unifies them into a one-stage network that directly learns the mapping function between shadowed and shadow-free images, without a separate shadow-detection step. Thus, SpA-Former is adaptable to real-image de-shadowing for shadows projected onto different semantic regions. SpA-Former consists of a transformer layer, a series of joint Fourier transform residual blocks, and two-wheel joint spatial attention. The network is able to handle the task while achieving very fast processing efficiency.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Link](https://github.com/zhangbaijin/SpA-Former-shadow-removal) to Model Repo
[Link](https://arxiv.org/abs/2206.10910) to Paper
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19971/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19970
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19970/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19970/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19970/events
|
https://github.com/huggingface/transformers/issues/19970
| 1,428,837,997
|
I_kwDOCUB6oc5VKlZt
| 19,970
|
How to annotate this type of data for custom OCR training
|
{
"login": "mohit-217",
"id": 51528367,
"node_id": "MDQ6VXNlcjUxNTI4MzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/51528367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohit-217",
"html_url": "https://github.com/mohit-217",
"followers_url": "https://api.github.com/users/mohit-217/followers",
"following_url": "https://api.github.com/users/mohit-217/following{/other_user}",
"gists_url": "https://api.github.com/users/mohit-217/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mohit-217/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohit-217/subscriptions",
"organizations_url": "https://api.github.com/users/mohit-217/orgs",
"repos_url": "https://api.github.com/users/mohit-217/repos",
"events_url": "https://api.github.com/users/mohit-217/events{/privacy}",
"received_events_url": "https://api.github.com/users/mohit-217/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@NielsRogge @gante Please review\r\nHow I need to annotate for these type of data.\r\n1.\r\n- 3 ( 7s+ 8 )\r\n- 5 (6s + 7)\r\n- s^2\r\n- s^2 + 3s + 1\r\n\r\n2.\r\n- 3 ( 7 s + 8 )\r\n- 5 ( 6 s + 7 )\r\n- s 2\r\n- s 2 + 3 s + 1\r\n\r\nwhich one is correct ?",
"Hi,\r\n\r\nCould you please ask this question on our [forum](https://discuss.huggingface.co/), rather than here?\r\n\r\nGithub issues are meant for bugs or feature requests.\r\n\r\nThanks!"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
@NielsRogge @gante, can you please explain how to annotate the files below for custom handwritten mathematical equation training? Most importantly, how should s^2 be annotated?





_Originally posted by @mohit-217 in https://github.com/huggingface/transformers/issues/16007#issuecomment-1296276393_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19970/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19969
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19969/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19969/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19969/events
|
https://github.com/huggingface/transformers/pull/19969
| 1,428,748,273
|
PR_kwDOCUB6oc5B07pG
| 19,969
|
Removed BERT dependency from DistilBERT tokenizer
|
{
"login": "harry7337",
"id": 75776208,
"node_id": "MDQ6VXNlcjc1Nzc2MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/75776208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harry7337",
"html_url": "https://github.com/harry7337",
"followers_url": "https://api.github.com/users/harry7337/followers",
"following_url": "https://api.github.com/users/harry7337/following{/other_user}",
"gists_url": "https://api.github.com/users/harry7337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harry7337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harry7337/subscriptions",
"organizations_url": "https://api.github.com/users/harry7337/orgs",
"repos_url": "https://api.github.com/users/harry7337/repos",
"events_url": "https://api.github.com/users/harry7337/events{/privacy}",
"received_events_url": "https://api.github.com/users/harry7337/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19303. Removes the BERT dependency from the DistilBERT tokenizer.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19969/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19969",
"html_url": "https://github.com/huggingface/transformers/pull/19969",
"diff_url": "https://github.com/huggingface/transformers/pull/19969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19969.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19968
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19968/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19968/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19968/events
|
https://github.com/huggingface/transformers/pull/19968
| 1,428,714,619
|
PR_kwDOCUB6oc5B00lH
| 19,968
|
[Doctest] Add configuration_deberta.py
|
{
"login": "Saad135",
"id": 22683922,
"node_id": "MDQ6VXNlcjIyNjgzOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22683922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saad135",
"html_url": "https://github.com/Saad135",
"followers_url": "https://api.github.com/users/Saad135/followers",
"following_url": "https://api.github.com/users/Saad135/following{/other_user}",
"gists_url": "https://api.github.com/users/Saad135/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saad135/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saad135/subscriptions",
"organizations_url": "https://api.github.com/users/Saad135/orgs",
"repos_url": "https://api.github.com/users/Saad135/repos",
"events_url": "https://api.github.com/users/Saad135/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saad135/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds configuration_deberta.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19968/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19968",
"html_url": "https://github.com/huggingface/transformers/pull/19968",
"diff_url": "https://github.com/huggingface/transformers/pull/19968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19968.patch",
"merged_at": 1667233321000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19967
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19967/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19967/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19967/events
|
https://github.com/huggingface/transformers/pull/19967
| 1,428,673,638
|
PR_kwDOCUB6oc5B0r6p
| 19,967
|
Transformer Model
|
{
"login": "AyuavnGautam",
"id": 91385710,
"node_id": "MDQ6VXNlcjkxMzg1NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/91385710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AyuavnGautam",
"html_url": "https://github.com/AyuavnGautam",
"followers_url": "https://api.github.com/users/AyuavnGautam/followers",
"following_url": "https://api.github.com/users/AyuavnGautam/following{/other_user}",
"gists_url": "https://api.github.com/users/AyuavnGautam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AyuavnGautam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AyuavnGautam/subscriptions",
"organizations_url": "https://api.github.com/users/AyuavnGautam/orgs",
"repos_url": "https://api.github.com/users/AyuavnGautam/repos",
"events_url": "https://api.github.com/users/AyuavnGautam/events{/privacy}",
"received_events_url": "https://api.github.com/users/AyuavnGautam/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,667
| 1,667
| 1,667
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19967/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19967",
"html_url": "https://github.com/huggingface/transformers/pull/19967",
"diff_url": "https://github.com/huggingface/transformers/pull/19967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19967.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19966
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19966/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19966/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19966/events
|
https://github.com/huggingface/transformers/pull/19966
| 1,428,672,889
|
PR_kwDOCUB6oc5B0rwS
| 19,966
|
standardize `DistilBert` class names
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmmm, I don't feel comfortable doing such a change. I understand it's supposed to be non-breaking, but we have had usage in the past of such classes, and the renaming here seems purely cosmetic.\r\n\r\nI understand that it would ease your conversion in `optimum`, but I'm pretty sure you'll need to adapt the implementation to other models that do not respect the format enforced here. How much effort would be needed from the `optimum` side to support the current DistilBERT layer class names?\r\n\r\nThanks",
"Thanks a lot!\nI see, I was also unsure about adding these changes. No problem, I will try to find a workaroud that should work without this modification "
] | 1,667
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR aims to standardize the module names of `DistilBert`. For example, the `DistilBert` layers were previously named `TransformerBlock` instead of following the current `xxxLayer` convention. This PR addresses this by renaming some core modules of `DistilBertModel`.
This way, this widely used Hub model can easily benefit from the `BetterTransformer` speedup in the `optimum` library, as shown below:
```python
import torch
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

# Load the checkpoint and swap its layers for their BetterTransformer equivalents
model = AutoModel.from_pretrained("distilbert-base-uncased").eval()
model = BetterTransformer.transform(model)

# Quick smoke test on a dummy input
input_ids = torch.LongTensor([[1, 1, 1, 1, 1]])
with torch.no_grad():
    out = model(input_ids)
```
https://github.com/huggingface/optimum/pull/423
I don't believe this is a breaking change, since none of the module key names are altered and only modules absent from the auto-mapping are touched. I have also run a quick test to confirm I get the same results as the model card:
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
unmasker("Hello I'm a [MASK] model.")
>>> [{'score': 0.05292877182364464, 'token': 2535, 'token_str': 'role', 'sequence': "hello i'm a role model."}, {'score': 0.039685774594545364, 'token': 4827, 'token_str': 'fashion', 'sequence': "hello i'm a fashion model."}, {'score': 0.03474348038434982, 'token': 2449, 'token_str': 'business', 'sequence': "hello i'm a business model."}, {'score': 0.034622881561517715, 'token': 2944, 'token_str': 'model', 'sequence': "hello i'm a model model."}, {'score': 0.01814521849155426, 'token': 11643, 'token_str': 'modeling', 'sequence': "hello i'm a modeling model."}]
```
cc @sgugger @ydshieh
PS: I am unsure about the failing CI tests; they pass on my local machine, and the error does not give a proper traceback 🤔
```
[gw0] linux -- Python 3.7.12 /home/circleci/.pyenv/versions/3.7.12/bin/python
worker 'gw0' crashed while running 'tests/models/distilbert/test_modeling_distilbert.py::DistilBertModelTest::test_load_with_mismatched_shapes'
=========== xdist: worker gw0 crashed and worker restarting disabled ===========
```
I see the same CI failure message in https://github.com/huggingface/transformers/pull/19946 and https://github.com/huggingface/transformers/pull/19975, so it may be unrelated to this PR, but I am not sure.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19966/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19966",
"html_url": "https://github.com/huggingface/transformers/pull/19966",
"diff_url": "https://github.com/huggingface/transformers/pull/19966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19966.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19965
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19965/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19965/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19965/events
|
https://github.com/huggingface/transformers/issues/19965
| 1,428,667,604
|
I_kwDOCUB6oc5VJ7zU
| 19,965
|
Cannot load TensorFlow model from PyTorch weights split to multiple files
|
{
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
| null |
[] |
[
"No this is not supported yet, we'll work on adding support for this later on :-)",
"@sgugger Great to hear! 🔝 Feel free then to close this issue if redundant, thanks! :] ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Haven't forgotten, I plan to look into this in December :-)"
] | 1,667
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117
- Tensorflow version (GPU?): 2.9.2
### Who can help?
@LysandreJik @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
$ git clone https://github.com/stancld/transformers.git -b tf_longt5
$ cd transformers
$ pip install -e .
$ python
```
```python
>>> from transformers import TFLongT5ForConditionalGeneration
>>> m = TFLongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-xl", from_pt=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/transformers/src/transformers/modeling_tf_utils.py", line 2613, in from_pretrained
raise EnvironmentError(
OSError: google/long-t5-tglobal-xl does not appear to have a file named tf_model.h5 or pytorch_model.bin.
>>> m = TFLongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-xl", from_flax=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/transformers/src/transformers/modeling_tf_utils.py", line 2613, in from_pretrained
raise EnvironmentError(
OSError: google/long-t5-tglobal-xl does not appear to have a file named tf_model.h5 or pytorch_model.bin.
```
### Expected behavior
Being able to load a TensorFlow model from a PyTorch checkpoint that has been split across multiple files due to its large size.
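In the meantime, a possible workaround is to merge the PyTorch shards into a single file and load from a local directory. A minimal sketch, assuming a local clone of the checkpoint (paths are illustrative):
```python
import json
import torch

# The sharded checkpoint ships an index file mapping each weight to its shard
with open("long-t5-tglobal-xl/pytorch_model.bin.index.json") as f:
    weight_map = json.load(f)["weight_map"]

# Merge all shards into a single state dict
state_dict = {}
for shard in sorted(set(weight_map.values())):
    state_dict.update(torch.load(f"long-t5-tglobal-xl/{shard}", map_location="cpu"))

# Write a single pytorch_model.bin so that `from_pt=True` can find it
torch.save(state_dict, "long-t5-tglobal-xl/pytorch_model.bin")
```
After this, `from_pretrained("long-t5-tglobal-xl", from_pt=True)` should locate the single-file checkpoint.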
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19965/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19964
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19964/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19964/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19964/events
|
https://github.com/huggingface/transformers/pull/19964
| 1,428,618,447
|
PR_kwDOCUB6oc5B0gK3
| 19,964
|
Removed mt5 dependency on t5
|
{
"login": "harry7337",
"id": 75776208,
"node_id": "MDQ6VXNlcjc1Nzc2MjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/75776208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harry7337",
"html_url": "https://github.com/harry7337",
"followers_url": "https://api.github.com/users/harry7337/followers",
"following_url": "https://api.github.com/users/harry7337/following{/other_user}",
"gists_url": "https://api.github.com/users/harry7337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harry7337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harry7337/subscriptions",
"organizations_url": "https://api.github.com/users/harry7337/orgs",
"repos_url": "https://api.github.com/users/harry7337/repos",
"events_url": "https://api.github.com/users/harry7337/events{/privacy}",
"received_events_url": "https://api.github.com/users/harry7337/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19964). All of your documentation changes will be reflected on that endpoint.",
"Creating this PR after a long time because of a tensorflow AVX problem on my system due to which I couldn't run any tests. Currently working on my friend's laptop and making changes:)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19303. Removes the dependency of mt5 on t5.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19964/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19964",
"html_url": "https://github.com/huggingface/transformers/pull/19964",
"diff_url": "https://github.com/huggingface/transformers/pull/19964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19964.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19963
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19963/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19963/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19963/events
|
https://github.com/huggingface/transformers/pull/19963
| 1,428,339,140
|
PR_kwDOCUB6oc5Bzks_
| 19,963
|
Generate: contrastive search with full optional outputs
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,667
| 1,667
| 1,667
|
MEMBER
| null |
# What does this PR do?
This PR further massages PT's `contrastive_search` ahead of the TF conversion. It makes the following modifications:
1. Pipes additional outputs that were missing (e.g. when `output_attentions` is `True`)
2. Rewrites part of the input replication to share logic with beam search -- replicating the input for `top_k` candidates is the same as replicating the input for `num_beams`
3. Removes additional redundant/unused operations
4. Because we now have all outputs (see 1), adds the standard suite of tests for a generation method
5. Moves integration tests to the corresponding model folder
All tests passing locally (`RUN_SLOW=1 py.test tests/* -k contrastive -vv`)
After this PR, we can start with the TF conversion.
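For reference, contrastive search is triggered through `generate` via the `penalty_alpha` and `top_k` arguments; a minimal usage sketch (checkpoint and prompt are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("DeepMind Company is", return_tensors="pt")
# penalty_alpha > 0 together with top_k > 1 activates contrastive search
outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```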
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19963/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19963",
"html_url": "https://github.com/huggingface/transformers/pull/19963",
"diff_url": "https://github.com/huggingface/transformers/pull/19963.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19963.patch",
"merged_at": 1667326536000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19962
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19962/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19962/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19962/events
|
https://github.com/huggingface/transformers/pull/19962
| 1,428,259,656
|
PR_kwDOCUB6oc5BzUOR
| 19,962
|
Add Onnx Config for PoolFormer
|
{
"login": "BakingBrains",
"id": 51019420,
"node_id": "MDQ6VXNlcjUxMDE5NDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/51019420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakingBrains",
"html_url": "https://github.com/BakingBrains",
"followers_url": "https://api.github.com/users/BakingBrains/followers",
"following_url": "https://api.github.com/users/BakingBrains/following{/other_user}",
"gists_url": "https://api.github.com/users/BakingBrains/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakingBrains/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakingBrains/subscriptions",
"organizations_url": "https://api.github.com/users/BakingBrains/orgs",
"repos_url": "https://api.github.com/users/BakingBrains/repos",
"events_url": "https://api.github.com/users/BakingBrains/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakingBrains/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have ran \r\n```RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k \"poolformer\"```\r\n\r\n\r\n",
"Conversion output\r\n\r\n\r\n",
"@ChainYo @lewtun Any suggestions here?\r\n\r\nThanks and Regards.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19962). All of your documentation changes will be reflected on that endpoint.",
"Any updates on this PR?",
"@ChainYo Any updates here?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @lewtun and @michaelbenayoun, we should merge this before the Optimum change. \r\n\r\nCould you help on this?",
"Hi @ChainYo,\r\nThe change is already here, I think the PR can be merged once the conflicts are resolved.\r\nAlso @BakingBrains could you add it to Optimum as well? It should not require much effort. If not, I can make it myself.",
"@michaelbenayoun Sure, I will add it to Optimum. \r\n\r\nThank you",
"I tried to resolve the conflicts, but I think I messed up",
"I just reopened a new pull request for the same with resolved conflicts, can you please check @michaelbenayoun \r\nhttps://github.com/huggingface/transformers/pull/20868\r\n\r\nThank you"
] | 1,667
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16308 (https://github.com/huggingface/transformers/issues/16308)
Adds the changes needed to make PoolFormer models available for ONNX conversion.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ChainYo
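For reference, ONNX export support for a vision model is typically wired up through a small `OnnxConfig` subclass registered in the export machinery. A rough sketch of what such a config looks like (the actual class body in this PR may differ):
```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class PoolFormerOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Vision models consume pixel values with dynamic batch and spatial axes
        return OrderedDict(
            [("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})]
        )

    @property
    def atol_for_validation(self) -> float:
        # Tolerance used when validating ONNX outputs against PyTorch (illustrative value)
        return 1e-4
```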
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19962/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19962/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19962",
"html_url": "https://github.com/huggingface/transformers/pull/19962",
"diff_url": "https://github.com/huggingface/transformers/pull/19962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19962.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19961
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19961/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19961/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19961/events
|
https://github.com/huggingface/transformers/pull/19961
| 1,428,242,117
|
PR_kwDOCUB6oc5BzQts
| 19,961
|
Update README.md
|
{
"login": "RohitYandigeri",
"id": 91059418,
"node_id": "MDQ6VXNlcjkxMDU5NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/91059418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RohitYandigeri",
"html_url": "https://github.com/RohitYandigeri",
"followers_url": "https://api.github.com/users/RohitYandigeri/followers",
"following_url": "https://api.github.com/users/RohitYandigeri/following{/other_user}",
"gists_url": "https://api.github.com/users/RohitYandigeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RohitYandigeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RohitYandigeri/subscriptions",
"organizations_url": "https://api.github.com/users/RohitYandigeri/orgs",
"repos_url": "https://api.github.com/users/RohitYandigeri/repos",
"events_url": "https://api.github.com/users/RohitYandigeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/RohitYandigeri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19961). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,667
| 1,670
| 1,670
|
NONE
| null |
[](https://workerb.linearb.io/v2/badge/collaboration-page?magicLinkId=Ds1ztbl)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19961/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19961",
"html_url": "https://github.com/huggingface/transformers/pull/19961",
"diff_url": "https://github.com/huggingface/transformers/pull/19961.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19961.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19960
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19960/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19960/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19960/events
|
https://github.com/huggingface/transformers/issues/19960
| 1,428,152,034
|
I_kwDOCUB6oc5VH97i
| 19,960
|
`return_dict` not working in `modeling_t5.py`: I set `return_dict=True` but a tuple is returned
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @CaffreyR 👋 At a first glance at our code base, I don't see how that bug can arise 🤔 Can you share a script or a notebook where the issue can be reproduced?",
"Hi @gante, yes of course! Many thanks! The code is here https://github.com/CaffreyR/FiD with little revision from https://github.com/facebookresearch/FiD. We can see our problem is here https://github.com/CaffreyR/FiD/blob/main/train_reader.py#L63.\r\n\r\nThe transformer version of this code is different from my experiment.(This is the script that is the easiest for you to produce). Please follow the steps on `readme` on https://github.com/facebookresearch/FiD#download-data to prepare the data(a bit large). And try to run \r\n```\r\npython train_reader.py \\\r\n --use_checkpoint \\\r\n --train_data open_domain_data/NQ/train.json \\ # after we preparing the data\r\n --eval_data open_domain_data/NQ/dev.json\\ # after we preparing the data\r\n --model_size base \\\r\n --per_gpu_batch_size 1 \\\r\n --n_context 100 \\\r\n --name my_experiment \\\r\n --checkpoint_dir checkpoint \\\r\n```\r\nThis data set is `NaturalQuestions`, it is little tricky to get the data prepared. So I am very grateful for your help!:)\r\n\r\n\r\nThank you very much!\r\n",
"Hey @CaffreyR -- with a long script it's hard to pinpoint the issue :) We need a short reproducible script, otherwise we will not prioritize this issue.",
"Hi @gante , it is very interesting that I try to use this code and it runs successfully. The batch is the same from FID, only the model is different. The original facebook code inherited and nested the t5 model.\r\n\r\n```\r\nimport torch\r\nimport transformers\r\nmodel = transformers.T5ForConditionalGeneration.from_pretrained('t5-base')\r\n# model = src.model.FiDT5(t5.config)\r\n# model.load_t5(t5.state_dict())\r\ncontext_ids=torch.tensor([[[ 822, 10, 3, 9, 538, 213, 1442, 9481, 1936, 10687,\r\n 999, 2233, 10, 1862, 12197, 16, 1547, 2625, 10, 1862,\r\n 12197, 16, 1547, 37, 1862, 12197, 16, 1547, 2401, 7,\r\n 12, 3, 9, 1059, 116, 2557, 11402, 47, 12069, 139,\r\n 46, 2913, 358, 788, 12, 8, 9284, 13, 941, 2254,\r\n 11, 748, 224, 38, 8, 169, 13, 306, 6339, 53,\r\n 1196, 41, 15761, 553, 61, 7299, 6, 3, 29676, 6,\r\n 21455, 2465, 6, 6256, 9440, 7, 6, 11, 20617, 277,\r\n 5, 100, 47, 294, 13, 8, 2186, 1862, 9481, 14310,\r\n 16781, 57, 13615, 7254, 40, 402, 122, 6, 84, 11531,\r\n 26, 10687, 585, 11, 748, 12, 993, 10687, 7596, 16,\r\n 8, 2421, 296, 5, 37, 1862, 12197, 441, 1547, 3,\r\n 28916, 16, 8, 778, 8754, 7, 24, 2237, 12, 46,\r\n 993, 16, 542, 8273, 999, 6, 902, 16, 27864, 6,\r\n 3504, 21247, 6, 11, 31251, 22660, 5, 1, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]])\r\n\r\nlabels=torch.tensor([[1547, 1]])\r\ncontext_mask=torch.tensor([[[ True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, True, True,\r\n True, True, True, True, True, True, True, True, False, False,\r\n False, False, False, False, False, False, False, False, False, False,\r\n False, False, False, False, False, False, False, False, False, False,\r\n False, False, False, False, False, False, False, False, False, False,\r\n False, False, False, False, False, False, False, False, False, False,\r\n False, False, False, False, False, False, False, False, False, False]]])\r\n\r\n# print(context_ids)\r\n# print(labels)\r\n# print(context_mask)\r\nn_layers, n_heads = 12, 12\r\nhead_importance = torch.zeros(n_layers, n_heads).to('cpu')\r\nattn_entropy = torch.zeros(n_layers, n_heads).to('cpu')\r\nhead_mask = torch.ones(n_layers, n_heads).to('cpu')\r\nhead_mask.requires_grad_(requires_grad=True)\r\ndecoder_head_mask = torch.ones(n_layers, n_heads).to('cpu')\r\ndecoder_head_mask.requires_grad_(requires_grad=True)\r\n\r\nif context_ids != None:\r\n # inputs might have already be resized in the generate method\r\n # if context_ids.dim() == 3:\r\n # self.encoder.n_passages = context_ids.size(1)\r\n context_ids = context_ids.view(context_ids.size(0), -1)\r\nif context_mask != None:\r\n context_mask = 
context_mask.view(context_mask.size(0), -1)\r\n\r\noutputs = model.forward(\r\n input_ids=context_ids,\r\n attention_mask=context_mask,\r\n labels=labels,\r\n return_dict=True,\r\n head_mask=head_mask,\r\n decoder_head_mask=decoder_head_mask\r\n )\r\n\r\n# outputs = model(\r\n# input_ids=context_ids.cuda(),\r\n# attention_mask=context_mask.cuda(),\r\n# labels=labels.cuda(),\r\n# return_dict=True,\r\n# head_mask=head_mask.cuda(),\r\n# decoder_head_mask=decoder_head_mask.cuda()\r\n# )\r\nprint(outputs)\r\n```\r\n\r\nIt might be the problem of inheriting, I don't know, it just different when I try to simplify the code. :(\r\n```\r\n def forward(self, input_ids=None, attention_mask=None, **kwargs):\r\n if input_ids != None:\r\n # inputs might have already be resized in the generate method\r\n if input_ids.dim() == 3:\r\n self.encoder.n_passages = input_ids.size(1)\r\n input_ids = input_ids.view(input_ids.size(0), -1)\r\n if attention_mask != None:\r\n attention_mask = attention_mask.view(attention_mask.size(0), -1)\r\n return super().forward(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n **kwargs\r\n )\r\n```\r\n",
"@CaffreyR then it's almost surely an upstream problem -- I noticed it uses `transformers==3.0.2`, which may explain the issue you're seeing :) \r\n\r\nWhile I can't provide support in these situations (the problem is not present in `transformers`), my advice would be to open an issue in FID and/or to try to monkey-patch their problematic model code.",
"OK then, I will give it a try ! Thanks!!!"
] | 1,667
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.22.1
- Platform: Linux-5.13.0-48-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten Many thanks!
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am using the code from Facebook Research's [FiD](https://github.com/facebookresearch/FiD), and I am trying to run this code:
```python
for i, batch in enumerate(dataloader):
    (idx, labels, _, context_ids, context_mask) = batch
    outputs = model(
        input_ids=context_ids.cuda(),
        attention_mask=context_mask.cuda(),
        labels=labels.cuda(),
        return_dict=True,
        head_mask=head_mask,
        decoder_head_mask=decoder_head_mask
    )
```
And it reports an error:
```
File "/home/user/anaconda3/envs/uw/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1695, in forward
encoder_last_hidden_state=encoder_outputs.last_hidden_state,
AttributeError: 'tuple' object has no attribute 'last_hidden_state'
```
So I went to this line to inspect the T5 encoder output:
https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/models/t5/modeling_t5.py#L1609
and instrumented it with this code:
```python
encoder_outputs = self.encoder(
    input_ids=input_ids,
    attention_mask=attention_mask,
    inputs_embeds=inputs_embeds,
    head_mask=head_mask,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
    return_dict=return_dict,
)
print(type(encoder_outputs), "@@@", return_dict)
```
### Expected behavior
It prints `<class 'tuple'> @@@ True`: even though I set `return_dict=True`, a tuple is returned.
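For what it's worth, a defensive workaround sketch (not the root-cause fix, which per the comments above was an old `transformers==3.0.2` pin in the downstream code) is to wrap a tuple-shaped encoder output in `BaseModelOutput` before the decoder consumes it:
```python
from transformers.modeling_outputs import BaseModelOutput

if not isinstance(encoder_outputs, BaseModelOutput):
    encoder_outputs = BaseModelOutput(
        last_hidden_state=encoder_outputs[0],
        hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
        attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
    )
```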
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19960/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19959
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19959/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19959/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19959/events
|
https://github.com/huggingface/transformers/issues/19959
| 1,427,873,857
|
I_kwDOCUB6oc5VG6BB
| 19,959
|
Training using accelerate and deepspeed with ZeRO results in model weights mismatch
|
{
"login": "JohnnyRacer",
"id": 77214388,
"node_id": "MDQ6VXNlcjc3MjE0Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/77214388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnnyRacer",
"html_url": "https://github.com/JohnnyRacer",
"followers_url": "https://api.github.com/users/JohnnyRacer/followers",
"following_url": "https://api.github.com/users/JohnnyRacer/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnnyRacer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnnyRacer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnnyRacer/subscriptions",
"organizations_url": "https://api.github.com/users/JohnnyRacer/orgs",
"repos_url": "https://api.github.com/users/JohnnyRacer/repos",
"events_url": "https://api.github.com/users/JohnnyRacer/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnnyRacer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for the report!\r\n\r\nCc @ArthurZucker\r\nThe OPT tokenizer does not have the same length (50265) as the model embeddings (50272), which causes problems with all our language modeling fine-tuning scripts where there is an automatic resize of the model embeddings to the tokenizer length.\r\nI'm guessing this is to get to a multiple of 8, but there should be fake tokens in the tokenizer to accommodate that maybe.\r\n\r\n@JohnnyRacer If you remove the line [here](https://github.com/huggingface/transformers/blob/c87ae86a8f9f56ae193461fa3db6dc20f80eabe4/examples/pytorch/language-modeling/run_clm_no_trainer.py#L381) in the example you're using, you won't have any problem.",
"I see. We should probably have all tokenizers and models have the same embed dim? Seems like people often ask the question and it's a bit confusing + could be good for zero shot learning if we have extract fake tokens. \nWDYT? ",
"In those cases, I think we add fake tokens to the tokenizer. cc @LysandreJik to make sure I'm not saying something wrong.\r\n\r\n**Edit:** Actually talked to him and we can fix the example instead. Will make a PR later today.",
"Should be now fixed by the above PR!"
] | 1,666
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am currently trying to use DeepSpeed to fine-tune an `AutoModelForCausalLM` model (facebook/opt-1.3b) on a multi-GPU instance with ZeRO optimization, using the unmodified `run_clm_no_trainer.py` script from [this blog post on HF](https://huggingface.co/blog/pytorch-fsdp). The model trains correctly, but loading the resulting checkpoint with the code snippet below fails.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=True)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
model.load_state_dict(torch.load("./opt-1.3b-wikitext/pytorch_model.bin"))
```
It fails with the following error:
```
RuntimeError: Error(s) in loading state_dict for OPTForCausalLM:
size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([50265, 2048])
from checkpoint, the shape in current model is torch.Size([50272, 2048]).
size mismatch for lm_head.weight: copying a param with shape torch.Size([50265, 2048]) from checkpoint, the
shape in current model is torch.Size([50272, 2048]).
```
This is very confusing, since the model does not raise any errors about the weights during training, even over multiple epochs. I have tried a different optimizer as well as disabling mixed and half precision, but the error persists. I am unsure whether this is a bug or a misconfiguration on my end; any help would be greatly appreciated.
My ds_config:
```python
{
    'train_batch_size': 'auto',
    'train_micro_batch_size_per_gpu': 'auto',
    'gradient_accumulation_steps': 1,
    'zero_optimization': {
        'stage': 2,
        'offload_optimizer': {'device': 'none'},
        'offload_param': {'device': 'none'},
        'stage3_gather_16bit_weights_on_model_save': False,
    },
    'steps_per_print': 'inf',
    'fp16': {'enabled': True, 'auto_cast': True},
}
```
My training command:
```
accelerate launch run_clm_no_trainer.py \
--model_name_or_path facebook/opt-1.3b \
--dataset_name wikitext \
--num_train_epochs 6 \
--block_size 128 \
--output_dir ./opt-1.3b-wikitext
```
### Expected behavior
Models trained using accelerate should be loadable using `model.load_state_dict`.
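In the meantime, a minimal sketch of a workaround, assuming the checkpoint was saved after the example script resized the embeddings to the tokenizer length (50265):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=True)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
# Shrink the embeddings to the tokenizer length so the shapes match the
# fine-tuned checkpoint (50265 rows instead of 50272).
model.resize_token_embeddings(len(tokenizer))
model.load_state_dict(torch.load("./opt-1.3b-wikitext/pytorch_model.bin"))
```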
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19959/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19958
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19958/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19958/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19958/events
|
https://github.com/huggingface/transformers/issues/19958
| 1,427,831,737
|
I_kwDOCUB6oc5VGvu5
| 19,958
|
Using GPT2 tokenizer with DataCollatorForLanguageModeling
|
{
"login": "martinez-zacharya",
"id": 36873191,
"node_id": "MDQ6VXNlcjM2ODczMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/36873191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martinez-zacharya",
"html_url": "https://github.com/martinez-zacharya",
"followers_url": "https://api.github.com/users/martinez-zacharya/followers",
"following_url": "https://api.github.com/users/martinez-zacharya/following{/other_user}",
"gists_url": "https://api.github.com/users/martinez-zacharya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martinez-zacharya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martinez-zacharya/subscriptions",
"organizations_url": "https://api.github.com/users/martinez-zacharya/orgs",
"repos_url": "https://api.github.com/users/martinez-zacharya/repos",
"events_url": "https://api.github.com/users/martinez-zacharya/events{/privacy}",
"received_events_url": "https://api.github.com/users/martinez-zacharya/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The data collator will expect a list of samples from a torch Dataset, but you are passing a single dictionary to it (the output of the tokenizer is one dictionary, with the keys being the arguments expected by the models like `input_ids`, and the values being tensors).\r\n\r\nYou can actually do directly `model(**tokens)`, you don't need the data collator on this example.",
"Thank you very much for your quick response! I'm simply trying to fine-tune the model with CLM, so I figured I needed to use the datacollator."
] | 1,666
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.10.0-18-amd64-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@SaulLu @patil-suraj @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling
model = AutoModelForCausalLM.from_pretrained("nferruz/ProtGPT2")
tokenizer = AutoTokenizer.from_pretrained("nferruz/ProtGPT2")
tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer = tokenizer, mlm=False)
list_of_seqs = ['GLWSKIKEVGKEAAKAAAKAAGKAALGAVSEAV', 'DGVKLCDVPSGTWSGHCGSSSKCSQQCKDREHFAYGGACHYQFPSVKCFCKRQC']
tokens = tokenizer(list_of_seqs, padding = True, return_tensors='pt', return_special_tokens_mask=True)
collated = data_collator(tokens)
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zachmartinez/miniconda3/lib/python3.9/site-packages/transformers/data/data_collator.py", line 42, in __call__
return self.torch_call(features)
File "/home/zachmartinez/miniconda3/lib/python3.9/site-packages/transformers/data/data_collator.py", line 732, in torch_call
"input_ids": _torch_collate_batch(examples, self.tokenizer, pad_to_multiple_of=self.pad_to_multiple_of)
File "/home/zachmartinez/miniconda3/lib/python3.9/site-packages/transformers/data/data_collator.py", line 404, in _torch_collate_batch
length_of_first = examples[0].size(0)
AttributeError: 'tokenizers.Encoding' object has no attribute 'size'
```
### Expected behavior
I would expect the collator to accept the tokenizer output and collate it into a batch. Any help would be appreciated.
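A minimal sketch of the pattern the collator does accept, a list of per-sample feature dicts as a torch Dataset would yield (`samples` is a name introduced here for illustration):
```python
# Split the batch-level BatchEncoding into one dict per sequence.
samples = [{k: v[i] for k, v in tokens.items()} for i in range(len(list_of_seqs))]
collated = data_collator(samples)  # pads and stacks into model-ready tensors
```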
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19958/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19957
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19957/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19957/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19957/events
|
https://github.com/huggingface/transformers/issues/19957
| 1,427,714,002
|
I_kwDOCUB6oc5VGS_S
| 19,957
|
KeyError: 'eval_loss'
|
{
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It doesn't look like you are providing any labels to your model (your functions preparing the dataset do not generate any `start_positions` and `end_positions`). More generally, please use the [forums](https://discuss.huggingface.co/) for any help to debug your code, as we keep the issues for bugs and feature requests only.",
"```\r\nfrom transformers import AutoModelForQuestionAnswering\r\nfrom transformers import TrainingArguments\r\nfrom transformers import Trainer\r\nfrom tqdm.auto import tqdm\r\nfrom transformers import AutoTokenizer\r\n\r\nimport numpy as np\r\nimport collections\r\nimport evaluate\r\nmetric = evaluate.load(\"squad\")\r\n\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\n\r\n\r\n\r\n\r\n\r\nfrom datasets import load_dataset\r\n \r\ndef sapl(data, n, split):\r\n data_sampled = data[split].shuffle(seed=42).select(range(n))\r\n return data_sampled\r\n \r\n \r\nfrom datasets import load_dataset\r\nraw_datasets = load_dataset('squad')\r\nraw_train = sapl(raw_datasets, 100, 'train') # 100 samples\r\nraw_test = sapl(raw_datasets, 100, 'validation') # 100 samples\r\n\r\n\r\n\r\nn_best = 20\r\nmax_answer_length = 30\r\npredicted_answers = []\r\n\r\n\r\nclass QA_pipeline(object):\r\n \r\n \r\n def __init__(self, model_name, \r\n device = 'cuda', \r\n max_length = 512, \r\n stride = 128):\r\n \r\n \r\n self.model_name = model_name\r\n self.device = device\r\n self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)\r\n self.model = AutoModelForQuestionAnswering.from_pretrained(\r\n self.model_name).to(self.device)\r\n \r\n self.max_length = max_length\r\n self.stride = stride\r\n \r\n \r\n \r\n def _tokenization_train2(self, examples):\r\n questions = [q.strip() for q in examples[\"question\"]]\r\n inputs = tokenizer(\r\n questions,\r\n examples[\"context\"],\r\n max_length=self.max_length,\r\n truncation=\"only_second\",\r\n stride=self.stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n \r\n return inputs\r\n \r\n \r\n \r\n def _tokenization_train(self, examples):\r\n questions = [q.strip() for q in examples[\"question\"]]\r\n inputs = self.tokenizer(\r\n questions,\r\n examples[\"context\"],\r\n max_length=self.max_length,\r\n truncation=\"only_second\",\r\n stride=self.stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n\r\n offset_mapping = inputs.pop(\"offset_mapping\")\r\n sample_map = inputs.pop(\"overflow_to_sample_mapping\")\r\n answers = examples[\"answers\"]\r\n start_positions = []\r\n end_positions = []\r\n\r\n for i, offset in enumerate(offset_mapping):\r\n sample_idx = sample_map[i]\r\n answer = answers[sample_idx]\r\n start_char = answer[\"answer_start\"][0]\r\n end_char = answer[\"answer_start\"][0] + len(answer[\"text\"][0])\r\n sequence_ids = inputs.sequence_ids(i)\r\n\r\n # Find the start and end of the context\r\n idx = 0\r\n while sequence_ids[idx] != 1:\r\n idx += 1\r\n context_start = idx\r\n while sequence_ids[idx] == 1:\r\n idx += 1\r\n context_end = idx - 1\r\n\r\n # If the answer is not fully inside the context, label is (0, 0)\r\n if offset[context_start][0] > start_char or offset[context_end][1] < end_char:\r\n start_positions.append(0)\r\n end_positions.append(0)\r\n else:\r\n # Otherwise it's the start and end token positions\r\n idx = context_start\r\n while idx <= context_end and offset[idx][0] <= start_char:\r\n idx += 1\r\n start_positions.append(idx - 1)\r\n\r\n idx = context_end\r\n while idx >= context_start and offset[idx][1] >= end_char:\r\n idx -= 1\r\n end_positions.append(idx + 1)\r\n\r\n inputs[\"start_positions\"] = start_positions\r\n inputs[\"end_positions\"] = end_positions\r\n return inputs\r\n \r\n \r\n def _tokenization_validation(self, examples):\r\n questions = [q.strip() for q in 
examples[\"question\"]]\r\n inputs = self.tokenizer(\r\n questions,\r\n examples[\"context\"],\r\n max_length=self.max_length,\r\n truncation=\"only_second\",\r\n stride=self.stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n\r\n sample_map = inputs.pop(\"overflow_to_sample_mapping\")\r\n example_ids = []\r\n\r\n for i in range(len(inputs[\"input_ids\"])):\r\n sample_idx = sample_map[i]\r\n example_ids.append(examples[\"id\"][sample_idx])\r\n\r\n sequence_ids = inputs.sequence_ids(i)\r\n offset = inputs[\"offset_mapping\"][i]\r\n inputs[\"offset_mapping\"][i] = [\r\n o if sequence_ids[k] == 1 else None for k, o in enumerate(offset)\r\n ]\r\n\r\n inputs[\"example_id\"] = example_ids\r\n return inputs\r\n \r\n \r\n \r\n def get_train_dataset(self, train_dataset):\r\n train_dataset = train_dataset.map(self._tokenization_train,\r\n batched=True,\r\n remove_columns=train_dataset.column_names,)\r\n \r\n print(len(train_dataset), len(train_dataset))\r\n return train_dataset\r\n \r\n \r\n \r\n def get_val_dataset(self, val_dataset):\r\n \r\n validation_dataset = val_dataset.map(\r\n self._tokenization_validation,\r\n batched=True,\r\n remove_columns=val_dataset.column_names,)\r\n \r\n print(len(val_dataset), len(validation_dataset))\r\n return validation_dataset\r\n \r\n \r\n \r\n def compute_metrics_eval(self, eval_pred):\r\n print(\"it is working\")\r\n print(eval_pred)\r\n\r\n\r\n\r\n def compute_metrics(self, start_logits, end_logits, features, examples):\r\n example_to_features = collections.defaultdict(list)\r\n for idx, feature in enumerate(features):\r\n example_to_features[feature[\"example_id\"]].append(idx)\r\n\r\n predicted_answers = []\r\n for example in tqdm(examples):\r\n example_id = example[\"id\"]\r\n context = example[\"context\"]\r\n answers = []\r\n\r\n # Loop through all features associated with that example\r\n for feature_index in example_to_features[example_id]:\r\n start_logit = start_logits[feature_index]\r\n end_logit = end_logits[feature_index]\r\n offsets = features[feature_index][\"offset_mapping\"]\r\n\r\n start_indexes = np.argsort(start_logit)[-1 : -n_best - 1 : -1].tolist()\r\n end_indexes = np.argsort(end_logit)[-1 : -n_best - 1 : -1].tolist()\r\n for start_index in start_indexes:\r\n for end_index in end_indexes:\r\n # Skip answers that are not fully in the context\r\n if offsets[start_index] is None or offsets[end_index] is None:\r\n continue\r\n # Skip answers with a length that is either < 0 or > max_answer_length\r\n if (\r\n end_index < start_index\r\n or end_index - start_index + 1 > max_answer_length\r\n ):\r\n continue\r\n\r\n answer = {\r\n \"text\": context[offsets[start_index][0] : offsets[end_index][1]],\r\n \"logit_score\": start_logit[start_index] + end_logit[end_index],\r\n }\r\n answers.append(answer)\r\n\r\n # Select the answer with the best score\r\n if len(answers) > 0:\r\n best_answer = max(answers, key=lambda x: x[\"logit_score\"])\r\n predicted_answers.append(\r\n {\"id\": example_id, \"prediction_text\": best_answer[\"text\"]}\r\n )\r\n else:\r\n predicted_answers.append({\"id\": example_id, \"prediction_text\": \"\"})\r\n\r\n theoretical_answers = [{\"id\": ex[\"id\"], \"answers\": ex[\"answers\"]} for ex in examples]\r\n return metric.compute(predictions=predicted_answers, references=theoretical_answers)\r\n\r\n \r\n \r\n def training(self, train_dataset, val_dataset, epochs = 2):\r\n \r\n self.args = TrainingArguments(f'{self.model_name}_training',\r\n logging_steps = 
1,\r\n learning_rate=2e-5,\r\n num_train_epochs=epochs,\r\n \r\n \r\n save_total_limit = 2,\r\n save_strategy = \"epoch\",\r\n load_best_model_at_end=True,\r\n \r\n evaluation_strategy = \"epoch\", #To calculate metrics per epoch\r\n logging_strategy=\"epoch\", #Extra: to log training data stats for loss \r\n weight_decay=0.01,\r\n fp16=True,\r\n push_to_hub=False)\r\n \r\n self.trainer = Trainer(model = self.model,\r\n args = self.args,\r\n compute_metrics=self.compute_metrics_eval,\r\n train_dataset = train_dataset,\r\n eval_dataset = val_dataset,\r\n tokenizer = self.tokenizer,)\r\n self.trainer.train()\r\n self.trainer.save_model()\r\n \r\n \r\n def validation(self, val_dataset, raw_val_dataset):\r\n \r\n self.trainer = Trainer(model=self.model)\r\n self.trainer.model = self.model.cuda()\r\n\r\n predictions, _, _ = self.trainer.predict(val_dataset)\r\n start_logits, end_logits = predictions\r\n output = self.compute_metrics(start_logits, end_logits, val_dataset, raw_val_dataset)\r\n return output\r\n```\r\n \r\n```\r\npipe = QA_pipeline(\"emilyalsentzer/Bio_ClinicalBERT\", device = 'cuda:0')\r\ntrain_d = pipe.get_train_dataset(raw_train)\r\nval_d = pipe.get_val_dataset(raw_test)\r\n\r\nprint(pipe.validation(val_d, raw_test))\r\npipe.training(train_d, val_d, epochs = 5)\r\n```\r\n\r\nThat is my code, while `load_best_model_at_end=True` it's giving error ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@monk1337 It is actually very helpful just to know that this was initiated by using `load_best_model_at_end=True`! I was struggling with the same error. Using `load_best_model_at_end=False` solved it for me.\r\n\r\nHowever, this shouldn't be necessary. @sgugger this is a bug."
] | 1,666
| 1,684
| 1,670
|
NONE
| null |
I am trying to build a question answering pipeline with the Hugging Face framework but am running into a `KeyError: 'eval_loss'` error. My goal is to train, save the best model at the end, and evaluate the validation set with the loaded model. My trainer configuration looks like this:
```python
args = TrainingArguments(
    f'model_training',
    evaluation_strategy="epoch",
    label_names=["start_positions", "end_positions"],
    logging_steps=1,
    learning_rate=2e-5,
    num_train_epochs=epochs,
    save_total_limit=2,
    load_best_model_at_end=True,
    save_strategy="epoch",
    logging_strategy="epoch",
    report_to="none",
    weight_decay=0.01,
    fp16=True,
    push_to_hub=False,
)
```
While training, I get this error:
```
Traceback (most recent call last):
  File "qa_pipe.py", line 286, in <module>
    pipe.training(train_d, val_d, epochs = 2)
  File "qa_pipe.py", line 263, in training
    self.trainer.train()
  File "/home/admin/qa/lib/python3.7/site-packages/transformers/trainer.py", line 1505, in train
    ignore_keys_for_eval=ignore_keys_for_eval,
  File "/home/admin/qa/lib/python3.7/site-packages/transformers/trainer.py", line 1838, in _inner_training_loop
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
  File "/home/admin/qa/lib/python3.7/site-packages/transformers/trainer.py", line 2090, in _maybe_log_save_evaluate
    self._save_checkpoint(model, trial, metrics=metrics)
  File "/home/admin/qa/lib/python3.7/site-packages/transformers/trainer.py", line 2193, in _save_checkpoint
    metric_value = metrics[metric_to_check]
KeyError: 'eval_loss'
```
The minimal working example is provided on [colab][1]
How to avoid this error and save the best model at last?
[1]: https://colab.research.google.com/drive/1JNHK8CnMHTm6VMukvDFJq8nvaHhBkxgM?usp=sharing
### Who can help?
@LysandreJik
@sgugger
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1JNHK8CnMHTm6VMukvDFJq8nvaHhBkxgM?usp=sharing
### Expected behavior
It should run without the error.
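One hedged workaround, sketched against the code pasted in the comments above (the `pipe`, `raw_test`, and `train_d` names come from that snippet): tokenize the evaluation split with the training preprocessing so it carries `start_positions`/`end_positions`, which lets the `Trainer` compute `eval_loss`.
```python
# Build an eval set that includes answer labels so eval_loss exists.
val_with_labels = raw_test.map(
    pipe._tokenization_train,
    batched=True,
    remove_columns=raw_test.column_names,
)
pipe.training(train_d, val_with_labels, epochs=2)
```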
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19957/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19956
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19956/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19956/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19956/events
|
https://github.com/huggingface/transformers/pull/19956
| 1,427,634,295
|
PR_kwDOCUB6oc5BxQtd
| 19,956
|
Add TF image classification example script
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Still no good, I scanned the test fetcher and found a potential bug. COuld you add\r\n```py\r\n elif f.startswith(\"examples/tensorflow\"):\r\n test_files_to_run.append(\"examples/flax/test_tensorflow_examples.py\")\r\n```\r\nat the line 562 of `utils/test_fetcher.py` (in-between PyTorch and Flax)? I think that's what causing the issue of the tests not running.",
"> It looks like your branch is a bit old and does not contain some fixes made to make sure the example tests run when an example is modified (you can see the test examples are not running here 😅 ). Could you try a rebase on main?\r\n\r\n@sgugger I've rebased from upstream main and force pushed again. If I run `git log --oneline` I can see these changes are applied on top of the tip of main. \r\n```\r\n270bfb056 (HEAD -> add-examples-tf-image-classification, origin/add-examples-tf-image-classification) Add tests\r\na2256258b Fix up\r\n1a1594cb8 Update requirements\r\nb6a2f1ef9 TF image classification script\r\n9ccea7acb (upstream/main, main) Fix some doctests after PR 15775 (#20036)\r\na639ea9e8 Add **kwargs (#20037)\r\nec6878f6c Now supporting pathlike in pipelines too. (#20030)\r\naa39967b2 reorganize glossary (#20010)\r\n305e8718b Show installed libraries and their versions in CI jobs (#20026)\r\n```\r\n\r\nThe examples still aren't running and I can't see in the diff where I could be overriding this 😅 I'm sure there's something I'm overlooking. I'll keep digging but let me know if there's something else I should be doing. Sorry to bother.",
"@sgugger - sorry late on the comment as well. I'll add your suggestion! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @amyeroberts, is this PR still going ahead? It looked almost ready!",
"@Rocketknight1 Yes - sorry, this fell down my priority list for a bit. Code is all ready to go - I was trying to find models that make the tests run quickly c.f. [this comment](https://github.com/huggingface/transformers/pull/19956#discussion_r1012991408). ",
"@amyeroberts Ah, that makes sense! It's totally okay to upload your own super-mini model and use that - it doesn't really matter if the accuracy is bad, the test will just let us detect if the outputs from this model class change suddenly",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,675
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Adds the TF equivalent for the PyTorch image classification example script.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19956/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19956",
"html_url": "https://github.com/huggingface/transformers/pull/19956",
"diff_url": "https://github.com/huggingface/transformers/pull/19956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19956.patch",
"merged_at": 1675278576000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19955
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19955/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19955/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19955/events
|
https://github.com/huggingface/transformers/pull/19955
| 1,427,558,875
|
PR_kwDOCUB6oc5BxANK
| 19,955
|
changes for mbart_causal_lm
|
{
"login": "amankhandelia",
"id": 7098967,
"node_id": "MDQ6VXNlcjcwOTg5Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7098967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amankhandelia",
"html_url": "https://github.com/amankhandelia",
"followers_url": "https://api.github.com/users/amankhandelia/followers",
"following_url": "https://api.github.com/users/amankhandelia/following{/other_user}",
"gists_url": "https://api.github.com/users/amankhandelia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amankhandelia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amankhandelia/subscriptions",
"organizations_url": "https://api.github.com/users/amankhandelia/orgs",
"repos_url": "https://api.github.com/users/amankhandelia/repos",
"events_url": "https://api.github.com/users/amankhandelia/events{/privacy}",
"received_events_url": "https://api.github.com/users/amankhandelia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sanchit-gandhi please have a look when you have some time to spare, for some reason test did not ran. can you check the same and please re-initialize the test pipeline",
"Might need to fix style according to https://huggingface.co/docs/transformers/pr_checks#code-and-documentation-style",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
# What does this PR do?
This PR adds FlaxMBartForCausalLM, which was previously part of #19831; as discussed [here](https://github.com/huggingface/transformers/issues/19897#issuecomment-1294648919), I am raising a separate PR for it.
The reason I want to add this model is that it is a prerequisite for the Donut model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19955/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19955",
"html_url": "https://github.com/huggingface/transformers/pull/19955",
"diff_url": "https://github.com/huggingface/transformers/pull/19955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19955.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19954
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19954/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19954/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19954/events
|
https://github.com/huggingface/transformers/pull/19954
| 1,427,456,168
|
PR_kwDOCUB6oc5Bwqlc
| 19,954
|
clean up vision/text config dict arguments
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Without this PR, we have somehow surprising/confusing results\r\n\r\n```python\r\nfrom transformers import CLIPConfig, CLIPModel\r\n\r\nconfig = CLIPConfig.from_pretrained(\"openai/clip-vit-base-patch16\")\r\nprint(config.vision_config.patch_size)\r\nprint(config.vision_config_dict[\"patch_size\"])\r\n\r\nconfig.vision_config.patch_size = 32\r\nconfig.save_pretrained(\"v2\")\r\n\r\nconfig_v2 = CLIPConfig.from_pretrained(\"v2\")\r\n# This is not `32` which is unexpected!\r\n# In fact, it is `vision_config_dict` is being used during loading to set `vision_config`\r\nprint(config_v2.vision_config.patch_size)\r\n# This is 32 - unexpected!\r\nprint(config_v2.vision_config_dict[\"patch_size\"])\r\n\r\nconfig.vision_config_dict[\"patch_size\"] = 32\r\nconfig.save_pretrained(\"v3\")\r\n\r\nconfig_v3 = CLIPConfig.from_pretrained(\"v3\")\r\n# This is 32 - unexpected!\r\nprint(config_v3.vision_config.patch_size)\r\n# This is 32 - OK\r\nprint(config_v3.vision_config_dict[\"patch_size\"])\r\n```",
"@sgugger If you are happy with the current change, I will apply the changes to some other models, and the testing files.\r\nSo far it is good even if I don't change `to_dict`. It has already\r\n\r\n```python\r\noutput[\"text_config\"] = self.text_config.to_dict()\r\noutput[\"vision_config\"] = self.vision_config.to_dict()\r\n```",
"Awesome that you are working on fixing this!\r\n\r\nEncountered the same issue with a new model I'm working on called CLIPSeg.\r\n\r\nAlso, could we update `GroupViT` as well? This is also a CLIP-like model."
] | 1,666
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Remove `vision_config_dict` and `text_config_dict`: just use `vision_config` and `text_config`.
- Makes the code base cleaner
- Avoids surprising behavior (see the comment)
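A minimal sketch of the round-trip behavior this change is meant to guarantee (mirroring the reproduction in the comments):
```python
from transformers import CLIPConfig

config = CLIPConfig.from_pretrained("openai/clip-vit-base-patch16")
config.vision_config.patch_size = 32
config.save_pretrained("v2")
# With a single source of truth, the edited value survives a save/load cycle.
assert CLIPConfig.from_pretrained("v2").vision_config.patch_size == 32
```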
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19954/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19954",
"html_url": "https://github.com/huggingface/transformers/pull/19954",
"diff_url": "https://github.com/huggingface/transformers/pull/19954.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19954.patch",
"merged_at": 1667387023000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19953
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19953/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19953/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19953/events
|
https://github.com/huggingface/transformers/pull/19953
| 1,427,444,463
|
PR_kwDOCUB6oc5BwoE5
| 19,953
|
Upload dummy models
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
Upload dummy models
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19953/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19953",
"html_url": "https://github.com/huggingface/transformers/pull/19953",
"diff_url": "https://github.com/huggingface/transformers/pull/19953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19953.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19952
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19952/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19952/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19952/events
|
https://github.com/huggingface/transformers/pull/19952
| 1,427,439,660
|
PR_kwDOCUB6oc5BwnC4
| 19,952
|
Adding EDSR model
|
{
"login": "venkat-natchi",
"id": 115526526,
"node_id": "U_kgDOBuLLfg",
"avatar_url": "https://avatars.githubusercontent.com/u/115526526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/venkat-natchi",
"html_url": "https://github.com/venkat-natchi",
"followers_url": "https://api.github.com/users/venkat-natchi/followers",
"following_url": "https://api.github.com/users/venkat-natchi/following{/other_user}",
"gists_url": "https://api.github.com/users/venkat-natchi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/venkat-natchi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venkat-natchi/subscriptions",
"organizations_url": "https://api.github.com/users/venkat-natchi/orgs",
"repos_url": "https://api.github.com/users/venkat-natchi/repos",
"events_url": "https://api.github.com/users/venkat-natchi/events{/privacy}",
"received_events_url": "https://api.github.com/users/venkat-natchi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I will add the other components based on [this](https://huggingface.co/docs/transformers/add_new_model) page. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @alaradirik and @NielsRogge ",
"Sorry for the delay.\r\nCan some one help me on putting the model file into the organisation space?\r\nThanks",
"> Sorry for the delay. Can some one help me on putting the model file into the organisation space? Thanks\r\n\r\nHi @venkat-natchi, thanks for working on this! \r\n\r\nI can help you with that but I saw that there is no conversion script yet. The conversion script (e.g. convert_original_XXX.py) loads the pre-trained original model and the randomly initialized HF model with the corresponding configuration, and replaces each parameter of the HF model with the corresponding learned parameter of the original model. We also have a convenient `push_to_hub()` method that can be added to the conversion script to create a repo on the hub and push the converted / pre-trained HF model and files. See an example conversion script over [here.](https://github.com/huggingface/transformers/blob/main/src/transformers/models/dpt/convert_dpt_to_pytorch.py)\r\n\r\ncc @sgugger @NielsRogge ",
"@venkat-natchi I guess you also need to rebase your branch on main as TensorFlow new release broke a lot of things so tests won't pass unless you do this.\r\n",
"Thanks guys.!!\r\n\r\nStarted with a convert_script and rebased with main branch. ",
"There is multiprocessing [here](https://github.com/sanghyun-son/EDSR-PyTorch/blob/master/src/dataloader.py) for data loading. I need some help in disengaging it and implement a simple processing step.\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19952). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@alaradirik and @NielsRogge Friendly ping here.",
"Hi @venkat-natchi, would it be possible to rebase your branch on the main branch of transformers?\r\n\r\nThis way, the CI becomes greener, and allows us to review the PR in depth.",
"Sure, will do. Thanks",
"Hello, can I work on this issue? Although I'm new to open-source contributions, I've worked on super-resolution models in the past and I was wondering why HuggingFace did not have these. I am familiar with PyTorch.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> Hello, can I work on this issue? Although I'm new to open-source contributions, I've worked on super-resolution models in the past and I was wondering why HuggingFace did not have these. I am familiar with PyTorch.\r\n\r\nHi @asrimanth, perhaps you could collaborate with @venkat-natchi on this PR if they are okay with it? Super resolution is definitely a task we would like to add to transformers and this would be a great first addition :)",
"Hello @alaradirik, Sure! I am interested. How do I get started?",
"> Hello @alaradirik, Sure! I am interested. How do I get started?\r\n\r\n@venkat-natchi can add you as a contributor to their forked transformers repo and you two could collaborate on this branch if they are okay with it. @venkat-natchi would you prefer to work on the PR on your own or hand it over to @asrimanth instead?\r\n\r\nIn any case, you can refer to the [guidelines](https://huggingface.co/docs/transformers/add_new_model) to get started with adding a model. I'd recommend first checking you can run the original repo without any issues though. Here are some summarized points that might help:\r\n- Each model, including different checkpoints of the same model, has it's own repo on the Hub (see [DETR-ResNet-50 repo](https://huggingface.co/facebook/detr-resnet-50) as an example). This is basically a git repo that stores the checkpoint specific configuration, preprocessing configuration and the model weights.\r\n- The code (this PR) added to transformers acts as a boilerplate to load different checkpoints - EDSR trained on different datasets or with different resolution or larger / smaller architecture.\r\n- configuration_edsr.py should contain all the hyperparameters, the input image size and architectural details (e.g. number of hidden layers) to initialize the model.\r\n- image_processing_edsr.py should contain the ImageProcessor class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.)\r\n- modeling_edsr.py should contain the model definition.\r\n- The conversion script:\r\n - Loads the pretrained original model and randomly initializes the HF implementation with the corresponding configuration\r\n - Copies the pretrained parameters (weights and biases) of the original model to the corresponding parameters of the randomly initialized HF model (the conversion step)\r\n - Forward propagates an arbitrary input through both the original model and converted HF model and checks if the outputs match\r\n - Uploads the converted HF model to the hub\r\n - Each model and image processor class is tested with scripts under `tests/models/<MODEL_NAME>/ `, you can refer to other test files to see what tests to add.\r\n\r\nOnce you are done, you would need to run the following commands to check the PR passes all CI tests:\r\n```\r\nmake style\r\nmake quality\r\nmake repo-consistency\r\n\r\nRUN_SLOW=TRUE pytest tests/models/edsr/test_modeling_edsr.py\r\nRUN_SLOW=TRUE pytest tests/models/edsr/test_image_processor_edsr.py\r\n```\r\n\r\nWe can do an in-depth review once the PR passes most tests or the configuration, preprocessing and modeling is mostly complete.\r\n\r\nHope this helps!",
"Sure, I will add you as collaborator. ",
"Sorry for the delay. \r\n@asrimanth\r\nI added you as a collaborator. ",
"You can find the working version of the original repository here\r\nhttps://github.com/venkat-natchi/EDSR-PyTorch/blob/master/src/trial.py\r\n",
"Hello @alaradirik and the HuggingFace Team, I seem to run into an error where the ```EDSR_PRETRAINED_MODEL_ARCHIVE_LIST``` is empty and this appears to be causing some problems. From my understanding, I have to upload the model weights to a URL and mention it in that list. The existing pre-trained models are as follows: \r\n\r\n```\r\nurl = {\r\n \"r16f64x2\": \"https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x2-1bc95232.pt\",\r\n \"r16f64x3\": \"https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x3-abf2a44e.pt\",\r\n \"r16f64x4\": \"https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x4-6b446fab.pt\",\r\n \"r32f256x2\": \"https://cv.snu.ac.kr/research/EDSR/models/edsr_x2-0edfb8a3.pt\",\r\n \"r32f256x3\": \"https://cv.snu.ac.kr/research/EDSR/models/edsr_x3-ea3ef2c6.pt\",\r\n \"r32f256x4\": \"https://cv.snu.ac.kr/research/EDSR/models/edsr_x4-4f62e9ef.pt\",\r\n}\r\n```\r\n\r\nShould I upload these weights into the hub? If so, should I upload these to my profile? Is there a way to load these weights from the URL like torch.hub.load? Please let me know.",
"> Should I upload these weights into the hub? If so, should I upload these to my profile? Is there a way to load these weights from the URL like torch.hub.load? Please let me know.\r\n\r\nHi @asrimanth , that's correct, the `EDSR_PRETRAINED_MODEL_ARCHIVE_LIST` contains the links to the uploaded checkpoints's configuration files on the Hugging Face Hub, see an example repo over [here](https://huggingface.co/kakaobrain/align-base). Note that each link / repo contains the converted model, **not** the original model weights. So you should first complete the configuration, preprocessing, modeling and conversion scripts, and then convert and upload each checkpoint released by the authors.\r\n\r\nRepos on the hub are placed under the organization that wrote the paper (Seoul National University in this case). We can ask them to create an organization on the hub but we will place the repos under the huggingface organization until they do so. \r\n\r\nSince model conversion is the last step, you can fill in the list with the repo paths you intend to create. For example:\r\n```\r\nEDSR_PRETRAINED_MODEL_ARCHIVE_LIST = {\r\n \"huggingface/edsr-base-x2\": \"https://huggingface.co/huggingface/edsr-base-x2/resolve/main/config.json\",\r\n \"huggingface/edsr-base-x3\": \"https://huggingface.co/huggingface/edsr-base-x3/resolve/main/config.json\",\r\n \"huggingface/edsr-base-x4\": \"https://huggingface.co/huggingface/edsr-base-x4/resolve/main/config.json\",\r\n \"huggingface/edsr-x2\": \"https://huggingface.co/huggingface/edsr-x2/resolve/main/config.json\",\r\n ...\r\n}\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,682
| 1,682
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19631
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19952/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19952",
"html_url": "https://github.com/huggingface/transformers/pull/19952",
"diff_url": "https://github.com/huggingface/transformers/pull/19952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19952.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19951
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19951/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19951/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19951/events
|
https://github.com/huggingface/transformers/issues/19951
| 1,427,430,066
|
I_kwDOCUB6oc5VFNqy
| 19,951
|
The script exits when calling "pretrainmodel.save_pretrained" because of OOM, but the preceding training phase runs fine
|
{
"login": "Zcchill",
"id": 83019888,
"node_id": "MDQ6VXNlcjgzMDE5ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/83019888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zcchill",
"html_url": "https://github.com/Zcchill",
"followers_url": "https://api.github.com/users/Zcchill/followers",
"following_url": "https://api.github.com/users/Zcchill/following{/other_user}",
"gists_url": "https://api.github.com/users/Zcchill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zcchill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zcchill/subscriptions",
"organizations_url": "https://api.github.com/users/Zcchill/orgs",
"repos_url": "https://api.github.com/users/Zcchill/repos",
"events_url": "https://api.github.com/users/Zcchill/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zcchill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> ### System Info\r\n> NVIDIA A100 Tensor Core GPUs (80G); python 3.8.13; torch 1.10.0+cu113; transformers 4.20.1;\r\n> \r\n> ### Who can help?\r\n> @sgugger\r\n> \r\n> ### Information\r\n> * [x] The official example scripts\r\n> * [ ] My own modified scripts\r\n> \r\n> ### Tasks\r\n> * [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n> * [ ] My own task or dataset (give details below)\r\n> \r\n> ### Reproduction\r\n> When I fine-tune the OPT model based on Transformer.Trainer() (deepspeed, zero2), there is no problem in training process, however, the script fails to load the final best model in the end. It's really confused since the problem. I checked the function \"pretrainmodel.save_pretrained\", it seems that it has doing some operations like \"del state_dict\" to save memory. Thus, it's really confused that our gpu memory is enough(80G) since it can finish the training process, but it fails to load the finally best model into memory. Here is the corresponding traceback.\r\n> \r\n> ### Traceback\r\n> ```\r\n> [INFO|trainer.py:1834] 2022-09-07 21:46:26,324 >> Loading best model from /ssdwork/results/results_mc6k_6.7b/20220907-1655/checkpoint-68 (score: 0.77099609375).\r\n> \r\n> Traceback (most recent call last):\r\n> File \"finetune.py\", line 497, in <module>\r\n> main()\r\n> \r\n> File \"finetune.py\", line 460, in main\r\n> train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py\", line 1409, in train\r\n> return inner_training_loop(\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py\", line 1771, in _inner_training_loop\r\n> self._load_best_model()\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py\", line 1867, in _load_best_model\r\n> load_result = load_sharded_checkpoint(model, self.state.best_model_checkpoint, strict=strict_load)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 445, in load_sharded_checkpoint\r\n> state_dict = torch.load(os.path.join(folder, shard_file))\r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 607, in load\r\n> return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 882, in _load\r\n> result = unpickler.load()\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 857, in persistent_load\r\n> load_tensor(data_type, size, key, _maybe_decode_ascii(location))\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 846, in load_tensor\r\n> loaded_storages[key] = restore_location(storage, location)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 175, in default_restore_location\r\n> result = fn(storage, location)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 157, in _cuda_deserialize\r\n> return obj.cuda(device)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/_utils.py\", line 79, in _cuda\r\n> return new_type(self.size()).copy_(self, non_blocking)\r\n> \r\n> File 
\"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 606, in _lazy_new\r\n> return super(_CudaBase, cls).__new__(cls, *args, **kwargs)\r\n> \r\n> RuntimeError: CUDA out of memory. Tried to allocate 12.40 GiB (GPU 0; 79.17 GiB total capacity; 0 bytes already allocated; 3.48 GiB free; 0 bytes reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n> \r\n> Traceback (most recent call last):\r\n> File \"finetune.py\", line 497, in <module>\r\n> main()\r\n> \r\n> File \"finetune.py\", line 460, in main\r\n> train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py\", line 1409, in train\r\n> return inner_training_loop(\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py\", line 1771, in _inner_training_loop\r\n> self._load_best_model()\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py\", line 1867, in _load_best_model\r\n> load_result = load_sharded_checkpoint(model, self.state.best_model_checkpoint, strict=strict_load)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py\", line 445, in load_sharded_checkpoint\r\n> state_dict = torch.load(os.path.join(folder, shard_file))\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 607, in load\r\n> return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 882, in _load\r\n> result = unpickler.load()\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 857, in persistent_load\r\n> load_tensor(data_type, size, key, _maybe_decode_ascii(location))\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 846, in load_tensor\r\n> loaded_storages[key] = restore_location(storage, location)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 175, in default_restore_location\r\n> result = fn(storage, location)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py\", line 157, in _cuda_deserialize\r\n> return obj.cuda(device)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/_utils.py\", line 79, in _cuda\r\n> return new_type(self.size()).copy_(self, non_blocking)\r\n> \r\n> File \"/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 606, in _lazy_new\r\n> return super(_CudaBase, cls).__new__(cls, *args, **kwargs)\r\n> \r\n> RuntimeError: CUDA out of memory. Tried to allocate 12.40 GiB (GPU 0; 79.17 GiB total capacity; 0 bytes already allocated; 3.38 GiB free; 0 bytes reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n> \r\n> [2022-09-07 21:46:38,873] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859055\r\n> [2022-09-07 21:46:38,873] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859056\r\n> [2022-09-07 21:46:38,874] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859057\r\n> [2022-09-07 21:46:38,874] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859058\r\n> [2022-09-07 21:46:38,874] [ERROR] [launch.py:184:sigkill_handler] ['/home/anaconda3/envs/tk-instruct/bin/python', '-u', 'finetune.py', '--local_rank=3', '--deepspeed', '/home/opt/ds_config.json', '--model_name_or_path', 'facebook/opt-6.7b', '--train_file', '/home/opt/data/train_mc6k.csv', '--validation_file', '/home/opt/data/valid_mc_300.csv', '--do_train', '--do_eval', '--fp16', '--output_dir', '/ssdwork/opt/results/results_mc6k_6.7b/20220907-1655/', '--num_train_epochs', '1000', '--per_device_train_batch_size', '4', '--evaluation_strategy', 'epoch', '--save_strategy', 'epoch', '--load_best_model_at_end', '--metric_for_best_model', 'eval_loss', '--greater_is_better', 'False', '--gradient_accumulation_steps', '32', '--use_fast_tokenizer', 'False', '--learning_rate', '1e-05', '--warmup_steps', '10', '--save_total_limit', '1', '--overwrite_cache', '--block_size', '2048'] exits with return code = 1\r\n> \r\n> /home/anaconda3/envs/tk-instruct/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown\r\n> warnings.warn('resource_tracker: There appear to be %d \r\n> ```\r\n> \r\n> ### Expected behavior\r\n> It should be noted that the total process is good when fine-tuning model with 2.7B parameters and below but will exit for model with 6.7B parameters and above. I think it may not simply due to OOM, since our single GPU memory is already 80G , and we mainly call the Trainer() to finish the process (based on deepspeed zero2 optimizer). I think if I ignored some significant parameters like \"--save_on_each_node\".\r\n\r\nI have the same problem as you. How did you solve it? ",
"After using deepspeed for large models(xl、xxl), the parameters will be stored in pieces, but the loading method of the best model parameters will change, and deepspeed is not used."
] | 1,666
| 1,673
| 1,670
|
NONE
| null |
### System Info
NVIDIA A100 Tensor Core GPUs (80G);
python 3.8.13;
torch 1.10.0+cu113;
transformers 4.20.1;
### Who can help?
@sgugger
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I fine-tune the OPT model with `transformers.Trainer()` (DeepSpeed, ZeRO-2), training runs without problems; however, the script fails to load the final best model at the end, which is confusing. I checked the function `PreTrainedModel.save_pretrained`; it seems to perform operations like `del state_dict` to save memory. It is therefore puzzling that our GPU memory (80 GB) is enough to finish training, yet loading the final best model into memory fails. Here is the corresponding traceback.
### Traceback
```
[INFO|trainer.py:1834] 2022-09-07 21:46:26,324 >> Loading best model from /ssdwork/results/results_mc6k_6.7b/20220907-1655/checkpoint-68 (score: 0.77099609375).
Traceback (most recent call last):
File "finetune.py", line 497, in <module>
main()
File "finetune.py", line 460, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train
return inner_training_loop(
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1771, in _inner_training_loop
self._load_best_model()
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in _load_best_model
load_result = load_sharded_checkpoint(model, self.state.best_model_checkpoint, strict=strict_load)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 445, in load_sharded_checkpoint
state_dict = torch.load(os.path.join(folder, shard_file))
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 857, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 846, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 157, in _cuda_deserialize
return obj.cuda(device)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/_utils.py", line 79, in _cuda
return new_type(self.size()).copy_(self, non_blocking)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/cuda/__init__.py", line 606, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 12.40 GiB (GPU 0; 79.17 GiB total capacity; 0 bytes already allocated; 3.48 GiB free; 0 bytes reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
File "finetune.py", line 497, in <module>
main()
File "finetune.py", line 460, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train
return inner_training_loop(
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1771, in _inner_training_loop
self._load_best_model()
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in _load_best_model
load_result = load_sharded_checkpoint(model, self.state.best_model_checkpoint, strict=strict_load)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 445, in load_sharded_checkpoint
state_dict = torch.load(os.path.join(folder, shard_file))
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 857, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 846, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 157, in _cuda_deserialize
return obj.cuda(device)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/_utils.py", line 79, in _cuda
return new_type(self.size()).copy_(self, non_blocking)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/cuda/__init__.py", line 606, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 12.40 GiB (GPU 0; 79.17 GiB total capacity; 0 bytes already allocated; 3.38 GiB free; 0 bytes reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[2022-09-07 21:46:38,873] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859055
[2022-09-07 21:46:38,873] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859056
[2022-09-07 21:46:38,874] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859057
[2022-09-07 21:46:38,874] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859058
[2022-09-07 21:46:38,874] [ERROR] [launch.py:184:sigkill_handler] ['/home/anaconda3/envs/tk-instruct/bin/python', '-u', 'finetune.py', '--local_rank=3', '--deepspeed', '/home/opt/ds_config.json', '--model_name_or_path', 'facebook/opt-6.7b', '--train_file', '/home/opt/data/train_mc6k.csv', '--validation_file', '/home/opt/data/valid_mc_300.csv', '--do_train', '--do_eval', '--fp16', '--output_dir', '/ssdwork/opt/results/results_mc6k_6.7b/20220907-1655/', '--num_train_epochs', '1000', '--per_device_train_batch_size', '4', '--evaluation_strategy', 'epoch', '--save_strategy', 'epoch', '--load_best_model_at_end', '--metric_for_best_model', 'eval_loss', '--greater_is_better', 'False', '--gradient_accumulation_steps', '32', '--use_fast_tokenizer', 'False', '--learning_rate', '1e-05', '--warmup_steps', '10', '--save_total_limit', '1', '--overwrite_cache', '--block_size', '2048'] exits with return code = 1
/home/anaconda3/envs/tk-instruct/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d
```
### Expected behavior
Note that the whole process works when fine-tuning models with 2.7B parameters or fewer, but exits for models with 6.7B parameters and above. I don't think this is simply OOM, since a single GPU already has 80 GB of memory and we mainly call `Trainer()` to run the process (with the DeepSpeed ZeRO-2 optimizer). I wonder whether I missed some important parameter like "--save_on_each_node".
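One possible workaround, pending a fix in `Trainer`, is to deserialize the checkpoint on the CPU instead of the GPU. Below is a minimal sketch, not the Trainer's actual code path; the helper name is mine, and it assumes the standard `pytorch_model.bin.index.json` layout of sharded checkpoints:

```python
import json
import os

import torch

def load_sharded_state_dict_on_cpu(folder):
    """Hypothetical helper: load every shard with map_location="cpu" so that
    deserialization never calls obj.cuda(device), which is where the
    traceback above runs out of GPU memory."""
    with open(os.path.join(folder, "pytorch_model.bin.index.json")) as f:
        index = json.load(f)
    state_dict = {}
    for shard_file in sorted(set(index["weight_map"].values())):
        shard = torch.load(os.path.join(folder, shard_file), map_location="cpu")
        state_dict.update(shard)
    return state_dict
```

The resulting state dict can then be applied with `model.load_state_dict(...)` and moved back to the GPU afterwards.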
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19951/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19951/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19950
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19950/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19950/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19950/events
|
https://github.com/huggingface/transformers/pull/19950
| 1,427,398,518
|
PR_kwDOCUB6oc5BweJo
| 19,950
|
Fix ONNX tests for ONNX Runtime v1.13.1
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Does the new name work with the minimum version of ONNX pinned?\r\n\r\nAre you referring to the version we list in `setup.py` or somewhere else? In `setup.py` we have `onnxruntime>=1.4.0` so users with a pre-existing installation of ONNX Runtime would have to upgrade to `v1.13.1`.\r\n\r\nAn alternative is to have an if/else statement that checks the `onnxruntime` version to guarantee backwards compatibility - I'll implement that instead :)",
"> An alternative is to have an if/else statement that checks the onnxruntime version to guarantee backwards compatibility - I'll implement that instead :)\r\n\r\nYes please!",
"Backwards compatibility added in https://github.com/huggingface/transformers/pull/19950/commits/01ccbea13800e6912c1e34cbece4311cd2b6b420 :)"
] | 1,666
| 1,667
| 1,667
|
MEMBER
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes the slow ONNX tests in `test_onnx.py`, which were failing because `input_qType` was renamed to `activation_qType` in `onnxruntime` v1.13.1.
With this fix, the following passes:
```
RUN_SLOW=1 pytest tests/onnx/test_onnx.py
```
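For readers hitting the same rename, here is a rough sketch of the version-gated lookup described in the review comments; the variable name is illustrative, and the real call sites live in the quantization tests:

```python
from packaging import version

import onnxruntime

# The quantizer attribute was renamed in onnxruntime v1.13.1, so pick the
# attribute name based on the installed version to stay backwards compatible.
if version.parse(onnxruntime.__version__) >= version.parse("1.13.0"):
    quant_type_attr = "activation_qType"
else:
    quant_type_attr = "input_qType"
```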
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19950/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19950",
"html_url": "https://github.com/huggingface/transformers/pull/19950",
"diff_url": "https://github.com/huggingface/transformers/pull/19950.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19950.patch",
"merged_at": 1667204505000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19949
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19949/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19949/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19949/events
|
https://github.com/huggingface/transformers/issues/19949
| 1,427,393,712
|
I_kwDOCUB6oc5VFEyw
| 19,949
|
Feature extraction pipeline increasing memory use
|
{
"login": "quancore",
"id": 15036825,
"node_id": "MDQ6VXNlcjE1MDM2ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/15036825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quancore",
"html_url": "https://github.com/quancore",
"followers_url": "https://api.github.com/users/quancore/followers",
"following_url": "https://api.github.com/users/quancore/following{/other_user}",
"gists_url": "https://api.github.com/users/quancore/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quancore/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quancore/subscriptions",
"organizations_url": "https://api.github.com/users/quancore/orgs",
"repos_url": "https://api.github.com/users/quancore/repos",
"events_url": "https://api.github.com/users/quancore/events{/privacy}",
"received_events_url": "https://api.github.com/users/quancore/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @quancore ,\r\n\r\nIs this line normal ? \r\n```\r\n dataset = ListDataset(df[col].tolist())\r\n```\r\nThis will load the entire dataset in memory, not sure if 65k entries are big or not but they can add up fast if they are sizeable documents.\r\n` df = pd.read_csv(os.path.join(args.df_path), sep='\\t', header=0)` \r\n\r\nAs far as I understand you're loading the entire file here (so larger than just the col entries you're looking for)\r\n\r\n\r\nFinally you're setting `max_length=512` which means your larges embedding is `512 * 1024` x `65 000` that's roughly `30Go` .\r\nIn `transformers` there's no reduction of the embedding which is done sometimes by `sentence-transformers` (either looking at the embedding of the first token, averaging, maxing or other reduction mecanisms)\r\nCould this part be missing ?\r\n\r\n`pad_to_max_length=True,` means all the tensors will be 512 x 1024.\r\n\r\nAlso I'm not sure, but I think the output of the pipeline by default is a raw list meaning it will take up more space than it's `numpy` equivalent. You could try using the the new `return_tensors=True` parameter to receive directly the embedding in tensor format.\r\n\r\nTell me if any of this helps solve your use case !\r\n",
"HI @Narsil , thank you for the answer.\r\n```\r\nIs this line normal ?\r\n\r\ndataset = ListDataset(df[col].tolist())\r\n\r\nThis will load the entire dataset in memory, not sure if 65k entries are big or not but they can add up fast if they are sizeable documents.\r\ndf = pd.read_csv(os.path.join(args.df_path), sep='\\t', header=0)\r\n```\r\nThe dataset is not that big and ListDataset instance example actually comes from one of the answers on the forum: https://discuss.huggingface.co/t/progress-bar-for-hf-pipelines/20498/2\r\nThat's why I am loading the data frame and converting it to a list for use, do you have a better suggestion while seeing the progress?\r\n\r\n```\r\nFinally you're setting max_length=512 which means your larges embedding is 512 * 1024 x 65 000 that's roughly 30Go .\r\nIn transformers there's no reduction of the embedding which is done sometimes by sentence-transformers (either looking at the embedding of the first token, averaging, maxing or other reduction mecanisms)\r\nCould this part be missing?\r\n```\r\nAs you have calculated, it should be around 30GB or around, but it is reaching up to 128 GB which is insane. Right now, I am only interested in the CLS token, so if I set the max token to 1, will it give me only the CLS token?\r\n\r\n```\r\nAlso I'm not sure, but I think the output of the pipeline by default is a raw list meaning it will take up more space than it's numpy equivalent. You could try using the the new return_tensors=True parameter to receive directly the embedding in tensor format.\r\n```\r\nI will try this.",
"> The dataset is not that big and ListDataset instance example actually comes from one of the answers on the forum: https://discuss.huggingface.co/t/progress-bar-for-hf-pipelines/20498/2\r\n\r\nIf the dataset is not that big it will work fine, but since you already have the data you don't need to do `.tolist()`, I would do something like\r\n\r\n```python\r\nclass MyDataset:\r\n def __init__(self, panda_frame, col):\r\n self.panda_fram = panda_frame\r\n self.col = col\r\n def __len__(self):\r\n return len(self.panda_frame[self.col])\r\n def __getitem__(self, i):\r\n return self.panda_frame[self.col][i]\r\n ```\r\nI haven't tested it, but with this gist you could get away without copying anything.\r\n\r\n\r\n> As you have calculated, it should be around 30GB or around, but it is reaching up to 128 GB which is insane. Right now, I am only interested in the CLS token, so if I set the max token to 1, will it give me only the CLS token?\r\n\r\nDon't use `pad_to_max_length` imo. This is unnecessary in a lot of cases. What you want to do is to run the model on the actual full sentence, but keep around only the embedding for the first token (which should be token[0], but I don't know the model you are using, it may not exist depending on how it's setup.)\r\n\r\n\r\n```\r\n embeddings.append(embedding[0]) # Embedding should be `seq_len, hidden_dim` so `embedding[0] should be `hidden_dim`.\r\n ```\r\n \r\n Also since you are in a pipeline, you could also write to disk the results, in a dataset, a different file for each embeddings or something like that. Then you could run the entire pipeline on very little memory, that's basically the whole point of pipeline, to try and limit aggressively the memory necessary. \r\n \r\nFor 30Go and 128Go, it's hard to answer exactly without checking, but constants do pile up fast in Python. Anything reference another viable will keep the whole data alive for instance.\r\nThe pipeline itself shouldn't keep anything around but `lists` are much more wasteful than tensors memory wise, also at those levels of memory there's probably a big chunk of fragmentation of memory which might increase the overall usage. Garbage collection might not be able to keep up with the amount of data you generate etc, etc..\r\n\r\nIf you could reproduce an issue on a smaller scale example using something like `busybox time ...` to showcase that we're using too much memory for a given task (can't really reproduce your example right now) that'd be lovely.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
### System Info
UBUNTU 22.04
### Who can help?
@Narsil
### Reproduction
```python
import os
import argparse
import hickle as hkl
import numpy as np
import pandas as pd
from sklearn import model_selection
from transformers import pipeline, AutoTokenizer, AutoModel
import torch
from torch.utils.data import Dataset
from tqdm import tqdm
def is_file_path(path):
if os.path.isfile(path):
return path
else:
raise argparse.ArgumentTypeError(f"{path} is not a valid file path")
def is_dir_path(path):
if os.path.isdir(path):
return path
else:
raise argparse.ArgumentTypeError(f"{path} is not a valid dir path")
parser = argparse.ArgumentParser()
parser.add_argument("-d", "--dataframe", help="Path of the dataframe", type=is_file_path, dest="df_path")
parser.add_argument("-ed", "--embedding_dir", help="Path of embeddings stored", type=is_dir_path, dest="embedding_dir")
parser.add_argument("-V", "--version", help="show program version", action="store_true", dest="version")
args = parser.parse_args()
MODEL_NAME = "anferico/bert-for-patents"
pipe = None
test_size = 1000
input_text_column = "text"
label_column = "SUBFIELD_ID"
class ListDataset(Dataset):
def __init__(self, original_list):
self.original_list = original_list
def __len__(self):
return len(self.original_list)
def __getitem__(self, i):
return self.original_list[i]
def get_pipeline(device_index):
global pipe
if pipe is None:
print(f"Creating pipeline for device: {device_index}")
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME, do_lower_case=True, model_max_length=512, truncation=True, padding=True, pad_to_max_length=True
)
device = torch.device(f"cuda:{device_index}" if torch.cuda.is_available() else "cpu")
model = AutoModel.from_pretrained(MODEL_NAME).to(device)
local_pipe = pipeline(
"feature-extraction",
model=model,
tokenizer=tokenizer,
max_length=512,
truncation=True,
padding=True,
pad_to_max_length=True,
device=device,
framework="pt",
batch_size=16,
)
pipe = local_pipe
return pipe
def extract_embeddings(df, col, mode="cls"):
dataset = ListDataset(df[col].tolist())
if mode == "cls":
pipe = get_pipeline(0)
embeddings = []
for embedding in tqdm(pipe(dataset, max_length=512, truncation=True, num_workers=8), total=len(dataset)):
embeddings.append(np.squeeze(embedding)[0])
return embeddings
def main():
df = pd.read_csv(os.path.join(args.df_path), sep='\t', header=0)
train_df, test_df = model_selection.train_test_split(df, shuffle=True, stratify=df[label_column],
train_size=len(df) - test_size, random_state=50)
print("Extracting train embeddings...")
x_train_bert = extract_embeddings(train_df, input_text_column)
print("Extracting test embeddings...")
x_test_bert = extract_embeddings(test_df, input_text_column)
if __name__ == "__main__":
main()
```
### Expected behavior
I am trying to extract text embeddings from a text column with around 65k entries. I need to keep the embeddings in memory to train the downstream sklearn classifier (no online, incremental mode). The memory use of the feature extraction above gets huge, around 128 GB. May I ask why this is the case? I am only storing one NumPy array of 1024 floats per entry, so it should be much, much smaller.
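For comparison, here is a minimal sketch (assuming a hidden size of 1024, as in `bert-for-patents`) that keeps only the CLS vector per document in a preallocated `float32` array; 65k x 1024 x 4 bytes is roughly 256 MB, far below what nested Python lists consume:

```python
import numpy as np

embeddings = np.empty((len(dataset), 1024), dtype=np.float32)
for i, embedding in enumerate(pipe(dataset, max_length=512, truncation=True)):
    # per-item pipeline output is a nested list of shape [1, seq_len, hidden];
    # [0][0] selects the CLS token's vector
    embeddings[i] = np.asarray(embedding[0][0], dtype=np.float32)
```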
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19949/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19948
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19948/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19948/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19948/events
|
https://github.com/huggingface/transformers/issues/19948
| 1,427,312,850
|
I_kwDOCUB6oc5VExDS
| 19,948
|
Transformers logging setLevel method seems not to work
|
{
"login": "AndreaSottana",
"id": 48888970,
"node_id": "MDQ6VXNlcjQ4ODg4OTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/48888970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaSottana",
"html_url": "https://github.com/AndreaSottana",
"followers_url": "https://api.github.com/users/AndreaSottana/followers",
"following_url": "https://api.github.com/users/AndreaSottana/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaSottana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaSottana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaSottana/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaSottana/orgs",
"repos_url": "https://api.github.com/users/AndreaSottana/repos",
"events_url": "https://api.github.com/users/AndreaSottana/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaSottana/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I'll try",
"I am a beginner and unfortunately could not find an example with your issue. I know the logging library and probably could help, but I need to reproduce the example",
"Any thoughts @LysandreJik @sgugger ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @AndreaSottana, sorry to have missed this.\r\n\r\nThe `logging` module for `transformers` acts on the code within `transformers` itself (this is how the `logging` library works), not on user code. \r\n\r\nHowever, by doing the following line:\r\n```py\r\nlogger = logging.get_logger(__name__)\r\n```\r\nThe logger created will not depend from `transformers` but from the module you're currently running. It will, therefore, not be impacted by the methods which affect `transformers`' logging module.\r\n\r\nThis is not the cleanest workaround, but you could get what you want by specifying that this logger instance should behave as a `transformers` module with something like the following:\r\n```py\r\nfrom transformers.utils import logging\r\nlogger = logging.get_logger('transformers.custom.' + __name__)\r\nlogger.setLevel(\"INFO\")\r\nlogger.info(\"Hello World\")\r\n```\r\n\r\nThis will trick it into understanding that `logger` is the logger for a module that lives within `transformers.custom`. This should print out\r\n```\r\nHello World\r\n```\r\njust fine."
] | 1,666
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.15.0-33-generic-x86_64-with-glibc2.35
- Python version: 3.10.4
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@LysandreJik @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am subclassing the `Trainer` object because I want to inject some custom features into the `_inner_training_loop` method. What I noticed is that once I do that, the `logger.info` printouts, which are printed to the console when using the standard `Trainer`, are no longer printed. Even if I try to explicitly force the logger level, it does not work. To reproduce the behaviour, run the script below
```python3
from transformers.utils import logging
logger = logging.get_logger(__name__)
logger.setLevel("INFO")
logger.info("Hello World")
```
### Expected behavior
I would expect `Hello World` to be printed to the console, but it is not.
Why is this the case? How can I set it such that it also prints out INFO level logs?
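For comparison, the same three lines written against the standard library do print, which suggests (as the comments later confirm) that the `transformers` logging helpers only manage loggers under the `transformers` namespace:

```python
import logging

# Configure a root handler explicitly so INFO records from a
# module-level logger are actually emitted to the console.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info("Hello World")  # prints "INFO:__main__:Hello World"
```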
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19948/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19947
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19947/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19947/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19947/events
|
https://github.com/huggingface/transformers/issues/19947
| 1,427,231,502
|
I_kwDOCUB6oc5VEdMO
| 19,947
|
unexpected OOM error when creating pipeline
|
{
"login": "hcscctv",
"id": 55230835,
"node_id": "MDQ6VXNlcjU1MjMwODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55230835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hcscctv",
"html_url": "https://github.com/hcscctv",
"followers_url": "https://api.github.com/users/hcscctv/followers",
"following_url": "https://api.github.com/users/hcscctv/following{/other_user}",
"gists_url": "https://api.github.com/users/hcscctv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hcscctv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hcscctv/subscriptions",
"organizations_url": "https://api.github.com/users/hcscctv/orgs",
"repos_url": "https://api.github.com/users/hcscctv/repos",
"events_url": "https://api.github.com/users/hcscctv/events{/privacy}",
"received_events_url": "https://api.github.com/users/hcscctv/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"sorry, there are some errors on the server in fact",
"Ok, if you could share what happened it could help potential readers. But glad you fixed it !"
] | 1,666
| 1,666
| 1,666
|
NONE
| null |
### System Info
```
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2',device=0)
```
When I run this code, an unexpected error occurs.
```
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 47.46 GiB total capacity; 148.00 MiB already allocated; 22.31 MiB free; 148.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
the gpu has enough space
```
Fri Oct 28 05:53:34 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro RTX 8000 Off | 00000000:1A:00.0 Off | Off |
| 33% 46C P2 74W / 260W | 2504MiB / 48601MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 15308 C python 833MiB |
| 0 N/A N/A 16307 C python 833MiB |
| 0 N/A N/A 17441 C python 833MiB |
+-----------------------------------------------------------------------------+
```
It is interesting that with 47.46 GiB total capacity, 148.00 MiB already allocated, 22.31 MiB free, and 148.00 MiB reserved in total by PyTorch, I cannot allocate 20 MiB.
I submit the job via Slurm; I don't know whether that has any effect.
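A quick way to see what CUDA itself reports from inside the Slurm allocation is the sketch below; it assumes a PyTorch build that provides `torch.cuda.mem_get_info`:

```python
import torch

# What does CUDA report as free on device 0 inside this job?
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"free: {free_bytes / 2**30:.2f} GiB / total: {total_bytes / 2**30:.2f} GiB")
```

If the free amount reported here is tiny while `nvidia-smi` shows plenty, the discrepancy may point at the scheduler or another process constraining the device rather than at the pipeline itself.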
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2',device=0)
# upload job by slurm
### Expected behavior
The pipeline should be created successfully; it is a really cool feature.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19947/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19946
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19946/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19946/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19946/events
|
https://github.com/huggingface/transformers/pull/19946
| 1,427,217,794
|
PR_kwDOCUB6oc5Bv3-B
| 19,946
|
gradient checkpointing for GPT-NeoX
|
{
"login": "chiaolun",
"id": 3288950,
"node_id": "MDQ6VXNlcjMyODg5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3288950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiaolun",
"html_url": "https://github.com/chiaolun",
"followers_url": "https://api.github.com/users/chiaolun/followers",
"following_url": "https://api.github.com/users/chiaolun/following{/other_user}",
"gists_url": "https://api.github.com/users/chiaolun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiaolun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiaolun/subscriptions",
"organizations_url": "https://api.github.com/users/chiaolun/orgs",
"repos_url": "https://api.github.com/users/chiaolun/repos",
"events_url": "https://api.github.com/users/chiaolun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiaolun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The current failing tests look like infra issues.",
"Yes, looks like a flaky failure!"
] | 1,666
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Add gradient checkpointing to the GPT-NeoX model, in the style of GPT-J.
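For reference, a minimal sketch of how the feature is exercised once merged; the tiny config values below are arbitrary, chosen only to keep the example self-contained:

```python
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

# Build a tiny random GPT-NeoX and enable checkpointing: activations are
# recomputed during the backward pass instead of being kept in memory.
config = GPTNeoXConfig(
    hidden_size=64, intermediate_size=256, num_hidden_layers=2, num_attention_heads=4
)
model = GPTNeoXForCausalLM(config)
model.gradient_checkpointing_enable()
```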
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19946/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19946",
"html_url": "https://github.com/huggingface/transformers/pull/19946",
"diff_url": "https://github.com/huggingface/transformers/pull/19946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19946.patch",
"merged_at": 1667233967000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19945
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19945/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19945/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19945/events
|
https://github.com/huggingface/transformers/pull/19945
| 1,427,018,236
|
PR_kwDOCUB6oc5BvNiw
| 19,945
|
Add Japanese translated README
|
{
"login": "eltociear",
"id": 22633385,
"node_id": "MDQ6VXNlcjIyNjMzMzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/22633385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eltociear",
"html_url": "https://github.com/eltociear",
"followers_url": "https://api.github.com/users/eltociear/followers",
"following_url": "https://api.github.com/users/eltociear/following{/other_user}",
"gists_url": "https://api.github.com/users/eltociear/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eltociear/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eltociear/subscriptions",
"organizations_url": "https://api.github.com/users/eltociear/orgs",
"repos_url": "https://api.github.com/users/eltociear/repos",
"events_url": "https://api.github.com/users/eltociear/events{/privacy}",
"received_events_url": "https://api.github.com/users/eltociear/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding this new translation! @younesbelkada do you want to give it a quick proofreading?\r\nAlso @eltociear to make sure the Japanese README stays up to date with the other ones, could you fill the [fololowing dict](https://github.com/huggingface/transformers/blob/main/utils/check_copies.py#L39) with the proper prompts/templates? Thanks!",
"@sgugger Thanks!\r\nAdded a fix to the this file.",
"Thanks! Can you also just quickly run `make fix-copies` to make the CI happy?",
"@younesbelkada さん初めまして!\r\nThank you for contacting!\r\nI have added an explanation to the top of the README.\r\nAlso, thanks for checking and suggesting a fix!",
"@eltociear It looks like the start prompt you added is not present in the Japanese README. Could you make sure it's correct?",
"Thanks again for your contribution!",
"@sgugger THANKS too!"
] | 1,666
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Add Japanese README
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19945/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19945/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19945",
"html_url": "https://github.com/huggingface/transformers/pull/19945",
"diff_url": "https://github.com/huggingface/transformers/pull/19945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19945.patch",
"merged_at": 1667308689000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19944
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19944/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19944/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19944/events
|
https://github.com/huggingface/transformers/issues/19944
| 1,427,000,185
|
I_kwDOCUB6oc5VDkt5
| 19,944
|
Problem of computing entropy in `run_bertology.py`
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Note that this is a research project, so an example provided as is not really maintained :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.13.0.dev20220709 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @thomwolf @stas00 @patrickvonplaten @LysandreJik Many thanks!
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In [`run_bertology.py`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/bertology/run_bertology.py#L108), the `entropy` function is used to compute the entropy of the attention matrix. But if the matrix has negative (or zero) elements, `plogp = p * torch.log(p)` will be `nan`.
### Expected behavior
If the attention matrix contains negative or zero elements, `plogp = p * torch.log(p)` evaluates to `nan`; the entropy computation should guard against this.
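A numerically safer variant would clamp the distribution before taking the log (a sketch; the `eps` threshold is my choice, not from the original script):

```python
import torch

def safe_entropy(p, eps=1e-12):
    # Clamping guards against zeros and stray negatives,
    # both of which make p * log(p) return nan.
    p = p.clamp_min(eps)
    plogp = p * torch.log(p)
    return -plogp.sum(dim=-1)
```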
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19944/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19943
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19943/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19943/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19943/events
|
https://github.com/huggingface/transformers/issues/19943
| 1,426,888,565
|
I_kwDOCUB6oc5VDJd1
| 19,943
|
NllbTokenizer/NllbTokenizerFast inserts language code incorrectly when tokenizing target text
|
{
"login": "ddaspit",
"id": 3261883,
"node_id": "MDQ6VXNlcjMyNjE4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3261883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddaspit",
"html_url": "https://github.com/ddaspit",
"followers_url": "https://api.github.com/users/ddaspit/followers",
"following_url": "https://api.github.com/users/ddaspit/following{/other_user}",
"gists_url": "https://api.github.com/users/ddaspit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ddaspit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddaspit/subscriptions",
"organizations_url": "https://api.github.com/users/ddaspit/orgs",
"repos_url": "https://api.github.com/users/ddaspit/repos",
"events_url": "https://api.github.com/users/ddaspit/events{/privacy}",
"received_events_url": "https://api.github.com/users/ddaspit/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"After further research, I believe that it is intended that `PretrainedConfig.decoder_start_token_id` should be used to insert the lang code at the beginning of the sentence during fine tuning of NLLB and that the `NllbTokenizer` class is working as intended. If that is true, then the documentation for `NllbTokenizer` should be corrected, and the `run_translation.py` script should be fixed to properly set `decoder_start_token_id` when the `NllTokenizer` is being used (similar to mBART).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ddaspit where did you get the information \"after further research\" ?\r\nWhen I read the mBart PAper, indeed, the Lang Token is suffixed after the source sequence and the the target lang token is prefixed.\r\nWhen reading the NLLB paper, it says the Lang token are both prefixed to SRC and TGT (page 48).\r\nWhat I don't understand is what and where exact BOS / EOS tokens are used.\r\nUsually we don't put any EOS/EOS in Source and we put BOS/EOS at the beg/end of the target sequence (at training).\r\n\r\nDid you get different info?\r\n\r\n\r\n",
"You are correct @vince62s , the paper clearly states that contrary to other model, the `src_lang` token is placed at the beginning of the input sequence. When adding the `NLLB-200` to the library, I checked that the output are a tiny bit different if you change this behavior. The fix is to change the prefix token and the suffix token attributes of the tokenizer class. Will open a PR after checking that this will not affect the current setup. \r\nThe `BOS` token is never used in fairseq, while the `EOS` is used as the `BOS` token. Indeed the `decoder_input_ids` that are passed to the model are `[eos, tgt_lang, ...]` (when generating) and `[tgt_lang, ..., eos]` when training. ",
"One slight correction: [eos, tgt_lang, ..., eos] on target side at training.\r\nSince I implemented NLLB-200 support in OpenNMT-py, I can confirm that prefixing instead of suffixing improves BLEU scores slightly. https://forum.opennmt.net/t/nllb-200-with-opennmt-py-the-good-the-bad-and-the-ugly/5151\r\n",
"The \"research\" I was referring to was entirely about how to properly use HuggingFace Transformers to prefix the target sentences with the lang code during decoding. I was not aware that source sentences were supposed to be prefixed with the lang code. We use NLLB pretty heavily, so I would be very happy to see this fixed.",
"Hi everyone!\r\nI was also concerned with the behavior of the NLLB tokenizer at HF, so, even before discovering this issue, I made two of my own \"experiments\" to verify that the behavior of the tokenizer should be fixed. \r\n\r\n1. I computed BLEU for one high-resource language pair (English-French) and one low-resource (Yoruba->Bashkir) from FLORES-200 with `[..., src_lang, eos]` and `[src_lang, ..., eos]` templates. For both directions, a significant proportion of translations (20% and 66%) is different depending on the tokenization method. For eng-fra, the new tokenization leads to a small loss in BLEU (-0.09), whereas for yor-bak, there is a larger gain (+0.21). While a thorough research would require investingating more translation direction, these results already hint that `[src_lang, ..., eos]` tokenization may be beneficial.\r\n\r\n2. I tweaked the Fairseq code for inferencing the original NLLB implementation so that it prints full tokenized source and translation during inference. The output confirms that the original implementation uses `[src_lang, ..., eos]` as source and `[tgt_lang, ..., eos]` as translation output (which implies using `[eos, tgt_lang, ...]` as a translation decoder input during training, because, as stated in the comments above, Fairseq uses `eos` instead of `bos` for translation).\r\n\r\nThe code and outputs for both experiments can be found [in my Colab notebook](https://colab.research.google.com/drive/1Zl-a9sbuC0YgRBFUHByTKiKy9GqlDd7u?usp=sharing).\r\n\r\nThese experiments confirm that #22313 is implementing exactly the right tokenization method; great thanks for that! ",
"Awesome, thanks for fixing this issue.",
"Hi All, a slight continuation of the above because I had assumed the inclusion of language code was a bug.. and now seeing it's intentionally included, my question is:\r\n\r\nQ: is it possible to have the language code NOT included in the output of the inference results?\r\n\r\nIn practice, I already know what the output language code is because I sent it to the model. I don't need it duplicated in the results. This only causes unnecessary post-process cleaning that adds more if-then specific cases to my code. \r\n\r\nI've tried removing \"forced_bos_token_id=bos_target_lang,\" from the generation, but that impairs the translation completely.\r\n\r\nIs it possible to switch this behaviour off? the only output I want from inference is the translation of the input text.\r\n\r\nThanks\r\n\r\n",
"Self-helped.. answer my own question.. add skip_special_tokens in the input_toks = tokenizer('textstuff', skip_special_tokens=True)",
"I got the same issue with src language being set to Latin by default and not wanting to change at all. \r\nthis seems like a straight up bug because the code from the huggingface website returns the wrong result. \r\n\r\nI can overwrite stuff manually but this is not to the spec",
"for anyone in the new version of hf that has issues with this here is the easy (dirty) fix \r\n```python\r\ndef translate_text(text):\r\n # Tokenize and translate the text\r\n encoded_text = tokenizer(text, return_tensors=\"pt\")\r\n #manual fix to hf bug \r\n encoded_text['input_ids'][:,1]=tokenizer.lang_code_to_id[tokenizer.src_lang]\r\n \r\n generated_tokens = model.generate(**encoded_text,forced_bos_token_id=tokenizer.lang_code_to_id[tokenizer.tgt_lang])\r\n\r\n # Decode and return the translated text\r\n return tokenizer.decode(generated_tokens[0])#, skip_special_tokens=True)\r\n\r\n\r\ntext_to_translate = \"Hello, how are you?\"\r\n\r\ntranslated_text = translate_text(text_to_translate)\r\nprint(translated_text)\r\n\r\n```\r\n\r\nnotice I am overwriting wrong tokens generated by the tokenizer that shouldn't be there. this wont break versions with a working tokenizer. and since hf version causes these issues I would recommand not relying on the tokenizer to do its job if you plan on upgrading versions.",
"Could you share a repro to your issue and the transformers version you are using? \r\nI am not sure I understand your recommendation: \r\n> I would recommand not relying on the tokenizer to do its job if you plan on upgrading versions. \r\n\r\nthe bug is fix and the language is properly handled so could you elaborate?"
] | 1,666
| 1,704
| 1,680
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): 2.7.3 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik @sgugger @SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
According to the [documentation](https://huggingface.co/docs/transformers/model_doc/nllb#transformers.NllbTokenizer) for `NllbTokenizer`,
> The tokenization method is `<tokens> <eos> <language code>` for source language documents, and `<language code> <tokens> <eos>` for target language documents.
When you tokenize target text, it incorrectly inserts the language code at the end of the sentence instead of the beginning.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer.tgt_lang = "eng_Latn"
article = "UN Chief says there is no military solution in Syria"
tokens = tokenizer(text_target=article).tokens()
```
`tokens` has the value:
```
['▁UN', '▁Chief', '▁says', '▁there', '▁is', '▁no', '▁military', '▁solution', '▁in', '▁Syria', '</s>', 'eng_Latn']
```
### Expected behavior
`tokens` should have the value:
```
['eng_Latn', '▁UN', '▁Chief', '▁says', '▁there', '▁is', '▁no', '▁military', '▁solution', '▁in', '▁Syria', '</s>']
```
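Until the tokenizer itself is fixed, one hypothetical workaround is to rotate the trailing language code to the front of the encoded sequence:

```python
# Turns [tokens..., </s>, lang_code] into [lang_code, tokens..., </s>].
ids = tokenizer(text_target=article)["input_ids"]
ids = [ids[-1]] + ids[:-1]
```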
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19943/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19943/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19942
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19942/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19942/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19942/events
|
https://github.com/huggingface/transformers/issues/19942
| 1,426,846,103
|
I_kwDOCUB6oc5VC_GX
| 19,942
|
'FlaubertTokenizer' object has no attribute 'do_lowercase'
|
{
"login": "davidavdav",
"id": 5497303,
"node_id": "MDQ6VXNlcjU0OTczMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5497303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidavdav",
"html_url": "https://github.com/davidavdav",
"followers_url": "https://api.github.com/users/davidavdav/followers",
"following_url": "https://api.github.com/users/davidavdav/following{/other_user}",
"gists_url": "https://api.github.com/users/davidavdav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidavdav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidavdav/subscriptions",
"organizations_url": "https://api.github.com/users/davidavdav/orgs",
"repos_url": "https://api.github.com/users/davidavdav/repos",
"events_url": "https://api.github.com/users/davidavdav/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidavdav/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks for reporting! I believe this has been fixed on the main branch. While waiting for the next release, could you do a source install?",
"Yes, when installing from main \r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\n\r\nthings work again as expected"
] | 1,666
| 1,666
| 1,666
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: not essential
- Using distributed or parallel set-up in script?: no
### Who can help?
@SaulLu
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_uncased")
tokenizer.tokenize("bonjour")
AttributeError: 'FlaubertTokenizer' object has no attribute 'do_lowercase'
```
This was not the case back in the days of transformers v2.
I can work around it with:
```python
tokenizer.do_lowercase = True
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_uncased")
tokenizer.tokenize("bonjour")
AttributeError: 'FlaubertTokenizer' object has no attribute 'do_lowercase'
```
### Expected behavior
```
['bonjour</w>']
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19942/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19941
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19941/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19941/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19941/events
|
https://github.com/huggingface/transformers/issues/19941
| 1,426,815,064
|
I_kwDOCUB6oc5VC3hY
| 19,941
|
HFArgumentParser using a mix of json file and command line?
|
{
"login": "sgunasekar",
"id": 8418631,
"node_id": "MDQ6VXNlcjg0MTg2MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8418631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgunasekar",
"html_url": "https://github.com/sgunasekar",
"followers_url": "https://api.github.com/users/sgunasekar/followers",
"following_url": "https://api.github.com/users/sgunasekar/following{/other_user}",
"gists_url": "https://api.github.com/users/sgunasekar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgunasekar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgunasekar/subscriptions",
"organizations_url": "https://api.github.com/users/sgunasekar/orgs",
"repos_url": "https://api.github.com/users/sgunasekar/repos",
"events_url": "https://api.github.com/users/sgunasekar/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgunasekar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask questions like this as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
Is there a way to make `HfArgumentParser` load defaults from a JSON file/dict first and then override some of them with command-line arguments? E.g., I want to keep a custom defaults file for `TrainingArguments` but still be able to update some arguments from the command line. TIA
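One workable pattern, sketched below under the assumption of a hypothetical `training_defaults.json` with simple scalar values (bools/lists may need extra care): expand the JSON defaults into a flag list and place it before the real CLI arguments, since argparse keeps the last occurrence of a repeated flag.

```python
import json
import sys

from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)

# Expand {"learning_rate": 1e-5, ...} into ["--learning_rate", "1e-05", ...].
with open("training_defaults.json") as f:
    defaults = json.load(f)
default_args = [s for key, value in defaults.items() for s in (f"--{key}", str(value))]

# Defaults first, real CLI args last: for repeated flags argparse keeps the last
# value, so anything passed on the command line overrides the JSON file.
(training_args,) = parser.parse_args_into_dataclasses(args=default_args + sys.argv[1:])
```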
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19941/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19940
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19940/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19940/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19940/events
|
https://github.com/huggingface/transformers/pull/19940
| 1,426,704,774
|
PR_kwDOCUB6oc5BuNS9
| 19,940
|
Fixing failure when labels have different length
|
{
"login": "haotianteng",
"id": 11155295,
"node_id": "MDQ6VXNlcjExMTU1Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/11155295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haotianteng",
"html_url": "https://github.com/haotianteng",
"followers_url": "https://api.github.com/users/haotianteng/followers",
"following_url": "https://api.github.com/users/haotianteng/following{/other_user}",
"gists_url": "https://api.github.com/users/haotianteng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haotianteng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haotianteng/subscriptions",
"organizations_url": "https://api.github.com/users/haotianteng/orgs",
"repos_url": "https://api.github.com/users/haotianteng/repos",
"events_url": "https://api.github.com/users/haotianteng/events{/privacy}",
"received_events_url": "https://api.github.com/users/haotianteng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19940). All of your documentation changes will be reflected on that endpoint.",
"Thanks for your PR. I don't see what is failing with the current code, could you provide a reproducible example of a bug?",
"This is a minimum example:\r\n```python\r\nimport numpy as np\r\nimport datasets\r\nfrom transformers import AutoTokenizer, BertForTokenClassification,DataCollatorForTokenClassification,Trainer,TrainingArguments\r\ndef tokenize_function(examples):\r\n input = tokenizer(examples[\"tokens\"], is_split_into_words=True, truncation=True)\r\n return input\r\n\r\nraw_datasets = datasets.load_dataset('conllpp')\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\r\nn_label = len(raw_datasets['train'].features['ner_tags'].feature.names)\r\nmodel = BertForTokenClassification.from_pretrained('bert-base-uncased',num_labels=n_label)\r\ntokenized_datasets = raw_datasets.map(tokenize_function, \r\n batched=True)\r\ntokenized_datasets = tokenized_datasets.rename_column(\"ner_tags\", \"labels\")\r\ntokenized_datasets.set_format(\"torch\") \r\n#This will make the collector to use the torch_call function and lead to failure when the label has different length.\r\ndata_collator = DataCollatorForTokenClassification(tokenizer)\r\ntraining_args = TrainingArguments(output_dir = \"bert_finetune\", \r\n evaluation_strategy = \"epoch\", \r\n save_strategy=\"epoch\",\r\n learning_rate=1e-5, \r\n num_train_epochs=1, \r\n per_device_train_batch_size=16, \r\n per_device_eval_batch_size=16, \r\n weight_decay=0.01, \r\n push_to_hub = False)\r\ntrainer = Trainer(model,\r\n training_args,\r\n train_dataset=tokenized_datasets['train'],\r\n eval_dataset=tokenized_datasets['validation'],\r\n data_collator=data_collator,\r\n tokenizer=tokenizer)\r\ntrainer.train()\r\n```\r\nThe error:\r\nTraceback (most recent call last):\r\n File \"collector_issue.py\", line 40, in <module>\r\n trainer.train()\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py\", line 1500, in train\r\n return inner_training_loop(\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py\", line 1716, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 628, in __next__\r\n data = self._next_data()\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 671, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 61, in fetch\r\n return self.collate_fn(data)\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/data/data_collator.py\", line 42, in __call__\r\n return self.torch_call(features)\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/data/data_collator.py\", line 306, in torch_call\r\n batch = self.tokenizer.pad(\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 2981, in pad\r\n return BatchEncoding(batch_outputs, tensor_type=return_tensors)\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 206, in __init__\r\n self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)\r\n File \"/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 732, in convert_to_tensors\r\n raise ValueError(\r\nValueError: Unable to create tensor, you should 
probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).\r\n\r\nThis is caused by the torch_call function in transformers/data/data_collator.py, specifically:\r\nhttps://github.com/huggingface/transformers/blob/2e35bac4e73558d334ea5bbf96a1116f7c0d7fb3/src/transformers/data/data_collator.py#L307-L313\r\n\r\nNotice the comment \"# Conversion to tensors will fail if we have labels as they are not of the same length yet.\"\r\nSo I fixed this bug by popping out the labels first and then feeding them back; the code afterward then takes care of the padding.",
"You have not aligned your TAGS with the tokens in this example, so it won't work anyway.",
"The error is due to the nested label list which has different lengths. I am not sure what you mean by aligning TAGS with token, I supposed you mean padding the label so they would have same length, but wasn't the collector supposed to do the auto-padding given nested a nested list label?",
"The collator will do the padding of the labels to add as many pad tokens as in the inputs. But the tokenizer splits a word in multiple subwords, and you haven't done anything for that in your labels. So you still end up with labels being of different sizes.",
"I think the label padding is after the tokenizer padding:\r\n\r\nhttps://github.com/huggingface/transformers/blob/2e35bac4e73558d334ea5bbf96a1116f7c0d7fb3/src/transformers/data/data_collator.py#L320-325\r\n\r\nAnd the padding is dynamically added for every batch, so I think there will be no problem?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
Pop the labels out before padding the inputs and add them back afterward, to avoid the failure caused by labels of different lengths.
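For context (not part of this PR): the failure in the linked repro also goes away if the word-level tags are expanded to one label per subword before batching, which is what the reviewer's alignment remark refers to. A minimal sketch of that preprocessing step, assuming a fast tokenizer (`word_ids()` requires one) and a hypothetical `tokenize_and_align` helper:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_and_align(examples):
    # Tokenize pre-split words; truncation keeps sequences within the model limit.
    tokenized = tokenizer(examples["tokens"], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        # Special tokens get -100 (ignored by the loss); subwords reuse their word's tag.
        all_labels.append([-100 if wid is None else tags[wid] for wid in word_ids])
    tokenized["labels"] = all_labels
    return tokenized
```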
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19940/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19940",
"html_url": "https://github.com/huggingface/transformers/pull/19940",
"diff_url": "https://github.com/huggingface/transformers/pull/19940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19940.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19939
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19939/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19939/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19939/events
|
https://github.com/huggingface/transformers/issues/19939
| 1,426,648,445
|
I_kwDOCUB6oc5VCO19
| 19,939
|
Errors when using "torch_dtype='auto" in "AutoModelForCausalLM.from_pretrained()" to load model
|
{
"login": "Zcchill",
"id": 83019888,
"node_id": "MDQ6VXNlcjgzMDE5ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/83019888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zcchill",
"html_url": "https://github.com/Zcchill",
"followers_url": "https://api.github.com/users/Zcchill/followers",
"following_url": "https://api.github.com/users/Zcchill/following{/other_user}",
"gists_url": "https://api.github.com/users/Zcchill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zcchill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zcchill/subscriptions",
"organizations_url": "https://api.github.com/users/Zcchill/orgs",
"repos_url": "https://api.github.com/users/Zcchill/repos",
"events_url": "https://api.github.com/users/Zcchill/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zcchill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I believe this has been fixed in more recent versions of Transformers (can't be entirely sure since your code sample and traceback are not properly formatted between three backticks, so very hard to read).\r\nCould you try to upgrade to the latest version?",
"> I believe this has been fixed in more recent versions of Transformers (can't be entirely sure since your code sample and traceback are not properly formatted between three backticks, so very hard to read). Could you try to upgrade to the latest version?\r\n\r\nalright, I will try to upgeade the version of Transformers.",
"> I believe this has been fixed in more recent versions of Transformers (can't be entirely sure since your code sample and traceback are not properly formatted between three backticks, so very hard to read). Could you try to upgrade to the latest version?\r\n\r\nHello, I've updated the verson of transformer, and there is still the bug. I've update the comment with a screen shot of bug for read.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
### System Info
python 3.8.13;
torch 1.10.0+cu113;
transformers 4.20.1;
### Who can help?
@stas00,@sgugger,@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import (
AutoModelForCausalLM,
AutoConfig
)
from transformers.modeling_utils import PreTrainedModel
path = "./opt-6.7b-ori"
config = AutoConfig.from_pretrained('facebook/opt-6.7b')
model = AutoModelForCausalLM.from_pretrained('facebook/opt-6.7b',torch_dtype='auto',config=config,cache_dir='/ssdwork/cache/')
pretrainmodel = PreTrainedModel(config=config)
pretrainmodel.model = model
pretrainmodel.save_pretrained(save_directory=path, is_main_process=True , state_dict=None)
```
### BUG
```
Traceback (most recent call last):
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 308, in _check_seekable
f.seek(f.tell())
AttributeError: 'list' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 461, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 594, in load
with _open_file_like(f, 'rb') as opened_file:
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 235, in _open_file_like
return _open_buffer_reader(name_or_buffer)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 220, in __init__
_check_seekable(buffer)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 311, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 304, in raise_err_msg
raise type(e)(msg)
AttributeError: 'list' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "re_generate_best_model_from_shard.py", line 114, in <module>
model = AutoModelForCausalLM.from_pretrained(folder_name, torch_dtype='auto', cache_dir='/ssdwork/cache/')
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2148, in from_pretrained
one_state_dict = load_state_dict(resolved_archive_file)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 464, in load_state_dict
with open(checkpoint_file) as f:
TypeError: expected str, bytes or os.PathLike object, not list
```
<img width="702" alt="image" src="https://user-images.githubusercontent.com/83019888/198872310-03848039-fef4-47e4-826b-2d1224c38eca.png">
### Expected behavior
The original model weights' data type is fp16. As we know, loading models with `from_pretrained()` changes the dtype from fp16 to fp32 by default, so I added `torch_dtype='auto'` as per the official guideline, but it turns out to raise an error. However, if we use `torch_dtype=torch.float16`, we get the desired result.
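A minimal sketch of the working explicit-dtype form mentioned above (assuming the checkpoint is stored in fp16):

```python
import torch
from transformers import AutoModelForCausalLM

# Pass the dtype explicitly instead of torch_dtype="auto".
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    torch_dtype=torch.float16,
)
print(next(model.parameters()).dtype)  # torch.float16
```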
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19939/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19938
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19938/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19938/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19938/events
|
https://github.com/huggingface/transformers/issues/19938
| 1,426,491,082
|
I_kwDOCUB6oc5VBobK
| 19,938
|
Incorrect Document Content in BlenderBot Tokenizer
|
{
"login": "chujiezheng",
"id": 37283853,
"node_id": "MDQ6VXNlcjM3MjgzODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/37283853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chujiezheng",
"html_url": "https://github.com/chujiezheng",
"followers_url": "https://api.github.com/users/chujiezheng/followers",
"following_url": "https://api.github.com/users/chujiezheng/following{/other_user}",
"gists_url": "https://api.github.com/users/chujiezheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chujiezheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chujiezheng/subscriptions",
"organizations_url": "https://api.github.com/users/chujiezheng/orgs",
"repos_url": "https://api.github.com/users/chujiezheng/repos",
"events_url": "https://api.github.com/users/chujiezheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/chujiezheng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Might be of interest to @ArthurZucker ",
"Okay, this is simply because the model used, `BlenderbotTokenizer.from_pretrained(\"facebook/blenderbot-3B\")` has the attribute `add_prefix_space` set to `True` by default. If you set it to `False` we have the expected different output. \r\nLet me open a PR to fix this. "
] | 1,666
| 1,668
| 1,668
|
NONE
| null |
`The BlenderBot tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not.` However, the example outputs in the BlenderBot tokenizer docstring (`BlenderbotTokenizer`) are the same for both cases:
https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/blenderbot/tokenization_blenderbot.py#L105
The same issue also occurs in `BlenderbotTokenizerFast`:
https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/blenderbot/tokenization_blenderbot_fast.py#L64
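For what it's worth, the `add_prefix_space` behaviour mentioned in the resolution comment can be checked directly; a minimal sketch (assumes access to the `facebook/blenderbot-3B` checkpoint):

```python
from transformers import BlenderbotTokenizer

# The checkpoint config sets add_prefix_space=True by default, which masks the difference.
tok = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B", add_prefix_space=False)
print(tok("Hello world")["input_ids"])   # word at sentence start, no leading space
print(tok(" Hello world")["input_ids"])  # same word preceded by a space
```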
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19938/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19937
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19937/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19937/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19937/events
|
https://github.com/huggingface/transformers/issues/19937
| 1,426,447,043
|
I_kwDOCUB6oc5VBdrD
| 19,937
|
Onnx CLIP Model outputs "Shape mismatch" warining on inference
|
{
"login": "abnokubi",
"id": 85205333,
"node_id": "MDQ6VXNlcjg1MjA1MzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/85205333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abnokubi",
"html_url": "https://github.com/abnokubi",
"followers_url": "https://api.github.com/users/abnokubi/followers",
"following_url": "https://api.github.com/users/abnokubi/following{/other_user}",
"gists_url": "https://api.github.com/users/abnokubi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abnokubi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abnokubi/subscriptions",
"organizations_url": "https://api.github.com/users/abnokubi/orgs",
"repos_url": "https://api.github.com/users/abnokubi/repos",
"events_url": "https://api.github.com/users/abnokubi/events{/privacy}",
"received_events_url": "https://api.github.com/users/abnokubi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @lewtun ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Convert clip to onnx `python -m transformers.onnx -m openai/clip-vit-base-patch32 onnx/`
2. Try to run inference
```python
import transformers
import onnxruntime
import numpy as np
from PIL import Image
import torch
model_path = "onnx/model.onnx"
example_image = "swatch01.png"
session = onnxruntime.InferenceSession(model_path)
processor = transformers.CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
labels = [
"a close up of a person's eye photo",
"a person's arm with a bunch of lipstick swatches on it",
# (snip)
]
img = Image.open(example_image)
inputs = processor(text=labels, images=img,
return_tensors="np", padding=True)
ort_inputs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"pixel_values": inputs["pixel_values"]
}
ort_outputs = session.run(None, ort_inputs)
```
3. It works, but with some warnings:
```
2022-10-27 09:23:18.299824270 [W:onnxruntime:, execution_frame.cc:594 AllocateMLValueTensorPreAllocateBuffer] Shape mismatch attempting to re-use buffer. {42,512} != {1,512}. Validate usage of dim_value (values should be > 0) and dim_param (all values with the same string should equate to the same size) in shapes in the model.
2022-10-27 09:23:18.300383069 [W:onnxruntime:, execution_frame.cc:594 AllocateMLValueTensorPreAllocateBuffer] Shape mismatch attempting to re-use buffer. {42,1} != {1,1}. Validate usage of dim_value (values should be > 0) and dim_param (all values with the same string should equate to the same size) in shapes in the model.
```
(`42` is the number of labels)
### Expected behavior
The model works fine, but the warning is printed on every inference.
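If the warning is indeed benign (the outputs are correct), one way to silence it on the user side is to raise onnxruntime's log severity; a minimal sketch:

```python
import onnxruntime

opts = onnxruntime.SessionOptions()
opts.log_severity_level = 3  # 0=verbose, 1=info, 2=warning (default), 3=error
session = onnxruntime.InferenceSession("onnx/model.onnx", sess_options=opts)
```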
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19937/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19936
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19936/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19936/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19936/events
|
https://github.com/huggingface/transformers/pull/19936
| 1,426,445,026
|
PR_kwDOCUB6oc5BtW0A
| 19,936
|
[Doctest] Add configuration_fsmt.py
|
{
"login": "sha016",
"id": 92833633,
"node_id": "U_kgDOBYiHYQ",
"avatar_url": "https://avatars.githubusercontent.com/u/92833633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sha016",
"html_url": "https://github.com/sha016",
"followers_url": "https://api.github.com/users/sha016/followers",
"following_url": "https://api.github.com/users/sha016/following{/other_user}",
"gists_url": "https://api.github.com/users/sha016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sha016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sha016/subscriptions",
"organizations_url": "https://api.github.com/users/sha016/orgs",
"repos_url": "https://api.github.com/users/sha016/repos",
"events_url": "https://api.github.com/users/sha016/events{/privacy}",
"received_events_url": "https://api.github.com/users/sha016/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@sha016 I force pushed to this PR with a tiny update.\r\nOnce @sgugger approves this PR, we can merge it to `main`.\r\n\r\nThank you again for the contribution!\r\n\r\n"
] | 1,666
| 1,669
| 1,669
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds `configuration_fsmt.py` to `utils/documentation_tests.txt`
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh please review, thanks!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19936/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19936",
"html_url": "https://github.com/huggingface/transformers/pull/19936",
"diff_url": "https://github.com/huggingface/transformers/pull/19936.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19936.patch",
"merged_at": 1669646866000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19935
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19935/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19935/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19935/events
|
https://github.com/huggingface/transformers/pull/19935
| 1,426,388,612
|
PR_kwDOCUB6oc5BtK-E
| 19,935
|
Update Code of Conduct to Contributor Covenant v2.1
|
{
"login": "pankali",
"id": 108031802,
"node_id": "U_kgDOBnBvOg",
"avatar_url": "https://avatars.githubusercontent.com/u/108031802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pankali",
"html_url": "https://github.com/pankali",
"followers_url": "https://api.github.com/users/pankali/followers",
"following_url": "https://api.github.com/users/pankali/following{/other_user}",
"gists_url": "https://api.github.com/users/pankali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pankali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pankali/subscriptions",
"organizations_url": "https://api.github.com/users/pankali/orgs",
"repos_url": "https://api.github.com/users/pankali/repos",
"events_url": "https://api.github.com/users/pankali/events{/privacy}",
"received_events_url": "https://api.github.com/users/pankali/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19935/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19935",
"html_url": "https://github.com/huggingface/transformers/pull/19935",
"diff_url": "https://github.com/huggingface/transformers/pull/19935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19935.patch",
"merged_at": 1666969418000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19934
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19934/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19934/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19934/events
|
https://github.com/huggingface/transformers/issues/19934
| 1,426,366,487
|
I_kwDOCUB6oc5VBKAX
| 19,934
|
UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch.
|
{
"login": "FurkanGozukara",
"id": 19240467,
"node_id": "MDQ6VXNlcjE5MjQwNDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FurkanGozukara",
"html_url": "https://github.com/FurkanGozukara",
"followers_url": "https://api.github.com/users/FurkanGozukara/followers",
"following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}",
"gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions",
"organizations_url": "https://api.github.com/users/FurkanGozukara/orgs",
"repos_url": "https://api.github.com/users/FurkanGozukara/repos",
"events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}",
"received_events_url": "https://api.github.com/users/FurkanGozukara/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ArthurZucker so it's on your radar.",
"Interesting, we might have more model to refactor ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,676
| 1,676
|
NONE
| null |
### System Info
C:\Python399\lib\site-packages\transformers\models\bigbird_pegasus\modeling_bigbird_pegasus.py:807: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
torch.arange(indices.shape[0] * indices.shape[1] * num_indices_to_gather, device=indices.device)
```python
from transformers import pipeline

with open("TextFile1.txt", "r") as f:
    ARTICLE = f.read()

summarizer = pipeline("summarization", model="google/bigbird-pegasus-large-bigpatent")
print(summarizer(ARTICLE))  # the __floordiv__ warning is emitted during generation
```
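Until the model code switches to `torch.div(..., rounding_mode=...)`, a user-side stop-gap is to filter this specific warning; a minimal sketch:

```python
import warnings

# Suppress only this deprecation warning from the BigBird-Pegasus code path.
warnings.filterwarnings("ignore", message="__floordiv__ is deprecated")
```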
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19934/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19933
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19933/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19933/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19933/events
|
https://github.com/huggingface/transformers/pull/19933
| 1,426,287,127
|
PR_kwDOCUB6oc5Bs1tV
| 19,933
|
Map `RealmBertModel` for `AutoModel`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @younesbelkada \r\nWe should also work on `src/transformers/__init__.py`, but otherwise LGTM, thanks!",
"Ah yes, great catch! Will add it now",
"Thanks a lot for the explanation!\r\nYes let's stay pragmatic, I will probably just remove it from the `BetterTransformers` test ",
"@sgugger But we usually expose the base model, no, like `BertModel`?",
"Yes, but I'm very unsure that this is the base REALM model. It's more of a building block towards it."
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR maps `RealmBertModel` so that it can be used with `AutoModel`, for consistency with `BertModel`.
Why this PR? I wanted to automate tests for the `BetterTransformers` integration in `optimum` without having to import the class manually, see here: https://github.com/younesbelkada/optimum/blob/49575c5b016392383f0c2ebc1565ef56747b87e6/tests/bettertransformers/test_bettertransformers.py#L68
Since `Realm` should be supported by `BetterTransformers`, this PR would make those tests easier to implement.
cc @sgugger @ydshieh
https://github.com/huggingface/optimum/pull/423
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19933/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19933",
"html_url": "https://github.com/huggingface/transformers/pull/19933",
"diff_url": "https://github.com/huggingface/transformers/pull/19933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19933.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19932
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19932/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19932/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19932/events
|
https://github.com/huggingface/transformers/pull/19932
| 1,426,267,369
|
PR_kwDOCUB6oc5BsxkO
| 19,932
|
Add LayoutLMv3 resource
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,667
| 1,667
|
MEMBER
| null |
From #19848, this PR adds resources for LayoutLMv3.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19932/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19932",
"html_url": "https://github.com/huggingface/transformers/pull/19932",
"diff_url": "https://github.com/huggingface/transformers/pull/19932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19932.patch",
"merged_at": 1667326246000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19931
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19931/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19931/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19931/events
|
https://github.com/huggingface/transformers/pull/19931
| 1,426,257,026
|
PR_kwDOCUB6oc5BsvZk
| 19,931
|
Add wav2vec2 resources
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
MEMBER
| null |
From #19848, this PR adds resources for Wav2Vec2.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19931/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19931",
"html_url": "https://github.com/huggingface/transformers/pull/19931",
"diff_url": "https://github.com/huggingface/transformers/pull/19931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19931.patch",
"merged_at": 1666988899000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19930
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19930/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19930/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19930/events
|
https://github.com/huggingface/transformers/pull/19930
| 1,426,219,914
|
PR_kwDOCUB6oc5Bsnjz
| 19,930
|
Add DistilBERT resources
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
MEMBER
| null |
From #19848, this PR adds resources for DistilBERT.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19930/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19930",
"html_url": "https://github.com/huggingface/transformers/pull/19930",
"diff_url": "https://github.com/huggingface/transformers/pull/19930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19930.patch",
"merged_at": 1666988168000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19929
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19929/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19929/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19929/events
|
https://github.com/huggingface/transformers/issues/19929
| 1,426,213,487
|
I_kwDOCUB6oc5VAkpv
| 19,929
|
Token indices sequence length is longer than the specified maximum sequence length for this model (11261 > 1024). Running this sequence through the model will result in indexing errors
|
{
"login": "FurkanGozukara",
"id": 19240467,
"node_id": "MDQ6VXNlcjE5MjQwNDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FurkanGozukara",
"html_url": "https://github.com/FurkanGozukara",
"followers_url": "https://api.github.com/users/FurkanGozukara/followers",
"following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}",
"gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions",
"organizations_url": "https://api.github.com/users/FurkanGozukara/orgs",
"repos_url": "https://api.github.com/users/FurkanGozukara/repos",
"events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}",
"received_events_url": "https://api.github.com/users/FurkanGozukara/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep the issues for bugs (in the library) and feature requests only.",
"@sgugger you are right i am closing this thread\r\n\r\ncould you answer there: https://discuss.huggingface.co/t/which-summarization-model-of-huggingface-supports-more-than-1024-tokens-which-model-is-more-suitable-for-programming-related-articles/25095"
] | 1,666
| 1,666
| 1,666
|
NONE
| null |
### System Info
I tested 2 models (sshleifer/distilbart-cnn-12-6, facebook/bart-large-cnn) and they both have a very small maximum token length of 1024.
So which model supports the longest input, i.e. the highest token count?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tested 2 models (sshleifer/distilbart-cnn-12-6, facebook/bart-large-cnn) and they both have a very small maximum token length of 1024.
So which model supports the longest input, i.e. the highest token count?
### Expected behavior
I tested 2 models (sshleifer/distilbart-cnn-12-6, facebook/bart-large-cnn) and they both have a very small maximum token length of 1024.
So which model supports the longest input, i.e. the highest token count?
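For reference, a small hedged sketch of how the limit can be checked programmatically:
```python
from transformers import AutoTokenizer

for name in ["sshleifer/distilbart-cnn-12-6", "facebook/bart-large-cnn"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    print(name, tokenizer.model_max_length)  # both report 1024
```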
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19929/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19928
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19928/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19928/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19928/events
|
https://github.com/huggingface/transformers/pull/19928
| 1,426,184,633
|
PR_kwDOCUB6oc5BsgC-
| 19,928
|
Add BART resources
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
MEMBER
| null |
From #19848, this PR adds resources for BART.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19928/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19928",
"html_url": "https://github.com/huggingface/transformers/pull/19928",
"diff_url": "https://github.com/huggingface/transformers/pull/19928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19928.patch",
"merged_at": 1666988143000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19927
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19927/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19927/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19927/events
|
https://github.com/huggingface/transformers/pull/19927
| 1,426,009,079
|
PR_kwDOCUB6oc5Br6Sh
| 19,927
|
Add `accelerate` support for BART-like models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot! Merging since https://github.com/huggingface/accelerate/pull/792 has been merged 🟢 "
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds `accelerate` support for BART-like models, so that these models can be loaded in 8bit using `load_in_8bit=True`.
Follows the same logic as https://github.com/huggingface/transformers/pull/19912 regarding shared embeddings
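For illustration, a hedged sketch of what this support enables once merged (the checkpoint is only an example; this assumes `bitsandbytes` and a recent `accelerate` are installed):
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/bart-large-cnn",  # any BART-like checkpoint
    device_map="auto",          # let accelerate place the weights
    load_in_8bit=True,          # quantize linear layers to int8
)
```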
Do not merge before https://github.com/huggingface/accelerate/pull/792 gets merged!
cc @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19927/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19927",
"html_url": "https://github.com/huggingface/transformers/pull/19927",
"diff_url": "https://github.com/huggingface/transformers/pull/19927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19927.patch",
"merged_at": 1666905293000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19926
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19926/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19926/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19926/events
|
https://github.com/huggingface/transformers/issues/19926
| 1,425,678,156
|
I_kwDOCUB6oc5U-h9M
| 19,926
|
Does the `head_mask` argument of the BERT model's `forward` really speed it up?
|
{
"login": "CaffreyR",
"id": 84232793,
"node_id": "MDQ6VXNlcjg0MjMyNzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/84232793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaffreyR",
"html_url": "https://github.com/CaffreyR",
"followers_url": "https://api.github.com/users/CaffreyR/followers",
"following_url": "https://api.github.com/users/CaffreyR/following{/other_user}",
"gists_url": "https://api.github.com/users/CaffreyR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaffreyR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaffreyR/subscriptions",
"organizations_url": "https://api.github.com/users/CaffreyR/orgs",
"repos_url": "https://api.github.com/users/CaffreyR/repos",
"events_url": "https://api.github.com/users/CaffreyR/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaffreyR/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @CaffreyR, the head mask isn't there to speed things up.\r\n\r\nYou can read a bit more about it in the [bertology](https://huggingface.co/docs/transformers/v4.23.1/en/bertology) documentation. It's mostly to see which heads impact your prediction, it's not made to speed things up.",
"So you means that only if we prune the head according to the `head mask`, we can speed up our model?",
"I mean that looking at `head_mask` as a way to speed up the model doesn't work :)\r\nIf you'd like to speed up your model, you can look at changing the precision, quantizing it, distillating it; but removing heads isn't going to speed it up or very very marginally.",
"Great! Many thanks!"
] | 1,666
| 1,666
| 1,666
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.13.0.dev20220709 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik Many thanks!
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, in the `bert` model (or other models), when we want the output we usually call `output = model(**batch)`. In the source code of the BERT model there is actually a parameter called `head_mask`. So if we pass a different `head_mask` to the model, like `outputs = model(head_mask=head_mask, **batch)`, will the runtime be different?
So I ran the two snippets below, and the one with the mostly-zero `head_mask` was not any faster:
```python
head_mask = torch.ones(12, 12)
print(head_mask)

import time

for batch in dataloader:
    for k, v in batch.items():
        batch[k] = v
    start = time.perf_counter()
    outputs = model(head_mask=head_mask, **batch)
    end = time.perf_counter()
    print(end - start)
```
```python
head_mask = torch.zeros(12, 12)
for i in range(12):
    head_mask[i][0] = 1
print(head_mask)

import time

for batch in dataloader:
    for k, v in batch.items():
        batch[k] = v
    start = time.perf_counter()
    outputs = model(head_mask=head_mask, **batch)
    end = time.perf_counter()
    print(end - start)
```
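For context, `head_mask` only multiplies each head's attention output by 0 or 1, so the full computation still runs. Physically removing heads is done with `prune_heads` instead; a hedged sketch (the checkpoint and head choices are only examples):
```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
# Remove heads 0 and 1 in layer 0, and head 2 in layer 1; the pruned
# parameters are gone from the model, unlike with head_mask.
model.prune_heads({0: [0, 1], 1: [2]})
```
Even then, as discussed above, the speed-up from removing heads is marginal.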
### Expected behavior
So I ran the two snippets, and the one with the mostly-zero `head_mask` was not any faster.
Many thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19926/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19925
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19925/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19925/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19925/events
|
https://github.com/huggingface/transformers/issues/19925
| 1,425,485,544
|
I_kwDOCUB6oc5U9y7o
| 19,925
|
Does transformers have Swin Object Detection?
|
{
"login": "BakingBrains",
"id": 51019420,
"node_id": "MDQ6VXNlcjUxMDE5NDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/51019420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakingBrains",
"html_url": "https://github.com/BakingBrains",
"followers_url": "https://api.github.com/users/BakingBrains/followers",
"following_url": "https://api.github.com/users/BakingBrains/following{/other_user}",
"gists_url": "https://api.github.com/users/BakingBrains/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakingBrains/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakingBrains/subscriptions",
"organizations_url": "https://api.github.com/users/BakingBrains/orgs",
"repos_url": "https://api.github.com/users/BakingBrains/repos",
"events_url": "https://api.github.com/users/BakingBrains/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakingBrains/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask questions like this, as we keep issues for bugs and feature requests only."
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19925/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19924
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19924/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19924/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19924/events
|
https://github.com/huggingface/transformers/pull/19924
| 1,425,420,164
|
PR_kwDOCUB6oc5Bp6a1
| 19,924
|
Support segformer fx
|
{
"login": "dwlim-nota",
"id": 61265665,
"node_id": "MDQ6VXNlcjYxMjY1NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/61265665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwlim-nota",
"html_url": "https://github.com/dwlim-nota",
"followers_url": "https://api.github.com/users/dwlim-nota/followers",
"following_url": "https://api.github.com/users/dwlim-nota/following{/other_user}",
"gists_url": "https://api.github.com/users/dwlim-nota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwlim-nota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwlim-nota/subscriptions",
"organizations_url": "https://api.github.com/users/dwlim-nota/orgs",
"repos_url": "https://api.github.com/users/dwlim-nota/repos",
"events_url": "https://api.github.com/users/dwlim-nota/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwlim-nota/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I noticed that I previously made commit in the main branch(forked my branch).\r\nso I reopend PR again.\r\nCould you review again ? @michaelbenayoun \r\n\r\nthis PR is same with PR 19917(https://github.com/huggingface/transformers/pull/19917)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,667
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
I wrote a simple test to use fx with the Segformer model, but it failed:
```python
import torch
from transformers import SegformerModel, SegformerConfig, SegformerFeatureExtractor
from transformers.utils.fx import symbolic_trace
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b0")
model = SegformerModel.from_pretrained("nvidia/mit-b0")
inputs = feature_extractor(image, return_tensors="pt")
traced_model = symbolic_trace(model, ["pixel_values"])
with torch.no_grad():
    outputs = model(**inputs)
    traced_outputs = traced_model(**inputs)

assert torch.allclose(outputs.last_hidden_state, traced_outputs["last_hidden_state"])
```
When I tried to apply fx to the Segformer model, the HFTracer class could not get past the transpose_for_scores function


because Proxy(torch.Size) is not an iterable object, so I simply fixed it as below.

There was also the same issue in the forward function.

To pass `check_if_model_is_supported`, I added segformer to


## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@NielsRogge @michaelbenayoun
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19924/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19924",
"html_url": "https://github.com/huggingface/transformers/pull/19924",
"diff_url": "https://github.com/huggingface/transformers/pull/19924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19924.patch",
"merged_at": 1666961078000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19923
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19923/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19923/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19923/events
|
https://github.com/huggingface/transformers/pull/19923
| 1,425,418,989
|
PR_kwDOCUB6oc5Bp6Kw
| 19,923
|
Some fixes regarding auto mappings and test class names
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
COLLABORATOR
| null |
# What does this PR do?
Add `pegasus_x` to some auto mappings, and fix the incorrect class names in the ViTMSN testing file.
Also fix the `ESM` checkpoint.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19923/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19923",
"html_url": "https://github.com/huggingface/transformers/pull/19923",
"diff_url": "https://github.com/huggingface/transformers/pull/19923.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19923.patch",
"merged_at": 1666874339000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19922
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19922/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19922/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19922/events
|
https://github.com/huggingface/transformers/pull/19922
| 1,425,396,292
|
PR_kwDOCUB6oc5Bp1PH
| 19,922
|
Remove embarrassing debug print() in save_pretrained
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is already part of #19900 which is awaiting your review 😛 ",
"If it is, you haven't pushed it!",
"Oh! Then let's merge this."
] | 1,666
| 1,666
| 1,666
|
MEMBER
| null |
@sgugger spotted this one, sorry about that! (cc @gante)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19922/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19922",
"html_url": "https://github.com/huggingface/transformers/pull/19922",
"diff_url": "https://github.com/huggingface/transformers/pull/19922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19922.patch",
"merged_at": 1666882609000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19921
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19921/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19921/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19921/events
|
https://github.com/huggingface/transformers/pull/19921
| 1,425,315,319
|
PR_kwDOCUB6oc5Bpju_
| 19,921
|
[Whisper Tokenizer] Make more user-friendly
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm not sure if Patrick currently has the bandwidth to review this, @sgugger would you be able to take a look if you've got a spare few minutes? Thanks! 🙏",
"Test for `set_prefix_tokens` in https://github.com/huggingface/transformers/pull/19921/commits/e98821f12ba9a899d2907ebcb7d114aff8712c0b",
"Cool good to merge for me"
] | 1,666
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #19864.
In summary, the Whisper tokenizer is modified to prepend several tokens to the start-of-sequence:
- BOS token id (`<|startoftranscript|>`) -> consistent with other sequence-to-sequence models such as _BART_.
- Language token id (e.g. `<|es|>` for Spanish) -> set only when the tokenizer is instantiated with argument `language=X`. Otherwise omitted.
- Task token id (e.g. `<|translate|>` for speech translation) -> set only when the tokenizer is instantiated with argument `task=Y`. Otherwise omitted.
- No time stamps id (`<|notimestamps|>`) -> set only when the tokenizer is instantiated with argument `predict_timestamps=False`. For `predict_timestamps=True`, it is omitted.
In addition, it is modified to always append the end-of-sequence token to the end of the label sequence (`<|endoftext|>`).
The updated tokenizer behaves as follows:
```python
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny", language="english", task="transcribe", predict_timestamps=False)
input_ids = tokenizer("hey").input_ids
text_with_special = tokenizer.decode(input_ids, skip_special_tokens=False)
text = tokenizer.decode(input_ids, skip_special_tokens=True)
print("Input ids :", input_ids)
print("Text w/ special :", text_with_special)
print("Text :", text)
```
**Print Output:**
```
Input ids : [50258, 50259, 50359, 50363, 17230, 50257]
Text w/ special : <|startoftranscript|><|en|><|transcribe|><|notimestamps|>hey<|endoftext|>
Text : hey
```
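The prefix tokens can also be updated on an existing tokenizer via `set_prefix_tokens`; a short sketch (the language/task values are only examples):
```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
# Switch to Spanish speech translation with timestamp prediction enabled
tokenizer.set_prefix_tokens(language="spanish", task="translate", predict_timestamps=True)
```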
The attention mask functionality of the Whisper tokenizer **is** retained (_c.f._ https://github.com/huggingface/transformers/issues/19864#issuecomment-1291799687).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19921/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19921",
"html_url": "https://github.com/huggingface/transformers/pull/19921",
"diff_url": "https://github.com/huggingface/transformers/pull/19921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19921.patch",
"merged_at": 1667485360000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19920
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19920/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19920/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19920/events
|
https://github.com/huggingface/transformers/pull/19920
| 1,425,257,530
|
PR_kwDOCUB6oc5BpXVW
| 19,920
|
donut -> donut-swin
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,667
| 1,667
|
COLLABORATOR
| null |
# What does this PR do?
The model type "donut" doesn't exist, and we don't have `DonutConfig` or `DonutModel`.
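A quick hedged illustration of the naming (the checkpoint is an example, and the printed values are what I'd expect rather than verified output):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("naver-clova-ix/donut-base")
print(config.model_type)          # "vision-encoder-decoder"
print(config.encoder.model_type)  # "donut-swin" (there is no plain "donut" type)
```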
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19920/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19920",
"html_url": "https://github.com/huggingface/transformers/pull/19920",
"diff_url": "https://github.com/huggingface/transformers/pull/19920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19920.patch",
"merged_at": 1667224577000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19919
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19919/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19919/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19919/events
|
https://github.com/huggingface/transformers/issues/19919
| 1,425,223,587
|
I_kwDOCUB6oc5U8y-j
| 19,919
|
During evaluation, the GPU stalls and stops working
|
{
"login": "RockMiin",
"id": 52374789,
"node_id": "MDQ6VXNlcjUyMzc0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/52374789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RockMiin",
"html_url": "https://github.com/RockMiin",
"followers_url": "https://api.github.com/users/RockMiin/followers",
"following_url": "https://api.github.com/users/RockMiin/following{/other_user}",
"gists_url": "https://api.github.com/users/RockMiin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RockMiin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RockMiin/subscriptions",
"organizations_url": "https://api.github.com/users/RockMiin/orgs",
"repos_url": "https://api.github.com/users/RockMiin/repos",
"events_url": "https://api.github.com/users/RockMiin/events{/privacy}",
"received_events_url": "https://api.github.com/users/RockMiin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It's possible there are tensors not all of the same lengths across processes (maybe the labels since they are not padded?). When trying to gather them, torch.distributed just hangs instead of throwing an error.",
"@sgugger \r\nFirst of all, thank you for your answer.\r\nThe label is padded.\r\nWhen generating, it seems to be a problem that occurs because the end point of the sentence is different for each gpu process.\r\nHow can we solve this?",
"@sgugger Can you help me about this issue?\r\nThank you",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,671
| 1,671
|
NONE
| null |
### System Info
I am fine-tuning a summarization task with a GPT model using multiple GPUs. There is no problem during training. However, during evaluation, the following phenomenon occurs:
The GPU utilization stays at 100%, but the temperature is very low.
And the evaluation process makes no further progress.
How can I solve this problem?
<img width="474" alt="스크린샷 2022-10-27 오후 5 01 10" src="https://user-images.githubusercontent.com/52374789/198227290-ee01b21f-8f1a-4231-8595-546611c96098.png">
### Who can help?
@patil-suraj @patrickvonplaten @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. I use `Seq2SeqTrainingArguments` and `Seq2SeqTrainer`
2. I modified `prediction_step` in the Seq2Seq Trainer:
```python
def prediction_step(
    self,
    model: nn.Module,
    inputs: Dict[str, Union[torch.Tensor, Any]],
    prediction_loss_only: bool,
    ignore_keys: Optional[List[str]] = None,
) -> Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]:
    """
    Perform an evaluation step on :obj:`model` using :obj:`inputs`.

    Subclass and override to inject custom behavior.

    Args:
        model (:obj:`nn.Module`):
            The model to evaluate.
        inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`):
            The inputs and targets of the model.
            The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
            argument :obj:`labels`. Check your model's documentation for all accepted arguments.
        prediction_loss_only (:obj:`bool`):
            Whether or not to return the loss only.

    Return:
        Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: A tuple with the loss, logits and
        labels (each being optional).
    """
    if not self.args.predict_with_generate or prediction_loss_only:
        return super().prediction_step(
            model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys
        )

    has_labels = "labels" in inputs
    inputs = self._prepare_inputs(inputs)
    if has_labels:
        labels = inputs['labels']
    else:
        labels = None

    generation_inputs = {"input_ids": inputs["input_ids"]}
    slice_start = inputs["input_ids"].shape[-1]

    # XXX: adapt synced_gpus for fairscale as well
    max_length = slice_start + self.generation_max_length if slice_start + self.generation_max_length < 2048 else 2048
    # print(f'num beams : {self.generation_num_beams}')
    gen_kwargs = {
        "max_length": max_length,
        # "min_length": max_length,
        "num_beams": self.generation_num_beams,
        "pad_token_id": self.tokenizer.pad_token_id,
        "eos_token_id": self.tokenizer.eos_token_id,
        "early_stopping": True,
        "synced_gpus": True,
    }

    if self.args.predict_with_generate and not self.args.prediction_loss_only:
        generated_tokens = self.model.generate(
            **generation_inputs,
            **gen_kwargs,
        )
        generated_tokens = generated_tokens[:, slice_start:]
        if generated_tokens.shape[-1] < gen_kwargs["max_length"]:
            generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_kwargs["max_length"])

    with torch.no_grad():
        with self.autocast_smart_context_manager():
            outputs = model(input_ids=inputs['input_ids'], labels=inputs['input_ids'])
        if has_labels:
            if self.label_smoother is not None:
                loss = self.label_smoother(outputs, inputs["input_ids"])
            else:
                loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
        else:
            loss = None

    loss = None
    if self.args.prediction_loss_only:
        return (loss, None, None)

    return (loss, generated_tokens, labels)
```
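As suggested in the comments above, `torch.distributed` hangs instead of raising an error when the gathered tensors have different lengths across processes. A hedged, standalone sketch of the kind of right-padding that keeps shapes equal (the helper name is illustrative; the `Trainer` itself uses `_pad_tensors_to_max_len` as in the snippet above):
```python
import torch

def pad_to_max_len(tensor: torch.Tensor, max_length: int, pad_token_id: int) -> torch.Tensor:
    # Right-pad a (batch, seq_len) tensor so every process gathers equal shapes.
    padded = torch.full((tensor.shape[0], max_length), pad_token_id, dtype=tensor.dtype)
    padded[:, : tensor.shape[-1]] = tensor
    return padded

labels = torch.tensor([[5, 6, 7]])
print(pad_to_max_len(labels, max_length=8, pad_token_id=0))
# tensor([[5, 6, 7, 0, 0, 0, 0, 0]])
```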
### Expected behavior
I also tried fine-tuning a translation task.
However, there was no error with the translation task; from experience, the issue seems to occur when the generated sentences are long.
I have tried several times and it occurred at different points, and I have confirmed that it is not a data problem.
I'd appreciate your help.
Thank you
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19919/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19918
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19918/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19918/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19918/events
|
https://github.com/huggingface/transformers/issues/19918
| 1,425,082,129
|
I_kwDOCUB6oc5U8QcR
| 19,918
|
Why is training on multiple GPUs slower than training on a single GPU for fine-tuning a speech-to-text model?
|
{
"login": "ishamnewsreels",
"id": 110966055,
"node_id": "U_kgDOBp01Jw",
"avatar_url": "https://avatars.githubusercontent.com/u/110966055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ishamnewsreels",
"html_url": "https://github.com/ishamnewsreels",
"followers_url": "https://api.github.com/users/ishamnewsreels/followers",
"following_url": "https://api.github.com/users/ishamnewsreels/following{/other_user}",
"gists_url": "https://api.github.com/users/ishamnewsreels/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ishamnewsreels/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishamnewsreels/subscriptions",
"organizations_url": "https://api.github.com/users/ishamnewsreels/orgs",
"repos_url": "https://api.github.com/users/ishamnewsreels/repos",
"events_url": "https://api.github.com/users/ishamnewsreels/events{/privacy}",
"received_events_url": "https://api.github.com/users/ishamnewsreels/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sanchit-gandhi could you take a look here?",
"Hey @ishamnewsreels! A couple of things:\r\n- Do we need to set `OMP_NUM_THREADS=1`? Looks like this affects multi-processing, wondering if it's interfering with distributed training\r\n- Whilst distributed training is running, could you open a new command line window and execute the Unix command:\r\n```bash\r\nwatch -n 0.1 nvidia-smi\r\n```\r\nThis will launch the NVIDIA system management interface, and display individual GPU usage. We'd expect all three of your GPUs to be in use for distributed training. If < 3 are being used there's an issue with launching distributed training!",
"Hi @sanchit-gandhi. \r\n\r\nIf I do not set `OMP_NUM_THREADS=1`, the code doesn't execute at all. \r\nAlso, I have used the unix command that you mentioned and I can observe that all 3 gpus are used. \r\n\r\n",
"Hey @ishamnewsreels - that's good to see that all three GPUs are used. I think I see what the problem is! You have the number of epochs fixed as 30, but are changing the effective batch size for single vs multi GPU training. This changes the number of optimisation steps (= num epochs * epoch-size / batch-size).\r\n\r\nWith single GPU, your settings were as follows:\r\n- `per_device_batch_size` = 8\r\n- `gradient_accumulation_steps` = 2\r\n- Number of devices = 1\r\n- Effective batch size = `per_device_batch_size` * `gradient_accumulation_steps` * number of devices = 16\r\n- For 30 epochs, this gives **11940** optimisation steps\r\n\r\nWith three GPUs, your settings were as follows:\r\n- `per_device_batch_size` = 8\r\n- `gradient_accumulation_steps` = 1\r\n- Number of devices = 3\r\n- Effective batch size = `per_device_batch_size` * `gradient_accumulation_steps` * number of devices = 24 (1.5x more what we had for single GPU)\r\n- For 30 epochs, this gives **7980** optimisation steps (1.5x less than what we had for single GPU)\r\n\r\nThe progress bars that we see during training are **not** the number of epochs, but rather the number of **optimisation steps**. With multi-GPU, we're training for fewer optimisation steps (as the batch size is larger), and so we expect the number of optimisation steps to be less after 4 minutes. After 4 minutes, the % of training completed is 1.67% for single GPU, and 1.00% for multi GPU -> so the training progress is quite similar after this time. We can attribute the difference in training progress to the added communication cost in using multi GPU vs single GPU (we have to sync the GPU's up when we do multi GPU training, giving a communication overhead).\r\n\r\nHope that makes sense!"
] | 1,666
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.15.0-52-generic-x86_64-with-debian-bookworm-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
Speech: @patrickvonplaten, @anton-l, @sanchit-gandhi
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
For training the Wav2Vec2 model on multiple GPUs, I made small changes in the script `run_speech_recognition_ctc.py` to load a custom Hindi dataset; no further changes were made. I just modified the `nproc_per_node` parameter (the number of GPUs) in run.sh:
```
OMP_NUM_THREADS=1 python -m torch.distributed.launch \
--nproc_per_node 3 run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-xls-r-300m" \
--output_dir="./new_output" \
--overwrite_output_dir \
--num_train_epochs="30" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="8" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--length_column_name="input_length" \
--save_steps="200" \
--eval_steps="200" \
--logging_steps="200" \
--layerdrop="0.0" \
--save_total_limit="2" \
--freeze_feature_encoder \
--gradient_checkpointing \
--chars_to_ignore \। \| \’ \– \, \? \. \! \- \; \: \" \“ \% \‘ \” \� \' \
--fp16 \
--group_by_length \
--do_train --do_eval
```
Original code provided in this [repository](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#common-voice-ctc).
For training on a single GPU, small changes were made to load the custom dataset. The code is from this blog: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2. The rest of the code is the same.
### Expected behavior
Using one GPU:

At 4 minutes, it had already reached 200 epochs.
Using multiple GPUs:

At 4 minutes, it had only reached 80 epochs.
Using multiple GPUs should speed up the training/fine-tuning process, but instead it is slower. I kindly need your support to check this issue.
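For reference, a tiny hedged sketch of the optimisation-step arithmetic discussed in the comments (the function is illustrative, not part of `transformers`; the dataset size of 6368 is inferred from the step counts, not verified):
```python
import math

def optimisation_steps(num_epochs: int, dataset_size: int, per_device_batch_size: int,
                       grad_accum_steps: int, num_devices: int) -> int:
    # The effective batch size grows with the number of devices, so the same
    # number of epochs yields fewer optimisation steps on more GPUs.
    effective_batch = per_device_batch_size * grad_accum_steps * num_devices
    return num_epochs * math.ceil(dataset_size / effective_batch)

print(optimisation_steps(30, 6368, 8, 2, 1))  # 11940 steps on a single GPU
print(optimisation_steps(30, 6368, 8, 1, 3))  # 7980 steps on three GPUs
```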
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19918/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19917
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19917/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19917/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19917/events
|
https://github.com/huggingface/transformers/pull/19917
| 1,425,080,765
|
PR_kwDOCUB6oc5Box-m
| 19,917
|
Support segformer fx
|
{
"login": "dwlim-nota",
"id": 61265665,
"node_id": "MDQ6VXNlcjYxMjY1NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/61265665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwlim-nota",
"html_url": "https://github.com/dwlim-nota",
"followers_url": "https://api.github.com/users/dwlim-nota/followers",
"following_url": "https://api.github.com/users/dwlim-nota/following{/other_user}",
"gists_url": "https://api.github.com/users/dwlim-nota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwlim-nota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwlim-nota/subscriptions",
"organizations_url": "https://api.github.com/users/dwlim-nota/orgs",
"repos_url": "https://api.github.com/users/dwlim-nota/repos",
"events_url": "https://api.github.com/users/dwlim-nota/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwlim-nota/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @michaelbenayoun :)\r\nI missed `fx_compatible = True` attribute.\r\nUpdated it!",
"also, I checked that glpn model was copied from segformer.\r\nto overcome consistency test, updated glpn code too."
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
I wrote a simple test to use fx with the Segformer model, but it failed:
```python
import torch
from transformers import SegformerModel, SegformerConfig, SegformerFeatureExtractor
from transformers.utils.fx import symbolic_trace
from datasets import load_dataset

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b0")
model = SegformerModel.from_pretrained("nvidia/mit-b0")

inputs = feature_extractor(image, return_tensors="pt")
traced_model = symbolic_trace(model, ["pixel_values"])

with torch.no_grad():
    outputs = model(**inputs)
    traced_outputs = traced_model(**inputs)

assert torch.allclose(outputs.last_hidden_state, traced_outputs["last_hidden_state"])
```
When I tried to apply fx to the Segformer model, the HFTracer class could not get past the `transpose_for_scores` function:


because a `Proxy` wrapping a `torch.Size` is not an iterable object.
So I simply fixed it as shown below:

There was the same issue in the `forward` function:

To pass `check_if_model_is_supported`, I added Segformer to the list of supported models:


## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@NielsRogge @michaelbenayoun
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19917/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19917",
"html_url": "https://github.com/huggingface/transformers/pull/19917",
"diff_url": "https://github.com/huggingface/transformers/pull/19917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19917.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19916
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19916/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19916/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19916/events
|
https://github.com/huggingface/transformers/issues/19916
| 1,425,049,492
|
I_kwDOCUB6oc5U8IeU
| 19,916
|
Fine-tuning translation model speed anomalies
|
{
"login": "chaodreaming",
"id": 49591435,
"node_id": "MDQ6VXNlcjQ5NTkxNDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/49591435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chaodreaming",
"html_url": "https://github.com/chaodreaming",
"followers_url": "https://api.github.com/users/chaodreaming/followers",
"following_url": "https://api.github.com/users/chaodreaming/following{/other_user}",
"gists_url": "https://api.github.com/users/chaodreaming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chaodreaming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaodreaming/subscriptions",
"organizations_url": "https://api.github.com/users/chaodreaming/orgs",
"repos_url": "https://api.github.com/users/chaodreaming/repos",
"events_url": "https://api.github.com/users/chaodreaming/events{/privacy}",
"received_events_url": "https://api.github.com/users/chaodreaming/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to debug training like this as we keep features for bugs (with a clear reproducer) and feature requests only.",
"I think it is a bug in itself, I tried many devices and the speed is almost the same, obviously not reasonable",
"> t4 graphics card half precision about 10, a100 is more than 300, even if not up to 30 times the speed should not be almost the same speed, and here the multi-threaded seems to be bad\r\n\r\nI read this 3 times, and I still don't understand. What do you mean ?\r\n\r\nJust as a note for here or the forums, trying to be over explicit might help readers understand what you're trying to do and what are your expectations.",
"Sorry, the message is machine translation, I mean there are two problems, the first problem is that different gpu speed should be different, a100 is better than t4, especially the semi-precision gap is big, so turn on the semi-precision training speed should be a big gap, however the speed is almost the same. The second problem is that multi-threading does not seem to have any accelerating effect, and can be improved a lot on cv, I do not know if this is the case in the nlp field\r\n",
"8 card speed is no different from 1 card",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
### System Info
python 3.8.12
ubuntu18.04
transformers 4.23.1

### Who can help?
@Narsil
@patil-suraj
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1FwXCjUVvrNpCuf0KoxhxQKTciRscva6t?usp=sharing
A T4 graphics card at half precision is rated around 10, while an A100 is more than 300; even if the real speedup is not the full 30x, the training speeds should not be almost identical. Multi-GPU parallelism also seems ineffective here.
### Expected behavior
Experiments show that batch size has the greatest impact on speed and gradient accumulation has a slight impact, but with the same batch size an A100 and a T4 run at almost the same speed. I expect multi-GPU training or other methods to provide acceleration; otherwise a single training run takes more than 100 days.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19916/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19915
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19915/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19915/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19915/events
|
https://github.com/huggingface/transformers/issues/19915
| 1,425,027,605
|
I_kwDOCUB6oc5U8DIV
| 19,915
|
Unable to see the weight files after quantization
|
{
"login": "pradeepdev-1995",
"id": 41164884,
"node_id": "MDQ6VXNlcjQxMTY0ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/41164884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pradeepdev-1995",
"html_url": "https://github.com/pradeepdev-1995",
"followers_url": "https://api.github.com/users/pradeepdev-1995/followers",
"following_url": "https://api.github.com/users/pradeepdev-1995/following{/other_user}",
"gists_url": "https://api.github.com/users/pradeepdev-1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pradeepdev-1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pradeepdev-1995/subscriptions",
"organizations_url": "https://api.github.com/users/pradeepdev-1995/orgs",
"repos_url": "https://api.github.com/users/pradeepdev-1995/repos",
"events_url": "https://api.github.com/users/pradeepdev-1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/pradeepdev-1995/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Maybe of interest to @michaelbenayoun :)",
"Hi @pradeepdev-1995 ,\r\nYou don't get this issue first ?\r\n```\r\nAttributeError: 'torch.dtype' object has no attribute 'numel'\r\n```",
"Yes. @michaelbenayoun \nBut after rerun in second time in the google colab,\nIt worked without error.\nBut only config file is there.",
"Yes, I observe the same thing. In any case, I do not think this will work because you have dtypes in your state dict, which is not handled correctly by `save_pretrained` for now.",
"@michaelbenayoun\nGot it. So how can i do dynamic quantization on a model and save it in local for future use?\nPlease share the code snippet if possible ",
"This should work:\r\n```python\r\nimport torch\r\nimport os\r\nfrom transformers import AutoConfig, AutoModel\r\nmodel = AutoModel.from_pretrained(\"bert-base-uncased\")\r\nmodel_quantized = torch.quantization.quantize_dynamic(\r\n model, {torch.nn.Linear}, dtype=torch.qint8\r\n) \r\nquantized_output_dir = \"quantized/\"\r\nif not os.path.exists(quantized_output_dir):\r\n os.makedirs(quantized_output_dir)\r\n model_quantized.config.save_pretrained(quantized_output_dir)\r\n torch.save(model_quantized.state_dict(), \"quantized/pytorch_model.bin\")\r\n```\r\n\r\nBut note that you will not be able to restore your model afterwards, at least with a `from_pretrained`. You will need to:\r\n\r\n1. Load the model, either with the pre-trained weights, or random ones\r\n2. Convert the model to its dynamically quantized version\r\n3. Do: `model.load_state_dict(torch.load(path_to_the_state_dict))`\r\n\r\nYou have other ways of saving your model:\r\n\r\n- You can `jit.trace` / `jit.script` it\r\n- You can use another approach, such as [quantization with ONNX Runtime with Optimum](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization)",
"@michaelbenayoun Thank you very much for the comments.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi, \r\n\r\nI tried this, but when I checked the `config.json`, it's showing float16. Do I need to worry about it or can I ignore it?"
] | 1,666
| 1,685
| 1,669
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have tried the following code for dynamic quantization
```
import torch
import os
from transformers import AutoConfig, AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
model_quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

quantized_output_dir = "quantized/"
if not os.path.exists(quantized_output_dir):
    os.makedirs(quantized_output_dir)
model_quantized.save_pretrained(quantized_output_dir)
```
After execution, I could see that a new folder named `quantized` was created in the directory, containing only the `config.json` file.
Its contents are as follows:
```
{
"_name_or_path": "bert-base-uncased",
"architectures": [
"BertModel"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"torch_dtype": "float32",
"transformers_version": "4.23.1",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
```
I can't see any other `.bin` or weight files after quantization. Why is that?
### Expected behavior
The model should be quantized, and the new quantized weight files should be saved in the provided folder along with the `config.json` file.
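For reference, here is a minimal sketch of the save/restore workflow suggested in the comments above (an assumption based on that workaround; there is no official `from_pretrained` path for this today):

```python
import torch
from transformers import AutoModel

# Save: dynamically quantize, then persist the state dict directly.
model = AutoModel.from_pretrained("bert-base-uncased")
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(quantized.state_dict(), "quantized/pytorch_model.bin")

# Restore: rebuild the quantized architecture first, then load the weights.
model = AutoModel.from_pretrained("bert-base-uncased")
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
quantized.load_state_dict(torch.load("quantized/pytorch_model.bin"))
```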
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19915/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19914
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19914/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19914/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19914/events
|
https://github.com/huggingface/transformers/issues/19914
| 1,424,997,823
|
I_kwDOCUB6oc5U772_
| 19,914
|
Transformer XL div_val != 1 does not work with fp16
|
{
"login": "StefanHeng",
"id": 43276957,
"node_id": "MDQ6VXNlcjQzMjc2OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/43276957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StefanHeng",
"html_url": "https://github.com/StefanHeng",
"followers_url": "https://api.github.com/users/StefanHeng/followers",
"following_url": "https://api.github.com/users/StefanHeng/following{/other_user}",
"gists_url": "https://api.github.com/users/StefanHeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StefanHeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StefanHeng/subscriptions",
"organizations_url": "https://api.github.com/users/StefanHeng/orgs",
"repos_url": "https://api.github.com/users/StefanHeng/repos",
"events_url": "https://api.github.com/users/StefanHeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/StefanHeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"According to git blame, @thomwolf added Transformer Xl. Can you help? ",
"I don't think TransformerXL supports FP16 as this is an old model with very specific code for the softmax layer. This won't be an issue we will fix ourselves given that Transformer-XL is not very used anymore, but if someone wants to make a PR, we'll review!",
"I see. I will think about make a PR. Thank you! "
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
### System Info
Python version: `3.10.4`
Package versions:
torch 1.12.0+cu116
torchaudio 0.12.0+cu116
torchvision 0.13.0+cu116
transformers 4.22.2
### Who can help?
@patrickvonplaten
@thomwolf
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here's an MWE to reproduce the bug:
```python
import torch
from transformers import (
    TransfoXLConfig, TransfoXLTokenizer, TransfoXLLMHeadModel, Trainer, DataCollatorForLanguageModeling
)
from transformers.training_args import TrainingArguments
import datasets

config = TransfoXLConfig.from_pretrained('transfo-xl-wt103')
config.d_model = 128
config.n_head = 8
config.n_layer = 4

tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
tokenizer.model_max_length = 16
tokenizer.add_special_tokens(dict(pad_token='[PAD]'))

model = TransfoXLLMHeadModel(config)

dataset = datasets.Dataset.from_dict(dict(text=['Hello world', 'XL blah']))
# mic(dataset)
dataset = dataset.map(lambda x: tokenizer(x['text'], return_tensors='pt', padding='max_length'), batched=True)

train_args = TrainingArguments(
    output_dir='./debug',
    fp16=torch.cuda.is_available(),
    num_train_epochs=100,
    per_device_train_batch_size=2
)
# mic(train_args)
trainer = Trainer(
    model=model, train_dataset=dataset, args=train_args,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
)
trainer.train()
```
Here's the stack trace I got:
```bash
Traceback (most recent call last):
File "/home/stefanhg/Music-with-NLP/Symbolic-Music-Generation/test-lang.py", line 992, in <module>
check_xl_fp16()
File "/home/stefanhg/Music-with-NLP/Symbolic-Music-Generation/test-lang.py", line 991, in check_xl_fp16
trainer.train()
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/trainer.py", line 1521, in train
return inner_training_loop(
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/trainer.py", line 1763, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/trainer.py", line 2499, in training_step
loss = self.compute_loss(model, inputs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/trainer.py", line 2531, in compute_loss
outputs = model(**inputs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/models/transfo_xl/modeling_transfo_xl.py", line 1094, in forward
transformer_outputs = self.transformer(
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/models/transfo_xl/modeling_transfo_xl.py", line 929, in forward
word_emb = self.word_emb(input_ids)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/models/transfo_xl/modeling_transfo_xl.py", line 451, in forward
emb_flat.index_copy_(0, indices_i, emb_i)
RuntimeError: index_copy_(): self and source expected to have the same dtype, but got (self) Float and (source) Half
```
### Expected behavior
In short, something is wrong with the adaptive softmax; I assume the type cast for fp16 is not working properly.
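For what it's worth, here is a small standalone sketch of the dtype mismatch and the kind of explicit cast that would avoid it (an assumption about a possible fix, not a tested patch of `modeling_transfo_xl.py`):

```python
import torch

emb_flat = torch.zeros(4, 8)                    # float32 buffer, as in AdaptiveEmbedding
emb_i = torch.ones(2, 8, dtype=torch.half)      # fp16 tensor produced under mixed precision
indices_i = torch.tensor([0, 2])

# emb_flat.index_copy_(0, indices_i, emb_i)     # -> RuntimeError: same dtype expected
emb_flat.index_copy_(0, indices_i, emb_i.to(emb_flat.dtype))  # explicit cast avoids it
print(emb_flat[0])
```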
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19914/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19913
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19913/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19913/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19913/events
|
https://github.com/huggingface/transformers/issues/19913
| 1,424,954,961
|
I_kwDOCUB6oc5U7xZR
| 19,913
|
VideoMAE assumes channel_num==3
|
{
"login": "tarokiritani",
"id": 1145404,
"node_id": "MDQ6VXNlcjExNDU0MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1145404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tarokiritani",
"html_url": "https://github.com/tarokiritani",
"followers_url": "https://api.github.com/users/tarokiritani/followers",
"following_url": "https://api.github.com/users/tarokiritani/following{/other_user}",
"gists_url": "https://api.github.com/users/tarokiritani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tarokiritani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tarokiritani/subscriptions",
"organizations_url": "https://api.github.com/users/tarokiritani/orgs",
"repos_url": "https://api.github.com/users/tarokiritani/repos",
"events_url": "https://api.github.com/users/tarokiritani/events{/privacy}",
"received_events_url": "https://api.github.com/users/tarokiritani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @NielsRogge ",
"Hi,\r\n\r\nThanks for your interest in VideoMAE. I took the unnormalization from the original implementation as can be seen here: https://github.com/MCG-NJU/VideoMAE/blob/b6af64a997da1a2f52ce1cb2f300712faa2444a1/engine_for_pretraining.py#L38-L41. \r\n\r\nThe unnormalization is done to \"undo\" the normalization done during data preprocessing (as the model needs to predict raw pixel values). So I assume something similar needs to be done when working with greyscale videos; one needs to unnormalize them before calculating the loss.\r\n\r\n",
"If self.config.norm_pix_loss is true, normalization of each patch undoes the effect of unnormalization:\r\nhttps://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/videomae/modeling_videomae.py#L852-L854\r\nAnyhow, I suppose the goal of your implementation is to replicate the original model published by the authors. For now, I will just comment out the [unnormalization](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/videomae/modeling_videomae.py#L824-L826) to deal with my gray scale videos."
] | 1,666
| 1,667
| 1,667
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`VideoMAEForPreTraining` assumes the channel number is 3. The code below works if `num_channels = 3`.
```python
from transformers import VideoMAEForPreTraining, VideoMAEConfig
import numpy as np
import torch
num_frames = 16
num_channels = 1
config = VideoMAEConfig(num_channels=num_channels)
model = VideoMAEForPreTraining(config)
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
model(torch.rand([1, num_frames, num_channels, 224, 224]), bool_masked_pos)
```
The above code spits out this error message:
```
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:530: UserWarning: Using a target size (torch.Size([1, 760, 1536])) that is different to the input size (torch.Size([1, 760, 512])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-6-46b1d4563ea7>](https://localhost:8080/#) in <module>
11 seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
12 bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
---> 13 model(torch.rand([1, num_frames, num_channels, 224, 224]), bool_masked_pos)
5 frames
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/videomae/modeling_videomae.py](https://localhost:8080/#) in forward(self, pixel_values, bool_masked_pos, head_mask, output_attentions, output_hidden_states, return_dict)
884
885 loss_fct = MSELoss()
--> 886 loss = loss_fct(logits, labels)
887
888 if not return_dict:
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py](https://localhost:8080/#) in forward(self, input, target)
528
529 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 530 return F.mse_loss(input, target, reduction=self.reduction)
531
532
[/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in mse_loss(input, target, size_average, reduce, reduction)
3277 reduction = _Reduction.legacy_get_string(size_average, reduce)
3278
-> 3279 expanded_input, expanded_target = torch.broadcast_tensors(input, target)
3280 return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
3281
[/usr/local/lib/python3.7/dist-packages/torch/functional.py](https://localhost:8080/#) in broadcast_tensors(*tensors)
71 if has_torch_function(tensors):
72 return handle_torch_function(broadcast_tensors, tensors, *tensors)
---> 73 return _VF.broadcast_tensors(tensors) # type: ignore[attr-defined]
74
75
RuntimeError: The size of tensor a (512) must match the size of tensor b (1536) at non-singleton dimension 2
```
In line [886](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/videomae/modeling_videomae.py#L886), the dimension of `labels` is 3 times as large as it should be. This dimension mismatch is caused by this [unnormalization](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/videomae/modeling_videomae.py#L824). Since `mean` and `std` are 3-dimensional, the `pixel_values` are broadcast in L826. I am not sure this "unnormalization" operation is necessary; I was a bit surprised to see it because I usually do this kind of transformation during data loading.
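To make the broadcasting concrete, here is a small standalone sketch (the shapes mirror the example above; the mean/std values are illustrative):

```python
import torch

pixel_values = torch.rand(1, 16, 1, 224, 224)  # batch, frames, channels=1, H, W
mean = torch.tensor([0.485, 0.456, 0.406])[None, None, :, None, None]  # 3 channels
std = torch.tensor([0.229, 0.224, 0.225])[None, None, :, None, None]

# The unnormalization broadcasts the channel dim from 1 to 3, which later
# triples the flattened label dimension (512 -> 1536 in the error above).
frames = pixel_values * std + mean
print(frames.shape)  # torch.Size([1, 16, 3, 224, 224])
```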
### Expected behavior
VideoMAEForPreTraining should accept tensors even if the input channel number is not 3.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19913/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19912
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19912/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19912/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19912/events
|
https://github.com/huggingface/transformers/pull/19912
| 1,424,782,138
|
PR_kwDOCUB6oc5BnzD2
| 19,912
|
Add `accelerate` support for M2M100
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds `accelerate` support to `M2M100`, which enables loading NLLB models in 8-bit using `load_in_8bit=True`.
This may contain a breaking change, but I am not sure.
When initializing the model on the meta device using `accelerate`, the module `self.shared` is initialized and set to the correct device using `set_tensor_to_device` three times - since it is shared by 3 modules (base model, encoder, decoder) - so it somehow ends up on the `meta` device.
Therefore, manually assigning a new module with the weights that correspond to the `shared` module should do the trick. But I am wondering if this is a breaking change, since the `shared` module of the encoder & decoder won't be "shared" anymore. It should not be a problem at inference time, but it can be problematic when training the model.
cc @sgugger
Also, I know T5 supports `accelerate` and uses `shared` embeddings as well. The only difference I see between the two implementations is `_keys_to_ignore_on_load_missing`, which contains the `shared` weights for T5 but not for M2M100.
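To illustrate the tying concern with a toy example (illustrative only, not the PR diff):

```python
import torch.nn as nn

shared = nn.Embedding(10, 4)
encoder_embed = shared                     # all three modules reference the
decoder_embed = shared                     # *same* object, so device hooks hit it thrice
assert encoder_embed.weight is decoder_embed.weight   # tied

# Re-assigning a fresh module with copied weights (roughly the concern above)
# keeps the values but breaks the tie: gradients no longer stay in sync.
decoder_embed = nn.Embedding.from_pretrained(shared.weight.detach().clone(), freeze=False)
assert decoder_embed.weight is not shared.weight
```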
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19912/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19912",
"html_url": "https://github.com/huggingface/transformers/pull/19912",
"diff_url": "https://github.com/huggingface/transformers/pull/19912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19912.patch",
"merged_at": 1666886815000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19911
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19911/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19911/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19911/events
|
https://github.com/huggingface/transformers/pull/19911
| 1,424,572,234
|
PR_kwDOCUB6oc5BnFtC
| 19,911
|
Add RoBERTa resources
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
MEMBER
| null |
From #19848, this PR adds resources for RoBERTa.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19911/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19911",
"html_url": "https://github.com/huggingface/transformers/pull/19911",
"diff_url": "https://github.com/huggingface/transformers/pull/19911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19911.patch",
"merged_at": 1666895595000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19910
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19910/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19910/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19910/events
|
https://github.com/huggingface/transformers/pull/19910
| 1,424,563,386
|
PR_kwDOCUB6oc5BnDvo
| 19,910
|
Add checkpoint links in a few config classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
COLLABORATOR
| null |
# What does this PR do?
Add checkpoint links in the following config classes:
- CLIPConfig
- GroupViTConfig
- OwlViTConfig
- XCLIPConfig
A necessary condition to make the tiny model creation work (PR #19901) for those models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19910/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19910",
"html_url": "https://github.com/huggingface/transformers/pull/19910",
"diff_url": "https://github.com/huggingface/transformers/pull/19910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19910.patch",
"merged_at": 1666855571000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19909
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19909/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19909/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19909/events
|
https://github.com/huggingface/transformers/issues/19909
| 1,424,549,996
|
I_kwDOCUB6oc5U6Ohs
| 19,909
|
Transformers from GCS (or custom filesystem).
|
{
"login": "nsthorat",
"id": 1100749,
"node_id": "MDQ6VXNlcjExMDA3NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1100749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsthorat",
"html_url": "https://github.com/nsthorat",
"followers_url": "https://api.github.com/users/nsthorat/followers",
"following_url": "https://api.github.com/users/nsthorat/following{/other_user}",
"gists_url": "https://api.github.com/users/nsthorat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsthorat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsthorat/subscriptions",
"organizations_url": "https://api.github.com/users/nsthorat/orgs",
"repos_url": "https://api.github.com/users/nsthorat/repos",
"events_url": "https://api.github.com/users/nsthorat/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsthorat/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi there! We don't plan on adding support for something else than the Hub/local disk for pretrained model in Transformers.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
### Feature request
Hi! I'm wondering if there will be support for a custom filesystem argument to `from_pretrained` in transformers, just like there is for datasets (https://huggingface.co/docs/datasets/filesystems).
### Motivation
Ideally, this would be great for running models in the cloud in "diskless" mode, where there is no access to a real filesystem and model assets could be read into RAM via the same filesystem API that is used for datasets.
This would solve the issue of decoupling a binary from its data dependencies, in the same way it's done for datasets.
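To make the request concrete, here is a hypothetical sketch of the envisioned API (`filesystem` is not a real `from_pretrained` argument today; it only illustrates the feature):

```python
import gcsfs
from transformers import AutoModel

# Hypothetical: an fsspec-compatible filesystem, mirroring the datasets API.
fs = gcsfs.GCSFileSystem(project="my-project")
model = AutoModel.from_pretrained("gs://my-bucket/my-model", filesystem=fs)  # not real today
```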
Thank you!
### Your contribution
Would love to help here, but presumably a HF expert would be much more suited to solve this problem. Happy to be eyes and a tester!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19909/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19908
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19908/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19908/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19908/events
|
https://github.com/huggingface/transformers/issues/19908
| 1,424,398,311
|
I_kwDOCUB6oc5U5pfn
| 19,908
|
Any ideas on how we can convert a model from huggingface (transformers library )to tensorflow lite?
|
{
"login": "BENSAFOUAN-Abdelhalim",
"id": 74852971,
"node_id": "MDQ6VXNlcjc0ODUyOTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/74852971?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim",
"html_url": "https://github.com/BENSAFOUAN-Abdelhalim",
"followers_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/followers",
"following_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/following{/other_user}",
"gists_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/subscriptions",
"organizations_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/orgs",
"repos_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/repos",
"events_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/events{/privacy}",
"received_events_url": "https://api.github.com/users/BENSAFOUAN-Abdelhalim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Maybe of interest to @gante @Rocketknight1 ",
"Hi @BENSAFOUAN-Abdelhalim, `CamembertForQuestionAnswering` is a PyTorch model. The TF model is `TFCamembertForQuestionAnswering`. That's why you're seeing the missing methods on that model!\r\n\r\nIn general, though, we don't support TFLite conversions for all of our models. There are some operations that TFLite can't support, and we don't guarantee that everything in a model will work for it. However, you can absolutely try to convert it and see what you get!",
"ok, thanks @Rocketknight1 for your answer.",
"@BENSAFOUAN-Abdelhalim \r\nRefer this colab for more details on how to convert HF TF model to TFlite model \r\nhttps://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/tflite_from_huggingface_whisper.ipynb\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
### System Info
I want to convert the CamembertForQuestionAnswering model to TensorFlow Lite. I downloaded it from the Hugging Face platform because, when I save the model locally, it gives me the model in 'bin' format.
I'm asking here because Hugging Face uses PyTorch pretrained models.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I try to convert the model using the tf_model.h5 file, it gives me this error: `AttributeError: 'CamembertForQuestionAnswering' object has no attribute 'call'`.
Also, I can't load it using `tf.keras.models.load_model()`; it gives me: `ValueError: No model config found in the file at <tensorflow.python.platform.gfile.GFile object at 0x7f27cceb1810>`.
When I want to save the transformers model locally, it gives me the model in 'bin' format, so I downloaded it from the platform.
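For anyone landing here, a minimal conversion sketch following the maintainers' pointer to the TF class (there is no guarantee every op converts, as noted in the comments; the `SELECT_TF_OPS` fallback is my assumption):

```python
import tensorflow as tf
from transformers import TFCamembertForQuestionAnswering

# Load the TF variant; from_pt converts the PyTorch 'bin' checkpoint on the fly.
model = TFCamembertForQuestionAnswering.from_pretrained(
    "etalab-ia/camembert-base-squadFR-fquad-piaf", from_pt=True
)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,  # fall back to TF ops TFLite has no builtins for
]
tflite_model = converter.convert()
with open("camembert_qa.tflite", "wb") as f:
    f.write(tflite_model)
```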
### Expected behavior
https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf?context=Etalab+est+une+administration+publique+fran%C3%A7aise+qui+fait+notamment+office+de+Chief+Data+Officer+de+l%27%C3%89tat+et+coordonne+la+conception+et+la+mise+en+%C5%93uvre+de+sa+strat%C3%A9gie+dans+le+domaine+de+la+donn%C3%A9e+%28ouverture+et+partage+des+donn%C3%A9es+publiques+ou+open+data%2C+exploitation+des+donn%C3%A9es+et+intelligence+artificielle...%29.+Ainsi%2C+Etalab+d%C3%A9veloppe+et+maintient+le+portail+des+donn%C3%A9es+ouvertes+du+gouvernement+fran%C3%A7ais+data.gouv.fr.+Etalab+promeut+%C3%A9galement+une+plus+grande+ouverture+l%27administration+sur+la+soci%C3%A9t%C3%A9+%28gouvernement+ouvert%29+%3A+transparence+de+l%27action+publique%2C+innovation+ouverte%2C+participation+citoyenne...+elle+promeut+l%E2%80%99innovation%2C+l%E2%80%99exp%C3%A9rimentation%2C+les+m%C3%A9thodes+de+travail+ouvertes%2C+agiles+et+it%C3%A9ratives%2C+ainsi+que+les+synergies+avec+la+soci%C3%A9t%C3%A9+civile+pour+d%C3%A9cloisonner+l%E2%80%99administration+et+favoriser+l%E2%80%99adoption+des+meilleures+pratiques+professionnelles+dans+le+domaine+du+num%C3%A9rique.+%C3%80+ce+titre+elle+%C3%A9tudie+notamment+l%E2%80%99opportunit%C3%A9+de+recourir+%C3%A0+des+technologies+en+voie+de+maturation+issues+du+monde+de+la+recherche.+Cette+entit%C3%A9+charg%C3%A9e+de+l%27innovation+au+sein+de+l%27administration+doit+contribuer+%C3%A0+l%27am%C3%A9lioration+du+service+public+gr%C3%A2ce+au+num%C3%A9rique.+Elle+est+rattach%C3%A9e+%C3%A0+la+Direction+interminist%C3%A9rielle+du+num%C3%A9rique%2C+dont+les+missions+et+l%E2%80%99organisation+ont+%C3%A9t%C3%A9+fix%C3%A9es+par+le+d%C3%A9cret+du+30+octobre+2019.%E2%80%89+Dirig%C3%A9+par+Laure+Lucchesi+depuis+2016%2C+elle+rassemble+une+%C3%A9quipe+pluridisciplinaire+d%27une+trentaine+de+personnes.&question=Comment+s%27appelle+le+portail+open+data+du+gouvernement+%3F
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19908/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19907
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19907/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19907/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19907/events
|
https://github.com/huggingface/transformers/pull/19907
| 1,424,381,895
|
PR_kwDOCUB6oc5Bmbmo
| 19,907
|
Enables torchrun for XLA-based accelerators
|
{
"login": "jeffhataws",
"id": 56947987,
"node_id": "MDQ6VXNlcjU2OTQ3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffhataws",
"html_url": "https://github.com/jeffhataws",
"followers_url": "https://api.github.com/users/jeffhataws/followers",
"following_url": "https://api.github.com/users/jeffhataws/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions",
"organizations_url": "https://api.github.com/users/jeffhataws/orgs",
"repos_url": "https://api.github.com/users/jeffhataws/repos",
"events_url": "https://api.github.com/users/jeffhataws/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffhataws/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19907). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR enables torchrun for XLA-based accelerators (TPU/NeuronCore) by using the torch.distributed XLA backend. It depends on the torch/xla change https://github.com/pytorch/xla/pull/3609.
Example application is the AWS Neuron tutorial with HF Trainer that uses torchrun:
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html
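For illustration, a minimal sketch of what this relies on (assuming the `xla` backend registration that pytorch/xla#3609 adds; this is not the Trainer code itself):

```python
import torch.distributed as dist
import torch_xla.distributed.xla_backend  # noqa: F401 -- registers the "xla" backend

# torchrun supplies RANK / WORLD_SIZE / MASTER_ADDR, so the process group can
# be initialized the same way it is for GPU backends.
dist.init_process_group(backend="xla")
```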
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19907/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19907",
"html_url": "https://github.com/huggingface/transformers/pull/19907",
"diff_url": "https://github.com/huggingface/transformers/pull/19907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19907.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19906
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19906/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19906/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19906/events
|
https://github.com/huggingface/transformers/pull/19906
| 1,424,338,752
|
PR_kwDOCUB6oc5BmSDZ
| 19,906
|
`accelerate` support for `RoBERTa` family
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds `accelerate` support for:
- `RoBERTa`
- `data2vec_text`
- `Lilt`
- `Luke`
- `XLM-RoBERTa`
- `CamemBERT`
- `LongFormer`
This way, any of the models above can be loaded in 8-bit using `load_in_8bit=True`.
Since these models copy the same `xxxLMHead` from `RoBERTa`, I had to change the copied modules too - happy to break this PR down into several smaller PRs.
This PR also fixes a small bug in the `accelerate` tests where the variable `input_dict` was overridden by `xxForMultipleChoice` models.
I can also confirm all slow tests pass (single + multiple GPUs).
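For reference, a minimal usage sketch of what this enables (assumes a CUDA device with `bitsandbytes` installed; the checkpoint is illustrative):

```python
from transformers import AutoModelForMaskedLM

# With accelerate support in place, the weights can be dispatched across
# devices and loaded in 8-bit directly from from_pretrained.
model = AutoModelForMaskedLM.from_pretrained(
    "roberta-base", device_map="auto", load_in_8bit=True
)
```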
cc @sgugger @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19906/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19906",
"html_url": "https://github.com/huggingface/transformers/pull/19906",
"diff_url": "https://github.com/huggingface/transformers/pull/19906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19906.patch",
"merged_at": 1666816913000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19905
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19905/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19905/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19905/events
|
https://github.com/huggingface/transformers/pull/19905
| 1,424,297,833
|
PR_kwDOCUB6oc5BmJV-
| 19,905
|
Update check_copies.py
|
{
"login": "AkshitGulyan",
"id": 103456810,
"node_id": "U_kgDOBiqgKg",
"avatar_url": "https://avatars.githubusercontent.com/u/103456810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AkshitGulyan",
"html_url": "https://github.com/AkshitGulyan",
"followers_url": "https://api.github.com/users/AkshitGulyan/followers",
"following_url": "https://api.github.com/users/AkshitGulyan/following{/other_user}",
"gists_url": "https://api.github.com/users/AkshitGulyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AkshitGulyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AkshitGulyan/subscriptions",
"organizations_url": "https://api.github.com/users/AkshitGulyan/orgs",
"repos_url": "https://api.github.com/users/AkshitGulyan/repos",
"events_url": "https://api.github.com/users/AkshitGulyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/AkshitGulyan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This Pull Request's issues are being managed in an another Pull Request, so closing this one !"
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
Added the proper info for the Hindi translation of the README file.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19905/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19905",
"html_url": "https://github.com/huggingface/transformers/pull/19905",
"diff_url": "https://github.com/huggingface/transformers/pull/19905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19905.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19904
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19904/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19904/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19904/events
|
https://github.com/huggingface/transformers/issues/19904
| 1,424,264,841
|
I_kwDOCUB6oc5U5I6J
| 19,904
|
`return_loss=True` in call for `TFCLIPModel` bugs out.
|
{
"login": "ariG23498",
"id": 36856589,
"node_id": "MDQ6VXNlcjM2ODU2NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36856589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ariG23498",
"html_url": "https://github.com/ariG23498",
"followers_url": "https://api.github.com/users/ariG23498/followers",
"following_url": "https://api.github.com/users/ariG23498/following{/other_user}",
"gists_url": "https://api.github.com/users/ariG23498/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ariG23498/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ariG23498/subscriptions",
"organizations_url": "https://api.github.com/users/ariG23498/orgs",
"repos_url": "https://api.github.com/users/ariG23498/repos",
"events_url": "https://api.github.com/users/ariG23498/events{/privacy}",
"received_events_url": "https://api.github.com/users/ariG23498/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @ariG23498, thanks for reporting this issue. \r\n\r\nCould you give more information about the current behaviour? Specifically any tracebacks or more details about what is happening when you do execute? ",
"I have created a [colab notebook](https://gist.github.com/ariG23498/f736dea2f6f488d6c55fd9bb107bef13) that can help you with the traceback.\r\n\r\nLet me know if you need something else. Thanks for the quick response @amyeroberts (as always 😃)\r\n",
"It looks like the problem in this issue is that you are not passing along as many images as texts. Passing `images=[image, image]` makes your reproducer pass.\r\n\r\n",
"@sgugger Yes, this was the problem the whole time 😢 . The documentation has to fixed then. \r\n\r\nhttps://huggingface.co/docs/transformers/model_doc/clip",
"Indeed, do you want to make a PR with that?",
"@sgugger Yes, I will take it up.",
"@sgugger Have been thinking over this, should there be same number of images as text ? I do not see any reason to restrict it this way . Let me know if I am missing something . ",
"> @sgugger Have been thinking over this, should there be same number of images as text ? I do not see any reason to restrict it this way . Let me know if I am missing something .\r\n\r\n@sgugger Any thoughts on this ? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Pinging on this isseue",
"@ArthurZucker, would you like to take a look at this?",
"@LysandreJik @ArthurZucker The confusion here is should the number of images equal to number of text ?",
"Hey, I think that this was solved, I can't reproduce it on main. You are right, the number of images should not necessarily be the same as the number of texts. \r\n\r\n```python \r\n>>> inputs[\"pixel_values\"].shape\r\nTensorShape([1, 3, 224, 224])\r\n>>> inputs[\"input_ids\"].shape\r\nTensorShape([2, 7])\r\n>>> outputs.loss\r\n<tf.Tensor: shape=(1,), dtype=float32, numpy=array([nan], dtype=float32)>\r\n```\r\nNow the question is rather \"should the loss acatually be `nan` 😅 ",
"@ArthurZucker oh, great, let me look at the fix. Last time I checked the way contrastive loss was flawed. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
To reproduce the bug I have used the following code snippet 👇
```python
import tensorflow as tf
from PIL import Image
import requests
from transformers import CLIPProcessor, TFCLIPModel
model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True
)
outputs = model(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
attention_mask=inputs["attention_mask"],
return_loss=True,
return_dict=True,
)
```
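As noted in the comments, passing as many images as texts makes the reproducer pass. A minimal sketch of that workaround (duplicating the single image is just for illustration):
```python
# Workaround suggested in the comments: give the processor one image per text
# so the contrastive loss is computed over a square similarity matrix.
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=[image, image],  # duplicate the single image to match the two texts
    return_tensors="tf",
    padding=True,
)
outputs = model(**inputs, return_loss=True, return_dict=True)
print(outputs.loss)  # a finite scalar instead of an exception
```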
### Expected behavior
The call should execute and we should obtain the `outputs`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19904/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19903
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19903/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19903/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19903/events
|
https://github.com/huggingface/transformers/pull/19903
| 1,424,233,420
|
PR_kwDOCUB6oc5Bl7lR
| 19,903
|
Created README_hd.md
|
{
"login": "AkshitGulyan",
"id": 103456810,
"node_id": "U_kgDOBiqgKg",
"avatar_url": "https://avatars.githubusercontent.com/u/103456810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AkshitGulyan",
"html_url": "https://github.com/AkshitGulyan",
"followers_url": "https://api.github.com/users/AkshitGulyan/followers",
"following_url": "https://api.github.com/users/AkshitGulyan/following{/other_user}",
"gists_url": "https://api.github.com/users/AkshitGulyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AkshitGulyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AkshitGulyan/subscriptions",
"organizations_url": "https://api.github.com/users/AkshitGulyan/orgs",
"repos_url": "https://api.github.com/users/AkshitGulyan/repos",
"events_url": "https://api.github.com/users/AkshitGulyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/AkshitGulyan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19903). All of your documentation changes will be reflected on that endpoint.",
"added the proper info in [this dictionary](https://github.com/huggingface/transformers/blob/7a1c68a8454c25c55f3f8978c182ea90e3412f5c/utils/check_copies.py#L39)\r\n\r\nBy a new pull request \r\n[Update check_copies.py #19905](https://github.com/huggingface/transformers/pull/19905)",
"No this should all be in the same pull request please.",
"> No this should all be in the same pull request please.\r\n\r\nUpdated the check_copies.py in this current Pull Request and closed the previous Pull Request !",
"Any Update ?",
"I think you need to run `make fix-copies` on your side to adjust the READMEs, then it should be good to merge if all comments are addressed :-)",
"Please address remaining comments along with steps Sylvain has mentioned and then we are good to go",
"> Please address remaining comments along with steps Sylvain has mentioned and then we are good to go\r\n\r\nAddressed all the comments and updated the file according to them !\r\nCan you please help me understanding this fix-copies concept which Sylvain has mentioned as i dont know about it !",
"Hello @AkshitGulyan, in the above PR I fixed subtle and time-consuming bugs to run `make fix-copies` without any issues. The details are below so that you can do these things next time.\r\n\r\n1. When I ran `make fix-copies` locally I got below error:\r\n```\r\n(ml) sourabmangrulkar@Sourabs-MacBook-Pro transformers % make fix-copies\r\npython utils/check_copies.py --fix_and_overwrite\r\nTraceback (most recent call last):\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 572, in <module>\r\n check_copies(args.fix_and_overwrite)\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 270, in check_copies\r\n check_model_list_copy(overwrite=overwrite)\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 455, in check_model_list_copy\r\n localized_md_list = get_model_list(filename, _start_prompt, _end_prompt)\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 303, in get_model_list\r\n while not lines[start_index].startswith(start_prompt):\r\nIndexError: list index out of range\r\nmake: *** [fix-copies] Error 1\r\n``` \r\n\r\n2. After spending time diving into `utils/check_copies.py` found the issue wherein `prompt_start` specified was not matching to the line in `README_hd.md`. Made them same.\r\n\r\n3. Then got this issue:\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 354, in convert_to_localized_md\r\n localized_model_index = {\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 355, in <dictcomp>\r\n re.search(r\"\\*\\*\\[([^\\]]*)\", line).groups()[0]: line\r\nAttributeError: 'NoneType' object has no attribute 'groups'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 575, in <module>\r\n check_copies(args.fix_and_overwrite)\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 270, in check_copies\r\n check_model_list_copy(overwrite=overwrite)\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 459, in check_model_list_copy\r\n readmes_match, converted_md_list = convert_to_localized_md(md_list, localized_md_list, _format_model_list)\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 359, in convert_to_localized_md\r\n raise AttributeError(\"A model name in localized READMEs cannot be recognized.\")\r\nAttributeError: A model name in localized READMEs cannot be recognized.\r\n(ml) sourabmangrulkar@Sourabs-MacBook-Pro transformers % python utils/check_copies.py\r\nTraceback (most recent call last):\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 351, in convert_to_localized_md\r\n localized_model_index = {\r\n File \"/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py\", line 352, in <dictcomp>\r\n re.search(r\"\\*\\*\\[([^\\]]*)\", line).groups()[0]: line\r\nAttributeError: 'NoneType' object has no attribute 'groups'\r\n```\r\n\r\nThis was a subtle bug which took quite some time to figure out. You had improperly formatted the following models with improper spaces resulting in regex failing, below shows the buggy version:\r\n```\r\n1. 
** [TrOCR] (https://huggingface.co/docs/transformers/model_doc/trocr) ** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.\r\n1. ** [UL2] (https://huggingface.co/docs/transformers/model_doc/ul2) ** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler \r\n```\r\n\r\nSo, after fixing it everything works as expected:\r\n```\r\n(ml) sourabmangrulkar@Sourabs-MacBook-Pro transformers % make fix-copies \r\npython utils/check_copies.py --fix_and_overwrite\r\npython utils/check_table.py --fix_and_overwrite\r\npython utils/check_dummies.py --fix_and_overwrite\r\n```\r\n\r\nAlso, model list is very very inconsistent with some models having names in Hindi while others in English. Follow the format where all model names are in latin script instead of Devanagari script. ",
"Hello @AkshitGulyan, please transfer the changes from above sample PR to this PR. Thank you and hope the above explanation clarifies the steps that Sylvain was suggesting. ",
"Hello @AkshitGulyan, can you please reopen this PR and transfer the relevant changes from above sample PR to this PR."
] | 1,666
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
A Hindi Translation for README
# What does this PR do?
It adds the Hindi translation for the README file!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19903/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19903",
"html_url": "https://github.com/huggingface/transformers/pull/19903",
"diff_url": "https://github.com/huggingface/transformers/pull/19903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19903.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19902
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19902/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19902/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19902/events
|
https://github.com/huggingface/transformers/pull/19902
| 1,424,199,204
|
PR_kwDOCUB6oc5Blz7T
| 19,902
|
Allow flax subfolder
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
MEMBER
| null |
First, I'm sorry about this long list of commits :sweat_smile: - I have my fork set up correctly now so this shouldn't happen again.
Second, this change would be very useful for this PR in `diffusers`, so that CLIP can be loaded from a subfolder: https://github.com/huggingface/diffusers/pull/880#discussion_r1004209900
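For illustration, a minimal sketch of the loading pattern this enables for Flax models; the repo id and subfolder below are assumptions for the example, not taken from the PR:
```python
# Hypothetical example: load a Flax CLIP text encoder that lives in a
# subfolder of a larger repository, the way diffusers pipelines are laid out.
from transformers import FlaxCLIPTextModel

text_encoder = FlaxCLIPTextModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # assumed repo id, for illustration only
    subfolder="text_encoder",
)
```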
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19902/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19902/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19902",
"html_url": "https://github.com/huggingface/transformers/pull/19902",
"diff_url": "https://github.com/huggingface/transformers/pull/19902.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19902.patch",
"merged_at": 1666802003000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19901
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19901/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19901/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19901/events
|
https://github.com/huggingface/transformers/pull/19901
| 1,424,149,325
|
PR_kwDOCUB6oc5Blozt
| 19,901
|
Create dummy models
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Here are the 3 formats of the reports\r\n\r\n[simple_report.txt](https://github.com/huggingface/transformers/files/9872354/simple_report.txt)\r\n[failed_report.json](https://github.com/huggingface/transformers/files/9872355/failed_report.txt)\r\n[tiny_model_creation_report.json](https://github.com/huggingface/transformers/files/9872356/tiny_model_creation_report.txt)\r\n\r\n\r\n",
"Nice, thanks @ydshieh! I'll take it for a spin tomorrow.",
"I will take care of the quality check (don't want to push more commits at this moment) 🙏 ",
"> The description of which models succeeded and which didn't could be slimmer, for example in a TQDM bar where we would print the models that didn't succeed, for example for:\r\n\r\nDo you mean we only print the failed ones?",
"@LysandreJik Could you take a final look regarding my 2 comment above? Also [this one](https://github.com/huggingface/transformers/pull/19901#issuecomment-1293736263) 🙏 \r\n\r\nThank you for the review 💯 ",
"Final remark: those progress bar are not downloading, but the training of the tokenizers (to reduce the vocab size). I will ask @Narsil if we can disable showing those 😃 ",
"Close for now in order to fix a few really edge cases (not to run CI).",
"You can disable the progress bar indeed `Trainer(... show_progress=False)`"
] | 1,666
| 1,666
| 1,666
|
COLLABORATOR
| null |
# What does this PR do?
This is a new script based on [a previous one](https://gist.github.com/LysandreJik/39058fe6fa8771f74dda7e789a6f63ea#file-create_dummy_models-py)
In the comments, links to the 3 reports are provided.
### (Probably) To Do:
- As we shrink the tokenizer vocab size, the special tokens (bos/eos/pad etc.) might change too. I think we should also try to update the attributes in `tiny_config` whose names end with `__token_id`.
- We should probably provide an option to upload to Hub.
### Remark
Currently, if we cannot shrink the tokenizer's vocab size for a model type, we still build models for it but give a warning in the report. We should not use them for pipeline testing though (which is what our pipeline testing already does).
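As a rough illustration of the vocab-shrinking step discussed above (a sketch, not the actual script; the corpus and target size are made up):
```python
# Shrink a fast tokenizer's vocabulary by retraining it on a tiny corpus.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tiny_corpus = ["a few short lines of text", "are enough for a dummy model"]
small_tokenizer = tokenizer.train_new_from_iterator(tiny_corpus, vocab_size=100)
print(small_tokenizer.vocab_size)  # far smaller than the original ~30k
```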
### Current states
- #### These need to be treated specially
- EncoderDecoder
- VisionEncoderDecoder
- SpeechEncoderDecoderModel
- VisionTextDualEncoder
- #### Some of the following need to check, but others are expected not to work
- BertGeneration
- Camembert
- DecisionTransformer [This model doesn't require any processor -> need to allow this case]
- ~~DonutSwin~~
- Esm
- MarianForCausalLM
- MT5
- ~~PegasusX~~
- QDQBert
- ReformerModelWithLMHead
- Speech2Text2ForCausalLM
- TimeSeriesTransformer [This model doesn't require any processor -> need to allow this case]
- TrajectoryTransformer
- TrOCRForCausalLM
- ~~ViTMSN~~
- ~~Wav2Vec2Conformer~~
- XLMProphetNet
- XLMRoberta
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19901/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19901",
"html_url": "https://github.com/huggingface/transformers/pull/19901",
"diff_url": "https://github.com/huggingface/transformers/pull/19901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19901.patch",
"merged_at": 1666955141000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19900
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19900/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19900/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19900/events
|
https://github.com/huggingface/transformers/pull/19900
| 1,424,131,545
|
PR_kwDOCUB6oc5Blk1i
| 19,900
|
Safetensors tf
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
COLLABORATOR
| null |
# What does this PR do?
This PR continues to explore loading models using `safetensors` by adding support for TensorFlow models. It adds support for:
- saving model using `safetensors` with the same API as PyTorch models
- loading models with a safetensors file in TensorFlow-format
- loading models with a safetensors file in PyTorch-format
Follow-up PRs will add support for sharded checkpoints in TensorFlow, as well as loading a TensorFlow-format safetensors file in PyTorch.
**Note:** All test failures are due to the new release of safetensors being broken, not this PR :-)
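For reference, a hedged sketch of the intended usage; the `safe_serialization` flag here mirrors the one the PyTorch models already expose:
```python
# Save a TF model as a safetensors checkpoint and load it back.
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("bert-base-uncased")
model.save_pretrained("local-bert-tf", safe_serialization=True)  # writes model.safetensors
reloaded = TFAutoModel.from_pretrained("local-bert-tf")  # picks up the safetensors file
```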
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19900/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19900",
"html_url": "https://github.com/huggingface/transformers/pull/19900",
"diff_url": "https://github.com/huggingface/transformers/pull/19900.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19900.patch",
"merged_at": 1666900590000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19899
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19899/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19899/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19899/events
|
https://github.com/huggingface/transformers/pull/19899
| 1,424,039,937
|
PR_kwDOCUB6oc5BlQqT
| 19,899
|
minor fix in jax attention bias
|
{
"login": "amankhandelia",
"id": 7098967,
"node_id": "MDQ6VXNlcjcwOTg5Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7098967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amankhandelia",
"html_url": "https://github.com/amankhandelia",
"followers_url": "https://api.github.com/users/amankhandelia/followers",
"following_url": "https://api.github.com/users/amankhandelia/following{/other_user}",
"gists_url": "https://api.github.com/users/amankhandelia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amankhandelia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amankhandelia/subscriptions",
"organizations_url": "https://api.github.com/users/amankhandelia/orgs",
"repos_url": "https://api.github.com/users/amankhandelia/repos",
"events_url": "https://api.github.com/users/amankhandelia/events{/privacy}",
"received_events_url": "https://api.github.com/users/amankhandelia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19899). All of your documentation changes will be reflected on that endpoint.",
"Hey @amankhandelia! In general, I'm happy with this change. What worries me is that the tests for BART and BART-dervied models currently pass on main, which suggests there shouldn't be a need to change the attention mask value. It suggests that there could be an issue with the FlaxMBartForCausalLM model that you're adding. I've replied more in-depth on the issue as it's more relevant there https://github.com/huggingface/transformers/issues/19897#issuecomment-1294648919. Keeping this PR open until we determine whether it's a generic Flax BART issue or a FlaxBartForCausalLM one!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
Fixes #19897
Opening this PR to check whether it passes all the test cases, or whether the fix has potential issues.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19899/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19899",
"html_url": "https://github.com/huggingface/transformers/pull/19899",
"diff_url": "https://github.com/huggingface/transformers/pull/19899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19899.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19898
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19898/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19898/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19898/events
|
https://github.com/huggingface/transformers/pull/19898
| 1,424,027,876
|
PR_kwDOCUB6oc5BlOC0
| 19,898
|
Let inputs of fast tokenizers be tuples as well as lists
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #19882
Not sure if this was an oversight when introducing fast tokenizers or if there is a real reason for not accepting tuples as well as lists here (tuples are accepted everywhere else, from a quick search). We'll see if the CI picks up something failing, but it looks like it fixes the issue.
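A minimal sketch of the behaviour this fixes (the batch below is illustrative):
```python
# Before this change, fast tokenizers rejected tuples where lists were
# accepted; after it, both calls should produce identical encodings.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # fast by default
as_list = tokenizer(["first sentence", "second sentence"])
as_tuple = tokenizer(("first sentence", "second sentence"))
assert as_list["input_ids"] == as_tuple["input_ids"]
```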
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19898/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19898",
"html_url": "https://github.com/huggingface/transformers/pull/19898",
"diff_url": "https://github.com/huggingface/transformers/pull/19898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19898.patch",
"merged_at": 1666900992000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19897
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19897/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19897/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19897/events
|
https://github.com/huggingface/transformers/issues/19897
| 1,424,020,769
|
I_kwDOCUB6oc5U4NUh
| 19,897
|
Flax implementation of BART contains NaN in hidden_states
|
{
"login": "amankhandelia",
"id": 7098967,
"node_id": "MDQ6VXNlcjcwOTg5Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7098967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amankhandelia",
"html_url": "https://github.com/amankhandelia",
"followers_url": "https://api.github.com/users/amankhandelia/followers",
"following_url": "https://api.github.com/users/amankhandelia/following{/other_user}",
"gists_url": "https://api.github.com/users/amankhandelia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amankhandelia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amankhandelia/subscriptions",
"organizations_url": "https://api.github.com/users/amankhandelia/orgs",
"repos_url": "https://api.github.com/users/amankhandelia/repos",
"events_url": "https://api.github.com/users/amankhandelia/events{/privacy}",
"received_events_url": "https://api.github.com/users/amankhandelia/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@ArthurZucker and also @sanchit-gandhi since you know Flax Bart quite well ",
"Hey @amankhandelia - the test of interest passes for the current BART and BART-derived models, so I wonder whether the issue is with the BART model or rather the mBART one? In general, I'm happy with the notion of changing the mask value from `-inf` to a large non-negative number, I just want to determine whether the issue lies with BART or FlaxMBartForCausalLM!\r\n\r\nI've noticed in your PR that you're adding FlaxMBartForCausalLM as well as the Flax DONUT model in the same PR (https://github.com/huggingface/transformers/pull/19831). Perhaps you could first add FlaxMBartForCausalLM in a smaller separate PR? We could then run through the failing test together and try to assert whether it's an issue with FlaxMBartForCausalLM or Flax BART and fix any other issues that crop up 🤗",
"Hey @sanchit-gandhi, thanks for the feedback, \r\nMakes sense, will raise a separate PR for the FlaxMBartForCausalLM and check the same.",
"Hey @amankhandelia, thanks for understanding! Feel free to tag me on the PR as soon as it's ready and I'll try to get you a review ASAP!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,672
| 1,672
|
NONE
| null |
### System Info
While implementing the Donut model in #19831, one of my tests, `test_from_pretrained_save_pretrained`, was failing. While debugging the failure, I found it happens because the test tries to compare NaN with NaN. Tracing the root cause, it came down to this line: [`jnp.full(attention_mask.shape, float("-inf")).astype(self.dtype)`](https://github.com/huggingface/transformers/blob/fdffee8a601d0408c7e0f57fbb56217f8b57e62a/src/transformers/models/mbart/modeling_flax_mbart.py#L386). The `float("-inf")` bias causes `dot_product_attention_weights` to return NaN instead of 0, which then cascades downstream. Since this code is copied from BART, and that code has been copied to several different models (OPT, PEGASUS, BLENDERBOT, etc.), I am raising this issue against BART.
IMHO, we should replace `float("-inf")` with `-1e10`, as is already the case for several other models such as RoBERTa. If the maintainers agree with this understanding and solution, I can raise a quick PR to fix it; otherwise, please suggest a solution.
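To make the failure mode concrete, here is a tiny sketch (not from the PR) of why a fully-masked row biased with `-inf` yields NaN under softmax while a large negative constant stays finite:
```python
# exp(-inf) == 0 for every entry, so softmax divides 0 by 0 and returns NaN;
# with -1e10 the row stays finite (uniform weights here).
import jax.numpy as jnp
from jax.nn import softmax

row_inf = jnp.full((4,), float("-inf"))
row_big = jnp.full((4,), -1e10)
print(softmax(row_inf))  # [nan nan nan nan]
print(softmax(row_big))  # [0.25 0.25 0.25 0.25]
```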
@patil-suraj @patrickvonplaten
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the test case from my branch in the above-mentioned PR.
### Expected behavior
Hidden states should contain 0 instead of NaN.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19897/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19896
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19896/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19896/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19896/events
|
https://github.com/huggingface/transformers/pull/19896
| 1,423,922,415
|
PR_kwDOCUB6oc5Bk3Pw
| 19,896
|
Generate: contrastive search uses existing abstractions and conventions
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
MEMBER
| null |
# What does this PR do?
This PR updates contrastive search to follow existing abstractions and conventions in other generation functions. It consists of several tiny changes, with the reasoning for each change in the PR comments below.
This is part of the effort to make converting to TF easier. All slow tests pass (`RUN_SLOW=1 py.test tests/generation/test_generation_utils.py -k contrastive -vv`)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19896/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19896",
"html_url": "https://github.com/huggingface/transformers/pull/19896",
"diff_url": "https://github.com/huggingface/transformers/pull/19896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19896.patch",
"merged_at": 1666869615000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19895
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19895/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19895/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19895/events
|
https://github.com/huggingface/transformers/pull/19895
| 1,423,921,486
|
PR_kwDOCUB6oc5Bk3C-
| 19,895
|
Generate: contrastive search uses existing abstractions and conventions
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19895). All of your documentation changes will be reflected on that endpoint."
] | 1,666
| 1,666
| 1,666
|
MEMBER
| null |
# What does this PR do?
This PR updates contrastive search to follow existing abstractions and conventions in other generation functions. It consists of several tiny changes, with the reasoning for each change in the PR comments below.
This is part of the effort to make converting to TF easier. All slow tests pass (`RUN_SLOW=1 py.test tests/generation/test_generation_utils.py -k contrastive -vv`)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19895/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19895",
"html_url": "https://github.com/huggingface/transformers/pull/19895",
"diff_url": "https://github.com/huggingface/transformers/pull/19895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19895.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19894
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19894/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19894/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19894/events
|
https://github.com/huggingface/transformers/issues/19894
| 1,423,849,540
|
I_kwDOCUB6oc5U3jhE
| 19,894
|
Unable to Finetune Deberta
|
{
"login": "ra-MANUJ-an",
"id": 58105811,
"node_id": "MDQ6VXNlcjU4MTA1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/58105811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ra-MANUJ-an",
"html_url": "https://github.com/ra-MANUJ-an",
"followers_url": "https://api.github.com/users/ra-MANUJ-an/followers",
"following_url": "https://api.github.com/users/ra-MANUJ-an/following{/other_user}",
"gists_url": "https://api.github.com/users/ra-MANUJ-an/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ra-MANUJ-an/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ra-MANUJ-an/subscriptions",
"organizations_url": "https://api.github.com/users/ra-MANUJ-an/orgs",
"repos_url": "https://api.github.com/users/ra-MANUJ-an/repos",
"events_url": "https://api.github.com/users/ra-MANUJ-an/events{/privacy}",
"received_events_url": "https://api.github.com/users/ra-MANUJ-an/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@sgugger \r\n@patil-suraj \r\n@patrickvonplaten ",
"Please use the [forums](https://discuss.huggingface.co/) to get help debug your code. In this instance you are using the base pretrained model (without a classification head) to do classification so it does not work. You should consider using AutoModelForSequenceClassification`.",
"okay, sure will take care from next time and thanks for the response! Just one question, do bert and roberta provide classification heads in their base models?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
I am trying to fine-tune DeBERTa for an irony detection task; the Colab notebook can be found [here](https://colab.research.google.com/drive/1mZI5W2ozc8speZiwzOtCrWABhSaxqhWD?usp=sharing).
When I try to use the 'microsoft/deberta-v3-base' checkpoint with AutoModel, I get the following error:
RuntimeError: Expected target size [32, 2], got [32]
but when I use the same setup with 'bert-base-uncased' or RoBERTa (with some changes in the head) it works fine. Working code for the BERT-based version can be found in [this](https://colab.research.google.com/drive/1PXacY2YgAfk6IYC0sAynp88Z6ndWrqYQ?usp=sharing) notebook.
When I printed the shapes of the predictions and labels, I got torch.Size([32, 30, 2]) and torch.Size([32]) respectively. In the BERT case, the shapes were torch.Size([32, 2]) and torch.Size([32]) for predictions and labels.
Here 32 is the batch size, and 30 is the sequence length.
Can someone let me know what I'm doing wrong?
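Per the answer in the comments, the fix is to load the checkpoint with a classification head; a minimal sketch (`num_labels=2` below is an assumption matching the binary irony task):
```python
# Use the sequence-classification head instead of the bare base model so the
# logits come out as [batch_size, num_labels] rather than [batch, seq_len, 2].
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2  # binary irony detection
)
inputs = tokenizer("what a lovely Monday meeting", return_tensors="pt")
logits = model(**inputs).logits  # shape [1, 2]
```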
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19894/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19893
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19893/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19893/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19893/events
|
https://github.com/huggingface/transformers/pull/19893
| 1,423,763,210
|
PR_kwDOCUB6oc5BkVLP
| 19,893
|
Geh
|
{
"login": "soma2000-lang",
"id": 56045049,
"node_id": "MDQ6VXNlcjU2MDQ1MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/56045049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soma2000-lang",
"html_url": "https://github.com/soma2000-lang",
"followers_url": "https://api.github.com/users/soma2000-lang/followers",
"following_url": "https://api.github.com/users/soma2000-lang/following{/other_user}",
"gists_url": "https://api.github.com/users/soma2000-lang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soma2000-lang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soma2000-lang/subscriptions",
"organizations_url": "https://api.github.com/users/soma2000-lang/orgs",
"repos_url": "https://api.github.com/users/soma2000-lang/repos",
"events_url": "https://api.github.com/users/soma2000-lang/events{/privacy}",
"received_events_url": "https://api.github.com/users/soma2000-lang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19893). All of your documentation changes will be reflected on that endpoint."
] | 1,666
| 1,667
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19893/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19893",
"html_url": "https://github.com/huggingface/transformers/pull/19893",
"diff_url": "https://github.com/huggingface/transformers/pull/19893.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19893.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/19892
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19892/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19892/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19892/events
|
https://github.com/huggingface/transformers/pull/19892
| 1,423,675,762
|
PR_kwDOCUB6oc5BkCxv
| 19,892
|
Add `flan-t5` documentation page
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for the feedback @sgugger ! I should have addressed the comments now "
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds `FLAN-T5` on the documentation page - following the same approach for `t5-v1.1`: https://huggingface.co/docs/transformers/model_doc/t5v1.1
cc @sgugger @ydshieh @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19892/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19892",
"html_url": "https://github.com/huggingface/transformers/pull/19892",
"diff_url": "https://github.com/huggingface/transformers/pull/19892.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19892.patch",
"merged_at": 1666797778000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19891
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19891/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19891/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19891/events
|
https://github.com/huggingface/transformers/pull/19891
| 1,423,576,279
|
PR_kwDOCUB6oc5BjuAw
| 19,891
|
Fix jit trace error when the model forward sequence is not aligned with the jit.trace tuple input sequence; update related docs
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@liangan1 @jianan-gu @yao-matrix please help review",
"_The documentation is not available anymore as the PR was closed or merged._",
"> I still do not understand the main problem: it looks like JIT does not support dictionary inputs which are used in every model in Transformers. Classification models are not the only ones using the `labels` key, all task-specific models do... and a model that a user wants to evaluate will very likely have a dataset with labels. The proposed workaround to use label smoothing makes no sense for an evaluation.\r\n> \r\n> It looks like this integration has maybe be merged too quickly and doesn't actually work or are there models that can be evaluated with it?\r\n\r\nyes. all the cases containing \"labels\" will fail in jit.trace, while other case like QnA could pass. it's pytorch limitation for jit.trace which only support tuple input now, Intel has commited a PR(https://github.com/pytorch/pytorch/pull/81623) for this and expected to be released in pytorch 1.14 (I also added it in doc).\r\n\r\nIf we would like to jit.trace successfully for such case, the other option is to modify the model like below, making forward input sequence like tuple input sequence..., \r\n\r\n```py\r\n--- a/src/transformers/models/distilbert/modeling_distilbert.py\r\n+++ b/src/transformers/models/distilbert/modeling_distilbert.py\r\n@@ -731,11 +731,11 @@ class DistilBertForSequenceClassification(DistilBertPreTrainedModel):\r\n )\r\n def forward(\r\n self,\r\n+ labels: Optional[torch.LongTensor] = None,\r\n input_ids: Optional[torch.Tensor] = None,\r\n attention_mask: Optional[torch.Tensor] = None,\r\n head_mask: Optional[torch.Tensor] = None,\r\n inputs_embeds: Optional[torch.Tensor] = None,\r\n- labels: Optional[torch.LongTensor] = None,\r\n output_attentions: Optional[bool] = None,\r\n output_hidden_states: Optional[bool] = None,\r\n return_dict: Optional[bool] = None,\r\n```\r\n \r\n\"label smoothing\" is just a smart way to walk around the jit.trace failure, since it happens to pop the labels from the input\r\n",
"We are not going to make a breaking change in the parameter order of every model. So basically the jit eval functionality added in #17753 does not work and has never worked for any model which contain labels, can you confirm?\r\n\r\nSince it is an **evaluation** function, I fail to see the point of having it in Transformers until PyTorch supports it.",
"The key point of the jit error cases met here is that jit cannot well handle the case that the dictionary forward parameter order does not match the dataset input order, not specific to whether there are \"labels\" or not. And to improve PyTorch jit ability to solve this issue, we landed https://github.com/pytorch/pytorch/pull/81623 in PyTorch;\r\n\r\nFor the usage of model inference with jit, for now, there could be many cases that natively get the benefits, like models running question-answering example mentioned above;\r\n\r\nFor these failed model inferences with jit cases, we are capturing this with the exception here to make it fallback and use logging to notify users; Meanwhile, these failed cases shall work when PyTorch release contains this [feature](https://github.com/pytorch/pytorch/pull/81623), (expect in next release);\r\n\r\nBesides, bringing \"label smoothing\" here with jit is not that reasonable since it would be confusing for users.\r\n\r\n",
"Hi, @sgugger to make it a clear, I file a issue to record the issue I meet https://github.com/huggingface/transformers/issues/19973. also I agree that \"label smoothing\" is a training skill and I have removed it in inference part. This PR could fix the error listed in https://github.com/huggingface/transformers/issues/19973",
"> You are a bit beating around the bush here: are there any models with a head where this feature can be used right now without hacks? I understand support in PyTorch is coming in the next version for dictionaries, but I think this feature was just added to early. Can the doc explicitly mention that the feature requires a nightly install?\r\n\r\nHi, sgugger \r\nfor pytorch >= 1.14.0 (nightly version is 1.14.0). jit could benefit any models for predict and eval.\r\nfor pytorch < 1.14.0. jit could benefit models like \"Question and Answer\", whose forward parameter order matches the tuple input order in jit.trace. If we meet case like \"text classification\",whose forward parameter order does not matches the tuple input order in jit.trace in evaluation, jit trace will fail and we are capturing this with the exception here to make it fallback and use logging to notify users",
"> Thanks for the precision. Could you add all of this to the documentation? Also have one last question on the actual code.\r\n\r\nwhich document would you recommend to add this, since it's not cpu specific.",
"> which document would you recommend to add this\r\n\r\nEvery time the jit eval is mentioned."
] | 1,666
| 1,667
| 1,667
|
CONTRIBUTOR
| null |
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@sgugger
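For illustration, a minimal sketch of the tuple-input constraint discussed in this PR (the checkpoint name is illustrative): `torch.jit.trace` only accepts positional (tuple) inputs, so the tuple order has to match the model's forward signature, and `torchscript=True` makes the model return tuples instead of dict-like outputs.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# torchscript=True configures the model to return tuples, which jit.trace can handle
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", torchscript=True
).eval()

enc = tokenizer("jit trace example", return_tensors="pt")

# the tuple order must match the forward signature: (input_ids, attention_mask, ...)
traced = torch.jit.trace(model, (enc["input_ids"], enc["attention_mask"]))
outputs = traced(enc["input_ids"], enc["attention_mask"])
```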
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19891/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19891",
"html_url": "https://github.com/huggingface/transformers/pull/19891",
"diff_url": "https://github.com/huggingface/transformers/pull/19891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19891.patch",
"merged_at": 1667487005000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19890
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19890/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19890/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19890/events
|
https://github.com/huggingface/transformers/issues/19890
| 1,423,575,652
|
I_kwDOCUB6oc5U2gpk
| 19,890
|
pad_to_max_length should always be set to False when fine-tuning a whole-word-mask language model
|
{
"login": "bugm",
"id": 11450151,
"node_id": "MDQ6VXNlcjExNDUwMTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/11450151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bugm",
"html_url": "https://github.com/bugm",
"followers_url": "https://api.github.com/users/bugm/followers",
"following_url": "https://api.github.com/users/bugm/following{/other_user}",
"gists_url": "https://api.github.com/users/bugm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bugm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bugm/subscriptions",
"organizations_url": "https://api.github.com/users/bugm/orgs",
"repos_url": "https://api.github.com/users/bugm/repos",
"events_url": "https://api.github.com/users/bugm/events{/privacy}",
"received_events_url": "https://api.github.com/users/bugm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Note that this is not a maintained example, so we are not planning on making any changes to that script.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,666
| 1,670
| 1,670
|
NONE
| null |
### Feature request
I am doing research on whole-word-mask language model fine-tuning, making some custom changes to the code from the official example (https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm).
I found that there is an argument field in the class `DataTrainingArguments`:
```python
pad_to_max_length: bool = field(
    default=False,
    metadata={
        "help": (
            "Whether to pad all samples to `max_seq_length`. "
            "If False, will pad the samples dynamically when batching to the maximum length in the batch."
        )
    },
)
```
Although its default value is `False`, setting it to `True` causes problems when we use the `DataCollatorForWholeWordMask` collator.
### Motivation
According to the source code of `DataCollatorForWholeWordMask`, it selects the tokens to be masked from all `input_tokens`. If we pad before the collator runs, the whole-word-mask step may produce many candidate indices that point at `[PAD]` tokens, and such `[PAD]` tokens are meaningless for whole-word-mask fine-tuning.
Although `pad_to_max_length` defaults to `False`, I have found that many people customize the official example code by setting the tokenizer padding to `"max_length"` and then calling `map` on the dataset:
```python
def tokenize_function(examples):
    # Remove empty lines
    examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=data_args.max_seq_length)

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=data_args.preprocessing_num_workers,
    remove_columns=[text_column_name],
    load_from_cache_file=not data_args.overwrite_cache,
)
```
### Your contribution
I suggest removing the padding step that runs before the `DataCollatorForWholeWordMask` collator, and emphasizing in the docs that padding before the collator can hurt model training (see the sketch below).
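A minimal sketch of the suggested flow (the checkpoint name and max length are illustrative): tokenize without padding and let the collator pad each batch dynamically, so `[PAD]` tokens never become whole-word-mask candidates.

```python
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")

def tokenize_function(examples):
    # no padding="max_length" here -- the collator below pads dynamically per batch
    return tokenizer(examples["text"], truncation=True, max_length=512)

# the collator picks whole-word mask candidates from the (unpadded) tokens,
# then pads each batch to its longest sequence
data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)
```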
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19890/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19889
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19889/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19889/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19889/events
|
https://github.com/huggingface/transformers/pull/19889
| 1,423,458,873
|
PR_kwDOCUB6oc5BjVlg
| 19,889
|
[DOCTEST] Add `configuration_mbart.py`, `configuration_mctc.py`, `configuration_layoutlm.py`, `configuration_layoutlmv2.py`, `configuration_layoutlmv3.py`
|
{
"login": "Revanth2002",
"id": 68279005,
"node_id": "MDQ6VXNlcjY4Mjc5MDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/68279005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Revanth2002",
"html_url": "https://github.com/Revanth2002",
"followers_url": "https://api.github.com/users/Revanth2002/followers",
"following_url": "https://api.github.com/users/Revanth2002/following{/other_user}",
"gists_url": "https://api.github.com/users/Revanth2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Revanth2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Revanth2002/subscriptions",
"organizations_url": "https://api.github.com/users/Revanth2002/orgs",
"repos_url": "https://api.github.com/users/Revanth2002/repos",
"events_url": "https://api.github.com/users/Revanth2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/Revanth2002/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,666
| 1,666
| 1,666
|
CONTRIBUTOR
| null |
Based on #19487.
Resolves #19806 and #19805
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19889/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/19889",
"html_url": "https://github.com/huggingface/transformers/pull/19889",
"diff_url": "https://github.com/huggingface/transformers/pull/19889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/19889.patch",
"merged_at": 1666778744000
}
|
https://api.github.com/repos/huggingface/transformers/issues/19888
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19888/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19888/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19888/events
|
https://github.com/huggingface/transformers/issues/19888
| 1,423,443,624
|
I_kwDOCUB6oc5U2Aao
| 19,888
|
Rescale layer in whisper processor
|
{
"login": "JeffreyWardman",
"id": 23271678,
"node_id": "MDQ6VXNlcjIzMjcxNjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/23271678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JeffreyWardman",
"html_url": "https://github.com/JeffreyWardman",
"followers_url": "https://api.github.com/users/JeffreyWardman/followers",
"following_url": "https://api.github.com/users/JeffreyWardman/following{/other_user}",
"gists_url": "https://api.github.com/users/JeffreyWardman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JeffreyWardman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JeffreyWardman/subscriptions",
"organizations_url": "https://api.github.com/users/JeffreyWardman/orgs",
"repos_url": "https://api.github.com/users/JeffreyWardman/repos",
"events_url": "https://api.github.com/users/JeffreyWardman/events{/privacy}",
"received_events_url": "https://api.github.com/users/JeffreyWardman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Please provide a code reproducer for the bug you are experiencing or there is nothing we can do to help.",
"```python\r\nimport torch\r\nfrom transformers import WhisperProcessor, WhisperForConditionalGeneration\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoProcessor, AutoModelForCTC\r\n\r\n\r\ndef inference(input, processor, model):\r\n output = processor(input, sampling_rate=16000, return_tensors=\"pt\")\r\n \r\n if \"whisper\" in processor.tokenizer_class.lower():\r\n input_features = output.input_features\r\n with torch.no_grad():\r\n logits = model.generate(input_features)\r\n transcription = processor.batch_decode(logits, skip_special_tokens=True, output_word_offsets=True)[0]\r\n else:\r\n input_features = output.input_values\r\n with torch.no_grad():\r\n logits = model(input_features).logits[0]\r\n predicted_ids = torch.argmax(logits, dim=-1)\r\n transcription = processor.decode(predicted_ids, output_word_offsets=True)\r\n return transcription\r\n\r\ndef get_transcript(audio, model, processor):\r\n audio_scaled = ((audio - audio.min()) / (audio.max() - audio.min())) * (2) - 1\r\n scaled_transcription = inference(audio_scaled, processor, model)\r\n unscaled_transcription = inference(audio, processor, model)\r\n return {\"scaled\": scaled_transcription, \"unscaled\": unscaled_transcription}\r\n\r\nds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\naudio = ds[0][\"audio\"][\"array\"]\r\naudio = ((audio - audio.min()) / (audio.max() - audio.min())) * 65535 # Rescale to [0, 65535] to show issue\r\n\r\nwhisper_processor = WhisperProcessor.from_pretrained(\"openai/whisper-base.en\")\r\nwhisper_model = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-base.en\").to(\"cpu\")\r\n\r\nwav2vec_processor = AutoProcessor.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\nwav2vec_model = AutoModelForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n\r\nwhisper_transcripts = get_transcript(audio, whisper_model, whisper_processor)\r\nwav2vec_transcripts = get_transcript(audio, wav2vec_model, wav2vec_processor)\r\nprint(f\"WHISPER: {whisper_transcripts}\")\r\nprint(f\"WAV2VEC: {wav2vec_transcripts}\")\r\n```\r\n\r\n\r\n\r\n\r\n\r\nOutput:\r\n```\r\nWHISPER: {'scaled': ' Mr. 
Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.', \r\n'unscaled': ' I'}\r\n\r\nWAV2VEC: {'scaled': Wav2Vec2CTCTokenizerOutput(text='MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', char_offsets=None, word_offsets=[{'word': 'MISTER', 'start_offset': 28, 'end_offset': 40}, {'word': 'QUILTER', 'start_offset': 43, 'end_offset': 60}, {'word': 'IS', 'start_offset': 66, 'end_offset': 69}, {'word': 'THE', 'start_offset': 72, 'end_offset': 76}, {'word': 'APOSTLE', 'start_offset': 80, 'end_offset': 103}, {'word': 'OF', 'start_offset': 109, 'end_offset': 111}, {'word': 'THE', 'start_offset': 115, 'end_offset': 118}, {'word': 'MIDDLE', 'start_offset': 120, 'end_offset': 131}, {'word': 'CLASSES', 'start_offset': 133, 'end_offset': 156}, {'word': 'AND', 'start_offset': 168, 'end_offset': 172}, {'word': 'WE', 'start_offset': 174, 'end_offset': 178}, {'word': 'ARE', 'start_offset': 181, 'end_offset': 185}, {'word': 'GLAD', 'start_offset': 187, 'end_offset': 200}, {'word': 'TO', 'start_offset': 205, 'end_offset': 209}, {'word': 'WELCOME', 'start_offset': 212, 'end_offset': 229}, {'word': 'HIS', 'start_offset': 234, 'end_offset': 240}, {'word': 'GOSPEL', 'start_offset': 245, 'end_offset': 267}]),\r\n 'unscaled': Wav2Vec2CTCTokenizerOutput(text='MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', char_offsets=None, word_offsets=[{'word': 'MISTER', 'start_offset': 28, 'end_offset': 40}, {'word': 'QUILTER', 'start_offset': 43, 'end_offset': 60}, {'word': 'IS', 'start_offset': 66, 'end_offset': 69}, {'word': 'THE', 'start_offset': 72, 'end_offset': 76}, {'word': 'APOSTLE', 'start_offset': 80, 'end_offset': 103}, {'word': 'OF', 'start_offset': 109, 'end_offset': 111}, {'word': 'THE', 'start_offset': 115, 'end_offset': 118}, {'word': 'MIDDLE', 'start_offset': 120, 'end_offset': 131}, {'word': 'CLASSES', 'start_offset': 133, 'end_offset': 156}, {'word': 'AND', 'start_offset': 168, 'end_offset': 172}, {'word': 'WE', 'start_offset': 174, 'end_offset': 178}, {'word': 'ARE', 'start_offset': 181, 'end_offset': 185}, {'word': 'GLAD', 'start_offset': 187, 'end_offset': 200}, {'word': 'TO', 'start_offset': 205, 'end_offset': 209}, {'word': 'WELCOME', 'start_offset': 212, 'end_offset': 229}, {'word': 'HIS', 'start_offset': 234, 'end_offset': 240}, {'word': 'GOSPEL', 'start_offset': 245, 'end_offset': 267}])}\r\n```",
"You can see in the above that the transcript is gibberish for the unscaled whisper model. This is because it is taking in as input the range [0, 65535] rather than [-1, 1].",
"Thanks! cc @sanchit-gandhi and @ArthurZucker ",
"Hey @JeffreyWardman, this is a really interesting issue! I've chosen not to compare Whisper to Wav2Vec2 in my analysis, as these two systems are intrinsically different in how they process the audio inputs:\r\n\r\nWith Wav2Vec2, we first normalise the raw audio inputs to (mean, std) = (0, 1). We then pass the normalised audio inputs to the model (as you have done in your code example). In this way, Wav2Vec2 takes as input audio inputs. \r\n\r\nThis is exactly the operation that the Wav2Vec2 feature extractor performs for us:\r\n```python\r\nnormalised_audio = wav2vec_processor.feature_extractor(audio).input_values\r\n```\r\nWith Whisper, we first convert the raw audio inputs to a log-Mel spectrogram, and then feed this spectrogram to the Whisper model. In contrast to Wav2Vec2, Whisper takes the log-Mel features as inputs to the model (rather than audio values). \r\n\r\nThe audio -> log-Mel conversion is exactly the operation that the Whisper feature extractor performs for us:\r\n```python\r\nlogmel_features = whisper_processor.feature_extractor(audio).input_features\r\n```\r\n\r\nI've had a dig through the original Whisper codebase and compared it to the paper - it seems as though they perform the feature normalisation in the log-Mel space (_c.f._ Section 2.2 of the [paper](https://cdn.openai.com/papers/whisper.pdf)):\r\n\r\n<img width=\"450\" alt=\"Screenshot 2022-10-27 at 17 01 54\" src=\"https://user-images.githubusercontent.com/93869735/198340987-d6f7b8e8-433a-47e1-ba5f-7869be25125e.png\">\r\n\r\nTo check whether we missed something with our implementation, I ran your code example on the _original_ Whisper repo. To reproduce this, first install the original (OpenAI) version of the model from https://github.com/openai/whisper:\r\n```\r\npip install git+https://github.com/openai/whisper.git\r\n```\r\n\r\nI then tweaked your code snippet to make it compatible with the OpenAI model, following the \"official\" example provided in https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb:\r\n```python\r\nimport torch\r\nimport whisper\r\nfrom datasets import load_dataset\r\n\r\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n\r\nmodel = whisper.load_model(\"base.en\")\r\nmodel.to(device)\r\n\r\n# define the decoding options\r\noptions = whisper.DecodingOptions(language=\"en\", without_timestamps=True)\r\n\r\n# load audio sample as before\r\nds = load_dataset(\"hf-internal-testing/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\naudio = ds[0][\"audio\"][\"array\"]\r\naudio = ((audio - audio.min()) / (audio.max() - audio.min())) * 65535 # Rescale to [0, 65535] to show issue\r\n\r\ndef inference(audio):\r\n # whisper pre-processor expects torch tensors (not np.arrays or lists)\r\n audio = torch.tensor(audio)\r\n audio = whisper.pad_or_trim(audio.flatten()).to(device)\r\n mel = whisper.log_mel_spectrogram(audio)\r\n\r\n results = model.decode(mel, options)\r\n return results.text\r\n\r\ndef get_transcript(audio):\r\n audio_scaled = ((audio - audio.min()) / (audio.max() - audio.min())) * (2) - 1\r\n scaled_transcription = inference(audio_scaled)\r\n unscaled_transcription = inference(audio)\r\n return {\"scaled\": scaled_transcription, \"unscaled\": unscaled_transcription}\r\n\r\noriginal_transcripts = get_transcript(audio)\r\nprint(\"ORIGINAL OpenAI: \\n\", original_transcripts)\r\n```\r\n\r\n**Print output:**\r\n```\r\nORIGINAL OpenAI: \r\n{'scaled': 'Mr. 
Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.',\r\n'unscaled': 'I'}\r\n```\r\nWhich is the same output that we got with Transformers Whisper. So we can be sure that the Transformers implementation matches the official OpenAI one ✅ Meaning that this is an intrinsic problem with the Whisper model (rather than a Transformers implementation one). I think this comes down to the fact that the Whisper model does not normalise the audio inputs prior to passing them to the log-Mel spectrogram.\r\n\r\nIn Transformers, we aim to provide a matching implementation to the original model. In that regard, I don't think that we can currently change the codebase for the Transformers Whisper model to normalise audio samples before computing the log-Mel spectrogram features, since this is an inherent limitation of the Whisper model. Instead, what I'll do is post this issue on the original codebase and ask the authors whether this behaviour is expected. If they update their codebase to normalise the inputs, we can do the same in Transformers 🤗\r\n\r\nHope that makes sense and thank you for the great issue!\r\n\r\n(edit: opened a discussion thread on the original OpenAI repo, awaiting the author's response https://github.com/openai/whisper/discussions/428#discussion-4510905)\r\n",
"Thanks a lot @sanchit-gandhi 💯 , totally agree with you. Also in the various tests that I ran during the integration, I did not really have any issue with custom inputs, so I am also wondering id there are any potential application for that feature request? If yes, we could definitely add an optional argument, but otherwise, I am glad with keeping it close to the original codebase! 👍🏻 ",
"I think it makes sense to offer an (optional) argument to the feature-extractor indicating whether the audio inputs should be normalised in the audio space:\r\n* `do_normalise` (Optional, defaults to `False`): whether or not to normalise the audio inputs prior to computing the log-Mel features.\r\n\r\nThis would look something along the lines of:\r\n```python\r\nfrom transformers import WhisperFeatureExtractor\r\n\r\nfeature_extractor = WhisperFeatureExtractor.from_pretrained(\"openai/whisper-base.en\")\r\n# don't normalise\r\ninput_features = feature_extractor(audio, do_normalise=False).input_features[0]\r\n# do normalise\r\ninput_features = feature_extractor(audio, do_normalise=True).input_features[0]\r\n```\r\n-> we can add this quite easily for more control over inference\r\n\r\n_c.f._ https://github.com/openai/whisper/discussions/428#discussioncomment-4057857",
"Adding it to my whisper to do list"
] | 1,666
| 1,677
| 1,677
|
NONE
| null |
### Feature request
The Whisper processor does not currently rescale audio inputs to the [-1, 1) range that the model expects.
### Motivation
Consistency between model processor layers.
### Your contribution
-
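In the meantime, a minimal sketch of a manual workaround (assuming simple peak normalisation is acceptable for the use case): rescale the raw audio into [-1, 1] before handing it to the processor.

```python
import numpy as np

def peak_normalise(audio: np.ndarray) -> np.ndarray:
    # rescale so the largest magnitude is 1, keeping the signal inside [-1, 1]
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio

# then: processor(peak_normalise(audio), sampling_rate=16000, return_tensors="pt")
```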
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19888/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19887
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19887/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19887/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19887/events
|
https://github.com/huggingface/transformers/issues/19887
| 1,423,442,708
|
I_kwDOCUB6oc5U2AMU
| 19,887
|
Long-form (including timestamps) for whisper
|
{
"login": "JeffreyWardman",
"id": 23271678,
"node_id": "MDQ6VXNlcjIzMjcxNjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/23271678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JeffreyWardman",
"html_url": "https://github.com/JeffreyWardman",
"followers_url": "https://api.github.com/users/JeffreyWardman/followers",
"following_url": "https://api.github.com/users/JeffreyWardman/following{/other_user}",
"gists_url": "https://api.github.com/users/JeffreyWardman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JeffreyWardman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JeffreyWardman/subscriptions",
"organizations_url": "https://api.github.com/users/JeffreyWardman/orgs",
"repos_url": "https://api.github.com/users/JeffreyWardman/repos",
"events_url": "https://api.github.com/users/JeffreyWardman/events{/privacy}",
"received_events_url": "https://api.github.com/users/JeffreyWardman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi and @ArthurZucker ",
"Hey @JeffreyWardman! I believe @ArthurZucker has started looking into this, see https://github.com/huggingface/transformers/issues/19490#issuecomment-1285166541 for context!",
"Thanks @sanchit-gandhi! By the looks of it, it would still be missing the timestamps. This is quite an important feature for me. I'm not completely familiar with the underlying code for huggingface. How does the chunking work? Does it calculate the first break between words after a given duration?",
"cc @ArthurZucker who knows more about timestamp generation!\r\n\r\nThis blog highlights quite nicely how chunking works in Transformers: https://huggingface.co/blog/asr-chunking",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Does whisper implementation of hugging support timestamps to generate SRT files like openai/whisper implementation?\r\n\r\nhttps://github.com/openai/whisper/blob/main/whisper/utils.py#L64",
"Not yet! Working on this you can follow #20620 !"
] | 1,666
| 1,670
| 1,669
|
NONE
| null |
### Feature request
https://github.com/huggingface/transformers/commit/504cd71a6b172f177e6da513bea94fadb18ad99c
- Inference is currently only implemented for short-form i.e. audio is pre-segmented into <=30s segments. Long-form (including timestamps) will be implemented in a future release.
What is the ETA for this?
### Motivation
Whisper is not usable for long-form speech audio, or for chunking audio based on timestamps determined by the ASR.
### Your contribution
I can offer guidance or a PR in the longer term if this is not picked up by others in the next month or so. A stopgap sketch follows.
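As a stopgap, a sketch of chunked inference via the ASR pipeline — this assumes a transformers version where the pipeline supports `chunk_length_s` for Whisper, and it does not produce timestamps:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-base.en",
    chunk_length_s=30,  # split long audio into 30 s windows with striding
)
# text = asr("long_audio.wav")["text"]
```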
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19887/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19887/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19886
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19886/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19886/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19886/events
|
https://github.com/huggingface/transformers/issues/19886
| 1,423,384,333
|
I_kwDOCUB6oc5U1x8N
| 19,886
|
TypeError: ('Keyword argument not understood:', 'ignore_mismatched_sizes')
|
{
"login": "Xappuccino",
"id": 38622926,
"node_id": "MDQ6VXNlcjM4NjIyOTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/38622926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xappuccino",
"html_url": "https://github.com/Xappuccino",
"followers_url": "https://api.github.com/users/Xappuccino/followers",
"following_url": "https://api.github.com/users/Xappuccino/following{/other_user}",
"gists_url": "https://api.github.com/users/Xappuccino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xappuccino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xappuccino/subscriptions",
"organizations_url": "https://api.github.com/users/Xappuccino/orgs",
"repos_url": "https://api.github.com/users/Xappuccino/repos",
"events_url": "https://api.github.com/users/Xappuccino/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xappuccino/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Can you try to upgrade your version of Transformers and try again? If it still fails, could you please post the full traceback?",
"I have solved my problem, it was due to the old version transformers code I saved in my project directory that invalidated the transformers version upgraded through pip install~, thx for your replay!"
] | 1,666
| 1,666
| 1,666
|
NONE
| null |
### System Info
transformers~=4.21.0
tensorflow~=2.8.2
python~=3.7.3
When I modify `type_vocab_size` in the config.json of bert-base-chinese and pass the `ignore_mismatched_sizes` param to the `from_pretrained` function like below:
```python
self.bert = TFBertModel.from_pretrained(pretrain_path, ignore_mismatched_sizes=True)
```
I then get this error: `TypeError: ('Keyword argument not understood:', 'ignore_mismatched_sizes')`
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
transformers~=4.21.0
tensorflow~=2.8.2
python~=3.7.3
Modify `type_vocab_size` in the config.json of bert-base-chinese, then pass the `ignore_mismatched_sizes` param to the `from_pretrained` function like below:
```python
self.bert = TFBertModel.from_pretrained(pretrain_path, ignore_mismatched_sizes=True)
```
### Expected behavior
The bert-base-chinese checkpoint loads successfully.
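For reference, a sketch of the intended usage on a recent transformers release (where TF models accept `ignore_mismatched_sizes`); the modified `type_vocab_size` value here is illustrative:

```python
from transformers import BertConfig, TFBertModel

config = BertConfig.from_pretrained("bert-base-chinese", type_vocab_size=4)
# ignore_mismatched_sizes skips checkpoint weights whose shapes no longer match the config
model = TFBertModel.from_pretrained(
    "bert-base-chinese", config=config, ignore_mismatched_sizes=True
)
```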
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19886/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19885
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19885/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19885/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19885/events
|
https://github.com/huggingface/transformers/issues/19885
| 1,423,359,213
|
I_kwDOCUB6oc5U1rzt
| 19,885
|
Implementing SHAP algorithm on visualBERT transformer
|
{
"login": "MUZAMMILPERVAIZ",
"id": 69303067,
"node_id": "MDQ6VXNlcjY5MzAzMDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/69303067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MUZAMMILPERVAIZ",
"html_url": "https://github.com/MUZAMMILPERVAIZ",
"followers_url": "https://api.github.com/users/MUZAMMILPERVAIZ/followers",
"following_url": "https://api.github.com/users/MUZAMMILPERVAIZ/following{/other_user}",
"gists_url": "https://api.github.com/users/MUZAMMILPERVAIZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MUZAMMILPERVAIZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MUZAMMILPERVAIZ/subscriptions",
"organizations_url": "https://api.github.com/users/MUZAMMILPERVAIZ/orgs",
"repos_url": "https://api.github.com/users/MUZAMMILPERVAIZ/repos",
"events_url": "https://api.github.com/users/MUZAMMILPERVAIZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/MUZAMMILPERVAIZ/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You should use the [forums](https://github.com/huggingface/safetensors/pull/34) for questions like this as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing this issue for the reason above."
] | 1,666
| 1,669
| 1,669
|
NONE
| null |
### System Info
Hi @LysandreJik , @NielsRogge, @sgugger,
I am working on applying the [shap](https://shap.readthedocs.io/en/latest/index.html) algorithm to VisualBERT. I found a piece of code that runs well on distilbart-xsum-12-6; here is the code:
```
import numpy as np
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import shap
import torch
# load transformer language model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-xsum-12-6").cuda()
s=["In this picture, there are four persons: my father, my mother, my brother and my sister."]
explainer = shap.Explainer(model,tokenizer)
shap_values = explainer(s)

```
But I don't know how to implement the same thing for VisualBERT. Is there a repository that demonstrates the SHAP algorithm on the VisualBERT transformer, or does anyone know how to do this?
Thanks for your time.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import BertTokenizer, VisualBertForPreTraining, VisualBertForQuestionAnswering
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#model = VisualBertForPreTraining.from_pretrained('uclanlp/visualbert-nlvr2-coco-pre')
model = VisualBertForQuestionAnswering.from_pretrained("uclanlp/visualbert-vqa")
from datasets import load_dataset
dataset = load_dataset("textvqa")
explainer = shap.Explainer(model,tokenizer)
shap_values = explainer(dataset['train'][0]['question'])
```
### Expected behavior

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19885/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/19884
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/19884/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/19884/comments
|
https://api.github.com/repos/huggingface/transformers/issues/19884/events
|
https://github.com/huggingface/transformers/issues/19884
| 1,423,294,427
|
I_kwDOCUB6oc5U1b_b
| 19,884
|
There is no log or progress bar when running trainer.train()
|
{
"login": "yezhipeng2417",
"id": 87161948,
"node_id": "MDQ6VXNlcjg3MTYxOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/87161948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yezhipeng2417",
"html_url": "https://github.com/yezhipeng2417",
"followers_url": "https://api.github.com/users/yezhipeng2417/followers",
"following_url": "https://api.github.com/users/yezhipeng2417/following{/other_user}",
"gists_url": "https://api.github.com/users/yezhipeng2417/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yezhipeng2417/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yezhipeng2417/subscriptions",
"organizations_url": "https://api.github.com/users/yezhipeng2417/orgs",
"repos_url": "https://api.github.com/users/yezhipeng2417/repos",
"events_url": "https://api.github.com/users/yezhipeng2417/events{/privacy}",
"received_events_url": "https://api.github.com/users/yezhipeng2417/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Vscode or datashell do not support widgets, as far as I know, so we can't show the same progress bars there as in a notebook.",
"thanks"
] | 1,666
| 1,666
| 1,666
|
NONE
| null |
### System Info
When I open a .ipynb in vscode or datashell, there is no log or progress bar when running `trainer.train()`, but in a Jupyter notebook they show up.
<img width="770" alt="截屏2022-10-26 11 55 46" src="https://user-images.githubusercontent.com/87161948/197910485-5d16a652-48a4-4c83-b110-b9ad7a95046b.png">
<img width="1138" alt="截屏2022-10-26 12 06 12" src="https://user-images.githubusercontent.com/87161948/197910740-3f7ba219-f27c-4a76-82ac-827109d9257e.png">
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained('bert-base-cased', num_labels=2)
print(sum([i.nelement() for i in model.parameters()]) / 10000)

import numpy as np
from datasets import load_metric
from transformers.trainer_utils import EvalPrediction

metric = load_metric('accuracy')

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    logits = logits.argmax(axis=1)
    return metric.compute(predictions=logits, references=labels)

eval_pred = EvalPrediction(
    predictions=np.array([[0, 1], [2, 3], [4, 5], [6, 7]]),
    label_ids=np.array([1, 1, 1, 1]),
)
compute_metrics(eval_pred)

from transformers import TrainingArguments, Trainer

args = TrainingArguments(output_dir='./output_dir', evaluation_strategy='epoch')
args.num_train_epochs = 1
args.learning_rate = 1e-4
args.weight_decay = 1e-2
args.per_device_eval_batch_size = 32
args.per_device_train_batch_size = 16

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset_train,
    eval_dataset=dataset_test,
    compute_metrics=compute_metrics,
)

# train
trainer.train()
```
### Expected behavior
I hope it will produce exactly the same output in all three IDEs.
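For illustration, a minimal workaround sketch that falls back to plain console logging when the environment cannot render widgets (the argument values are illustrative):

```python
import transformers
from transformers import TrainingArguments

transformers.logging.set_verbosity_info()  # print loss/eval logs to stdout

args = TrainingArguments(
    output_dir="./output_dir",
    evaluation_strategy="epoch",
    logging_steps=10,     # emit a log line every 10 optimisation steps
    disable_tqdm=False,   # keep the plain-text tqdm progress bar
)
```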
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/19884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/19884/timeline
|
completed
| null | null |