Dataset schema (column: type, with observed value ranges / class counts; ⌀ = nullable):
- url: string (length 62–66)
- repository_url: string (1 distinct value)
- labels_url: string (length 76–80)
- comments_url: string (length 71–75)
- events_url: string (length 69–73)
- html_url: string (length 50–56)
- id: int64 (377M–2.15B)
- node_id: string (length 18–32)
- number: int64 (1–29.2k)
- title: string (length 1–487)
- user: dict
- labels: list
- state: string (2 classes)
- locked: bool (2 classes)
- assignee: dict
- assignees: list
- comments: list
- created_at: int64 (1.54k–1.71k)
- updated_at: int64 (1.54k–1.71k)
- closed_at: int64 (1.54k–1.71k, nullable)
- author_association: string (4 classes)
- active_lock_reason: string (2 classes)
- body: string (length 0–234k, nullable)
- reactions: dict
- timeline_url: string (length 71–75)
- state_reason: string (3 classes)
- draft: bool (2 classes)
- pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/18274
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18274/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18274/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18274/events
|
https://github.com/huggingface/transformers/issues/18274
| 1,315,888,423
|
I_kwDOCUB6oc5Obt0n
| 18,274
|
Define metric for save the best model
|
{
"login": "dimka11",
"id": 6096108,
"node_id": "MDQ6VXNlcjYwOTYxMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6096108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimka11",
"html_url": "https://github.com/dimka11",
"followers_url": "https://api.github.com/users/dimka11/followers",
"following_url": "https://api.github.com/users/dimka11/following{/other_user}",
"gists_url": "https://api.github.com/users/dimka11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimka11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimka11/subscriptions",
"organizations_url": "https://api.github.com/users/dimka11/orgs",
"repos_url": "https://api.github.com/users/dimka11/repos",
"events_url": "https://api.github.com/users/dimka11/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimka11/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"If you are using HF trainer it provides two arguments for selecting your preferred metric for choosing the best model. The first one is\r\n`metric_for_best_model`. It defaults to \"loss\". The second argument is `greater_is_better`. The default value is `False` but If metric for best model is set to any value other than \"loss\" then it will default to `True`\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### Feature request
I suggest adding an option to choose which metric is used to save the best model.
### Motivation
I use multiple metrics when fine-tuning models through the Trainer and don't know which metric is used to save the best model (I suppose it's the first metric in the dictionary?).
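For reference, a minimal sketch of pinning this down explicitly with `TrainingArguments` (the metric name "f1" and the output path are illustrative assumptions):
```python
from transformers import TrainingArguments

# Sketch only: select the best checkpoint by eval F1 instead of the default eval loss.
# Assumes the Trainer's compute_metrics function returns a dict containing "f1".
args = TrainingArguments(
    output_dir="out",              # illustrative path
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",    # looked up as "eval_f1" in the logged metrics
    greater_is_better=True,        # defaults to True for any metric other than "loss"
)
```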
### Your contribution
-
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18274/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18273
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18273/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18273/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18273/events
|
https://github.com/huggingface/transformers/pull/18273
| 1,315,886,850
|
PR_kwDOCUB6oc47_mlL
| 18,273
|
Generalize decay_mask_fn to apply mask to all LayerNorm params
|
{
"login": "duongna21",
"id": 38061659,
"node_id": "MDQ6VXNlcjM4MDYxNjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/38061659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duongna21",
"html_url": "https://github.com/duongna21",
"followers_url": "https://api.github.com/users/duongna21/followers",
"following_url": "https://api.github.com/users/duongna21/following{/other_user}",
"gists_url": "https://api.github.com/users/duongna21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duongna21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duongna21/subscriptions",
"organizations_url": "https://api.github.com/users/duongna21/orgs",
"repos_url": "https://api.github.com/users/duongna21/repos",
"events_url": "https://api.github.com/users/duongna21/events{/privacy}",
"received_events_url": "https://api.github.com/users/duongna21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sanchit-gandhi Thank you for pointing them out! It's been done.",
"Amazing, thanks for the PR @duongna21!"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the problem of `decay_mask_fn` not applying mask to all LayerNorm params.
For example, when running `run_mlm_flax.py` with `roberta-base`'s config, the current code fails to apply the mask to the LayerNorm `scale` param of the `lm_head`:

This is because `("layer_norm", "scale")` is omitted from `decay_mask_fn`:
```python
def decay_mask_fn(params):
    flat_params = traverse_util.flatten_dict(params)
    flat_mask = {path: (path[-1] != "bias" and path[-2:] != ("LayerNorm", "scale")) for path in flat_params}
    print('flat_mask: ', flat_mask)
    return traverse_util.unflatten_dict(flat_mask)
```
As another example, running `run_t5_mlm_flax.py` with `t5-base`'s config omits all the LayerNorm params:

This is because `decay_mask_fn` only takes `scale` into account, while the T5LayerNorm param is named `weight`:
```python
def decay_mask_fn(params):
    flat_params = traverse_util.flatten_dict(params)
    flat_mask = {
        path: (path[-1] != "bias" and path[-2:] not in [("layer_norm", "scale"), ("final_layer_norm", "scale")])
        for path in flat_params
    }
    print('flat_mask: ', flat_mask)
    return traverse_util.unflatten_dict(flat_mask)
```
## Fix
Generalize `decay_mask_fn` to apply the mask to all params whose lowercased name contains `layernorm`, `layer_norm` or `ln`.
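A minimal sketch of one way to express that (assumptions: Flax param trees and substring matching on path components; the merged implementation may differ):
```python
from flax import traverse_util

def decay_mask_fn(params):
    flat_params = traverse_util.flatten_dict(params)
    # Exclude biases and any path that looks like a LayerNorm variant,
    # whether the leaf param is named "scale" or "weight".
    layer_norm_names = ("layernorm", "layer_norm", "ln")

    def is_layer_norm(path):
        return any(n in part.lower() for part in path for n in layer_norm_names)

    flat_mask = {
        path: (path[-1] != "bias" and not is_layer_norm(path))
        for path in flat_params
    }
    return traverse_util.unflatten_dict(flat_mask)
```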
## Who can review?
potential reviewers: @patrickvonplaten, @sgugger, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18273/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18273",
"html_url": "https://github.com/huggingface/transformers/pull/18273",
"diff_url": "https://github.com/huggingface/transformers/pull/18273.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18273.patch",
"merged_at": 1658921037000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18272
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18272/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18272/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18272/events
|
https://github.com/huggingface/transformers/pull/18272
| 1,315,737,466
|
PR_kwDOCUB6oc47_KmX
| 18,272
|
Deberta V2: Fix critical trace warnings to allow ONNX export
|
{
"login": "iiLaurens",
"id": 9915637,
"node_id": "MDQ6VXNlcjk5MTU2Mzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9915637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iiLaurens",
"html_url": "https://github.com/iiLaurens",
"followers_url": "https://api.github.com/users/iiLaurens/followers",
"following_url": "https://api.github.com/users/iiLaurens/following{/other_user}",
"gists_url": "https://api.github.com/users/iiLaurens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iiLaurens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iiLaurens/subscriptions",
"organizations_url": "https://api.github.com/users/iiLaurens/orgs",
"repos_url": "https://api.github.com/users/iiLaurens/repos",
"events_url": "https://api.github.com/users/iiLaurens/events{/privacy}",
"received_events_url": "https://api.github.com/users/iiLaurens/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @BigBird01 for knowledge, @michaelbenayoun ",
"@michaelbenayoun I resolved all the comments. Could you verify and merge?"
] | 1,658
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes some untraceable functions used in Deberta V2. Specifically, `math.sqrt`, `np.arange`, `np.tile` and `np.where` were replaced with their torch equivalents. I also applied some type conversions to make sure the types are compatible with ONNX ops (opset 15 was the focus). The remaining trace warnings that I did not address seem to concern configuration items, which should stay constant for any given model.
Fixes #18237
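For illustration, the kind of rewrite involved (a sketch, not the exact diff; the helper name is hypothetical):
```python
import torch

def build_relative_ids(seq_len: int, device: torch.device) -> torch.Tensor:
    # Before (untraceable: NumPy runs eagerly and gets baked in as constants):
    #   ids = np.arange(seq_len)
    #   rel = np.tile(ids[None, :], (seq_len, 1)) - ids[:, None]
    # After (torch equivalents stay in the traced graph):
    ids = torch.arange(seq_len, device=device)
    rel = ids.unsqueeze(0).repeat(seq_len, 1) - ids.unsqueeze(1)
    return rel.long()  # explicit dtype so the exported ONNX ops line up
```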
## Who can review?
@LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18272/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18272",
"html_url": "https://github.com/huggingface/transformers/pull/18272",
"diff_url": "https://github.com/huggingface/transformers/pull/18272.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18272.patch",
"merged_at": 1660226084000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18271
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18271/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18271/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18271/events
|
https://github.com/huggingface/transformers/pull/18271
| 1,315,616,582
|
PR_kwDOCUB6oc47-z2U
| 18,271
|
[EncoderDecoder] Improve docs
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
As a follow-up of #17815, this PR improves the docs of `VisionEncoderDecoderModel` and `SpeechEncoderDecoderModel`.
It also fixes some typos in the docs of `EncoderDecoderModel`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18271/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18271",
"html_url": "https://github.com/huggingface/transformers/pull/18271",
"diff_url": "https://github.com/huggingface/transformers/pull/18271.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18271.patch",
"merged_at": 1658909339000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18270
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18270/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18270/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18270/events
|
https://github.com/huggingface/transformers/issues/18270
| 1,315,588,476
|
I_kwDOCUB6oc5Oakl8
| 18,270
|
This code block will not be executed
|
{
"login": "AlfredQin",
"id": 40079631,
"node_id": "MDQ6VXNlcjQwMDc5NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/40079631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlfredQin",
"html_url": "https://github.com/AlfredQin",
"followers_url": "https://api.github.com/users/AlfredQin/followers",
"following_url": "https://api.github.com/users/AlfredQin/following{/other_user}",
"gists_url": "https://api.github.com/users/AlfredQin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlfredQin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlfredQin/subscriptions",
"organizations_url": "https://api.github.com/users/AlfredQin/orgs",
"repos_url": "https://api.github.com/users/AlfredQin/repos",
"events_url": "https://api.github.com/users/AlfredQin/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlfredQin/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Where is that code block from?",
"The code block is from https://github.com/huggingface/transformers/blob/8e8384663d716d4b5a4f510070ff954fc0ba4a52/src/transformers/models/detr/modeling_detr.py#L1076",
"cc @NielsRogge ",
"Hi @AlfredQin, thanks for spotting that. The `combined_attention_mask` was probably taken from another model, which is not relevant for DETR.\r\n\r\nFeel free to open a PR to remove that code block!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,662
| 1,662
|
NONE
| null |
### System Info
python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/main/src/transformers/models/detr/modeling_detr.py
```python
combined_attention_mask = None
if attention_mask is not None and combined_attention_mask is not None:
    # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
    combined_attention_mask = combined_attention_mask + _expand_mask(
        attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]
    )
```
### Expected behavior
`combined_attention_mask` is set to `None` immediately before the `if` statement, so the block below it can never execute: the condition requires `combined_attention_mask is not None`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18270/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18269
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18269/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18269/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18269/events
|
https://github.com/huggingface/transformers/issues/18269
| 1,315,585,480
|
I_kwDOCUB6oc5Oaj3I
| 18,269
|
cannot import name 'TrainingArguments' from 'transformers'
|
{
"login": "takfarine",
"id": 87461007,
"node_id": "MDQ6VXNlcjg3NDYxMDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/87461007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/takfarine",
"html_url": "https://github.com/takfarine",
"followers_url": "https://api.github.com/users/takfarine/followers",
"following_url": "https://api.github.com/users/takfarine/following{/other_user}",
"gists_url": "https://api.github.com/users/takfarine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/takfarine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/takfarine/subscriptions",
"organizations_url": "https://api.github.com/users/takfarine/orgs",
"repos_url": "https://api.github.com/users/takfarine/repos",
"events_url": "https://api.github.com/users/takfarine/events{/privacy}",
"received_events_url": "https://api.github.com/users/takfarine/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey! Could you please provide your transformers version? How did you install transformers?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Problem in \"from transformers import TrainingArguments,Trainer\" How to resolve it"
] | 1,658
| 1,708
| 1,661
|
NONE
| null |
### System Info
Traceback (most recent call last):
File "dv2xxl.py", line 30, in <module>
from transformers import TrainingArguments,Trainer
**ImportError: cannot import name 'TrainingArguments' from 'transformers'** (/lustre06/project/6005433/takfa/UW/ue/lib/python3.7/site-packages/transformers/__init__.py)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Traceback (most recent call last):
File "dv2xxl.py", line 30, in <module>
from transformers import TrainingArguments,Trainer
ImportError: cannot import name 'TrainingArguments' from 'transformers' (/lustre06/project/6005433/takfa/UW/ue/lib/python3.7/site-packages/transformers/__init__.py)
### Expected behavior
looking for help
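A quick diagnostic sketch for errors like this: check which `transformers` installation is actually being imported, since a stale or broken install in the active environment is a common cause:
```python
# Run in the same environment that executes dv2xxl.py:
import transformers
print(transformers.__version__)  # an unexpectedly old version can explain the ImportError
print(transformers.__file__)     # confirms which installation Python picked up
```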
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18269/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18268
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18268/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18268/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18268/events
|
https://github.com/huggingface/transformers/issues/18268
| 1,315,381,825
|
I_kwDOCUB6oc5OZyJB
| 18,268
|
OPT vocab size of model and tokenizer does not match
|
{
"login": "dhansmair",
"id": 21751746,
"node_id": "MDQ6VXNlcjIxNzUxNzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/21751746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhansmair",
"html_url": "https://github.com/dhansmair",
"followers_url": "https://api.github.com/users/dhansmair/followers",
"following_url": "https://api.github.com/users/dhansmair/following{/other_user}",
"gists_url": "https://api.github.com/users/dhansmair/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhansmair/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhansmair/subscriptions",
"organizations_url": "https://api.github.com/users/dhansmair/orgs",
"repos_url": "https://api.github.com/users/dhansmair/repos",
"events_url": "https://api.github.com/users/dhansmair/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhansmair/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker ",
"Hey, thanks for noticing this! I am gonna add @younesbelkada to the loop. \r\nIt seems that the original tokenizer [vocabulary](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/assets/gpt2-vocab.json) has 50264 words, with some \"madeupwords\". Let us have a look! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Duplicate of https://github.com/huggingface/transformers/issues/17431#issuecomment-1224231170",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,664
| 1,664
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.19.2
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('facebook/opt-350m')
tok = AutoTokenizer.from_pretrained('facebook/opt-350m', use_fast=False)
print(model.config.vocab_size) # 50272
print(tok.vocab_size) # 50265
```
### Expected behavior
Hello,
I'm not sure whether this is a bug or if I am missing something.
In the reproduction script above, the model has a bigger vocabulary than the tokenizer. In my project, the LM produces the token `50272`, which the tokenizer doesn't know, so the `decode()` function fails.
(I use my own text generation script; could it be that the model is simply not supposed to output the last 7 token ids, which the tokenizer doesn't know?)
Best, David
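For anyone hitting the same `decode()` failure, a minimal defensive sketch (assumption, per the "madeupwords" discussion in this thread: ids at or above `len(tok)` are padded embedding slots the model shouldn't emit):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('facebook/opt-350m')
tok = AutoTokenizer.from_pretrained('facebook/opt-350m', use_fast=False)

inputs = tok("hello world", return_tensors="pt")
logits = model(**inputs).logits
# Mask out ids the tokenizer can't decode before picking the next token.
logits[..., len(tok):] = float("-inf")
next_id = logits[0, -1].argmax().item()
print(tok.decode([next_id]))
```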
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18268/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18268/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18267
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18267/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18267/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18267/events
|
https://github.com/huggingface/transformers/issues/18267
| 1,315,330,462
|
I_kwDOCUB6oc5OZlme
| 18,267
|
Expected input batch_size (16) to match target batch_size (262144)
|
{
"login": "priyankarasakonda",
"id": 101957164,
"node_id": "U_kgDOBhO-LA",
"avatar_url": "https://avatars.githubusercontent.com/u/101957164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/priyankarasakonda",
"html_url": "https://github.com/priyankarasakonda",
"followers_url": "https://api.github.com/users/priyankarasakonda/followers",
"following_url": "https://api.github.com/users/priyankarasakonda/following{/other_user}",
"gists_url": "https://api.github.com/users/priyankarasakonda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/priyankarasakonda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/priyankarasakonda/subscriptions",
"organizations_url": "https://api.github.com/users/priyankarasakonda/orgs",
"repos_url": "https://api.github.com/users/priyankarasakonda/repos",
"events_url": "https://api.github.com/users/priyankarasakonda/events{/privacy}",
"received_events_url": "https://api.github.com/users/priyankarasakonda/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### System Info
```python
GENERATE_JPG_FILES = True  # warning: generation takes ~ 1h
slice_sum = 0

if GENERATE_JPG_FILES:
    path = Path(".")
    os.makedirs('train_images', exist_ok=True)
    os.makedirs('train_masks', exist_ok=True)
    for ii in tqdm(range(0, len(df_files))):  # take 1/3 nii files for training
        curr_ct = read_nii(df_files.loc[ii, 'dirname'] + "/" + df_files.loc[ii, 'filename'])
        curr_mask = read_nii(df_files.loc[ii, 'mask_dirname'] + "/" + df_files.loc[ii, 'mask_filename'])
        curr_file_name = str(df_files.loc[ii, 'filename']).split('.')[0]
        curr_dim = curr_ct.shape[2]  # 512, 512, curr_dim
        slice_sum = slice_sum + curr_dim
        for curr_slice in range(0, curr_dim, 1):  # export every 2nd slice for training
            data = tensor(curr_ct[..., curr_slice].astype(np.float32))
            mask = Image.fromarray(curr_mask[..., curr_slice].astype('uint8'), mode="L")
            data.save_jpg(f"train_images/{curr_file_name}_slice_{curr_slice}.jpg", [dicom_windows.liver, dicom_windows.custom])
            mask.save(f"train_masks/{curr_file_name}_slice_{curr_slice}_mask.png")
else:
    path = Path('C:/AML 2404 AI and ML Lab/Liver Tumor Segmentation/Liver Tumor Segmentation/new_images')  # read jpg from saved kernel output

print(slice_sum)

bs = 16
im_size = 128
codes = np.array(["background", "liver", "tumor"])

def get_x(fname: Path):
    return fname

def label_func(x):
    return path / 'train_masks' / f'{x.stem}_mask.png'

tfms = [IntToFloatTensor(), Normalize()]
db = DataBlock(
    blocks=(ImageBlock(), MaskBlock(codes)),  # codes = {"Background": 0, "Liver": 1, "Tumor": 2}
    batch_tfms=tfms,
    splitter=RandomSplitter(),
    item_tfms=[Resize(im_size)],
    get_items=get_image_files,
    get_y=label_func,
)

# ../output/kaggle/working/train_images.zip
# ds = db.datasets(source=path/'train_images.zip')
ds = db.datasets(source='./train_images')
print(len(ds))
print(ds)

dls = db.dataloaders(path / 'train_images', bs=bs)  # num_workers=0
dls.show_batch()

def foreground_acc(inp, targ, bkg_idx=0, axis=1):  # exclude the background from the metric
    "Computes non-background accuracy for multiclass segmentation"
    targ = targ.squeeze(1)
    mask = targ != bkg_idx
    return (inp.argmax(dim=axis)[mask] == targ[mask]).float().mean()

def cust_foreground_acc(inp, targ):  # include the background in the metric
    return foreground_acc(inp=inp, targ=targ, bkg_idx=3, axis=1)

learn = vision_learner(dls, resnet34, metrics=[foreground_acc, cust_foreground_acc])
learn.lr_find()
```
```
ValueError: Expected input batch_size (16) to match target batch_size (262144).
```
### Who can help?
@NielsRogge, @sgugger, @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
GENERATE_JPG_FILES = True  # warning: generation takes ~ 1h
slice_sum = 0

if GENERATE_JPG_FILES:
    path = Path(".")
    os.makedirs('train_images', exist_ok=True)
    os.makedirs('train_masks', exist_ok=True)
    for ii in tqdm(range(0, len(df_files))):  # take 1/3 nii files for training
        curr_ct = read_nii(df_files.loc[ii, 'dirname'] + "/" + df_files.loc[ii, 'filename'])
        curr_mask = read_nii(df_files.loc[ii, 'mask_dirname'] + "/" + df_files.loc[ii, 'mask_filename'])
        curr_file_name = str(df_files.loc[ii, 'filename']).split('.')[0]
        curr_dim = curr_ct.shape[2]  # 512, 512, curr_dim
        slice_sum = slice_sum + curr_dim
        for curr_slice in range(0, curr_dim, 1):  # export every 2nd slice for training
            data = tensor(curr_ct[..., curr_slice].astype(np.float32))
            mask = Image.fromarray(curr_mask[..., curr_slice].astype('uint8'), mode="L")
            data.save_jpg(f"train_images/{curr_file_name}_slice_{curr_slice}.jpg", [dicom_windows.liver, dicom_windows.custom])
            mask.save(f"train_masks/{curr_file_name}_slice_{curr_slice}_mask.png")
else:
    path = Path('C:/AML 2404 AI and ML Lab/Liver Tumor Segmentation/Liver Tumor Segmentation/new_images')  # read jpg from saved kernel output

print(slice_sum)

bs = 16
im_size = 128
codes = np.array(["background", "liver", "tumor"])

def get_x(fname: Path):
    return fname

def label_func(x):
    return path / 'train_masks' / f'{x.stem}_mask.png'

tfms = [IntToFloatTensor(), Normalize()]
db = DataBlock(
    blocks=(ImageBlock(), MaskBlock(codes)),  # codes = {"Background": 0, "Liver": 1, "Tumor": 2}
    batch_tfms=tfms,
    splitter=RandomSplitter(),
    item_tfms=[Resize(im_size)],
    get_items=get_image_files,
    get_y=label_func,
)

# ../output/kaggle/working/train_images.zip
# ds = db.datasets(source=path/'train_images.zip')
ds = db.datasets(source='./train_images')
print(len(ds))
print(ds)

dls = db.dataloaders(path / 'train_images', bs=bs)  # num_workers=0
dls.show_batch()

def foreground_acc(inp, targ, bkg_idx=0, axis=1):  # exclude the background from the metric
    "Computes non-background accuracy for multiclass segmentation"
    targ = targ.squeeze(1)
    mask = targ != bkg_idx
    return (inp.argmax(dim=axis)[mask] == targ[mask]).float().mean()

def cust_foreground_acc(inp, targ):  # include the background in the metric
    return foreground_acc(inp=inp, targ=targ, bkg_idx=3, axis=1)

learn = vision_learner(dls, resnet34, metrics=[foreground_acc, cust_foreground_acc])
learn.lr_find()
```
```
ValueError: Expected input batch_size (16) to match target batch_size (262144).
```
### Expected behavior
The `vision_learner` model should train on the image data without the batch-size mismatch error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18267/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18266
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18266/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18266/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18266/events
|
https://github.com/huggingface/transformers/issues/18266
| 1,315,300,231
|
I_kwDOCUB6oc5OZeOH
| 18,266
|
Can't pickle local object when running official benchmark
|
{
"login": "ryanrudes",
"id": 18452581,
"node_id": "MDQ6VXNlcjE4NDUyNTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/18452581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryanrudes",
"html_url": "https://github.com/ryanrudes",
"followers_url": "https://api.github.com/users/ryanrudes/followers",
"following_url": "https://api.github.com/users/ryanrudes/following{/other_user}",
"gists_url": "https://api.github.com/users/ryanrudes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryanrudes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryanrudes/subscriptions",
"organizations_url": "https://api.github.com/users/ryanrudes/orgs",
"repos_url": "https://api.github.com/users/ryanrudes/repos",
"events_url": "https://api.github.com/users/ryanrudes/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryanrudes/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @ryanrudes, we're in the process of deprecating and removing benchmarks from the library, so we unfortunately won't be able to help you out on this one.",
"Understood"
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
### System Info
- `transformers` version: 4.11.3
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.13
- PyTorch version (GPU?): 1.12.0.post2 (False)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments
args = PyTorchBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512])
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
print(results)
```
```
1 / 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/site-packages/transformers/benchmark/benchmark_utils.py", line 707, in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/site-packages/transformers/benchmark/benchmark_utils.py", line 676, in inference_memory
return separate_process_wrapper_fn(self._inference_memory, self.args.do_multi_processing)(*args, **kwargs)
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/site-packages/transformers/benchmark/benchmark_utils.py", line 101, in multi_process_func
p.start()
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/ryanrudes/miniforge3/envs/torch-gpu/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'separate_process_wrapper_fn.<locals>.multi_process_func.<locals>.wrapper_func'
```
### Expected behavior
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-uncased 8 8 0.006
bert-base-uncased 8 32 0.006
bert-base-uncased 8 128 0.018
bert-base-uncased 8 512 0.088
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased 8 8 1227
bert-base-uncased 8 32 1281
bert-base-uncased 8 128 1307
bert-base-uncased 8 512 1539
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
...
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18266/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18265
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18265/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18265/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18265/events
|
https://github.com/huggingface/transformers/pull/18265
| 1,315,273,462
|
PR_kwDOCUB6oc479sqd
| 18,265
|
Allows `KerasMetricCallback` to use XLA generation
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
MEMBER
| null |
Updates the `KerasMetricCallback` with the ability to use XLA generation for a big speed boost! cc @merveenoyan
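A hedged usage sketch (the `use_xla_generation` flag name is assumed from this PR's intent; `compute_metrics` and the datasets are placeholders):
```python
from transformers.keras_callbacks import KerasMetricCallback

# Sketch: let the callback compile model.generate with XLA for faster evaluation.
metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,     # placeholder: maps (predictions, labels) -> metric dict
    eval_dataset=tf_eval_dataset,  # placeholder tf.data.Dataset
    predict_with_generate=True,
    use_xla_generation=True,       # assumed flag introduced by this PR
)
model.fit(tf_train_dataset, callbacks=[metric_callback])
```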
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18265/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18265",
"html_url": "https://github.com/huggingface/transformers/pull/18265",
"diff_url": "https://github.com/huggingface/transformers/pull/18265.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18265.patch",
"merged_at": 1658749897000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18264
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18264/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18264/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18264/events
|
https://github.com/huggingface/transformers/pull/18264
| 1,315,242,207
|
PR_kwDOCUB6oc479l9G
| 18,264
|
Adding type hints of TF:CTRL
|
{
"login": "Mathews-Tom",
"id": 9562152,
"node_id": "MDQ6VXNlcjk1NjIxNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9562152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mathews-Tom",
"html_url": "https://github.com/Mathews-Tom",
"followers_url": "https://api.github.com/users/Mathews-Tom/followers",
"following_url": "https://api.github.com/users/Mathews-Tom/following{/other_user}",
"gists_url": "https://api.github.com/users/Mathews-Tom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mathews-Tom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mathews-Tom/subscriptions",
"organizations_url": "https://api.github.com/users/Mathews-Tom/orgs",
"repos_url": "https://api.github.com/users/Mathews-Tom/repos",
"events_url": "https://api.github.com/users/Mathews-Tom/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mathews-Tom/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
Issue related: #16059
As the title suggests, this PR adds type hints to the TensorFlow `CTRL` model class.
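For flavor, the general shape of such annotations, as a sketch (not the exact diff; the return type is illustrative):
```python
from typing import Optional, Tuple, Union

import numpy as np
import tensorflow as tf

# Sketch of a typed call() method signature in the style used across the TF models:
def call(
    self,
    input_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
    attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
    training: Optional[bool] = False,
) -> Union[Tuple[tf.Tensor], "TFBaseModelOutputWithPast"]:
    ...
```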
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18264/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18264",
"html_url": "https://github.com/huggingface/transformers/pull/18264",
"diff_url": "https://github.com/huggingface/transformers/pull/18264.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18264.patch",
"merged_at": 1658834822000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18263
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18263/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18263/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18263/events
|
https://github.com/huggingface/transformers/pull/18263
| 1,315,186,759
|
PR_kwDOCUB6oc479aGB
| 18,263
|
Adding type hints of TF:OpenAIGPT
|
{
"login": "Mathews-Tom",
"id": 9562152,
"node_id": "MDQ6VXNlcjk1NjIxNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9562152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mathews-Tom",
"html_url": "https://github.com/Mathews-Tom",
"followers_url": "https://api.github.com/users/Mathews-Tom/followers",
"following_url": "https://api.github.com/users/Mathews-Tom/following{/other_user}",
"gists_url": "https://api.github.com/users/Mathews-Tom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mathews-Tom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mathews-Tom/subscriptions",
"organizations_url": "https://api.github.com/users/Mathews-Tom/orgs",
"repos_url": "https://api.github.com/users/Mathews-Tom/repos",
"events_url": "https://api.github.com/users/Mathews-Tom/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mathews-Tom/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @Mathews-Tom, thanks for working on providing type hints for both OpenAIGPT and CTRL! To get a review quicker, don't hesitate to ping @Rocketknight1 directly :)"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
Issue related: #16059
As the title suggests, this PR adds type hints to the TensorFlow `OpenAIGPT` model class.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18263/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18263",
"html_url": "https://github.com/huggingface/transformers/pull/18263",
"diff_url": "https://github.com/huggingface/transformers/pull/18263.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18263.patch",
"merged_at": 1658835006000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18262
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18262/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18262/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18262/events
|
https://github.com/huggingface/transformers/pull/18262
| 1,315,146,622
|
PR_kwDOCUB6oc479Rag
| 18,262
|
[DETR] Improve code examples
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
As a follow-up of #17786, this PR improves the code examples of DETR to showcase the `post_process` methods.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18262/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18262",
"html_url": "https://github.com/huggingface/transformers/pull/18262",
"diff_url": "https://github.com/huggingface/transformers/pull/18262.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18262.patch",
"merged_at": 1658908481000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18261
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18261/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18261/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18261/events
|
https://github.com/huggingface/transformers/pull/18261
| 1,315,145,235
|
PR_kwDOCUB6oc479RHW
| 18,261
|
Generate: validate `model_kwargs` (and catch typos in generate arguments)
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger ready for a final review :) [equivalent TF and FLAX changes will come in separate PRs, as they might need test corrections like this one]",
"Super-useful, thank you, @gante!"
] | 1,658
| 1,660
| 1,660
|
MEMBER
| null |
# What does this PR do?
A common cause of issues with `generate` is that it doesn't behave as expected: arguments can be silently ignored by the selected generation submethod (`greedy_search`, `sample`, ...). Typos also often fly under the radar, as the method accepts `**model_kwargs`, which are in turn passed to models that also accept `**kwargs`.
This PR solves the low-hanging fruit (derived from https://github.com/huggingface/transformers/pull/18218): it validates `model_kwargs`, which notifies users about problems in model arguments AND about typos, as both will fall into `model_kwargs`.
Will open a PR with the TF and FLAX equivalents after this one gets merged. A solution for the other generation arguments is also on the way :)
Fixes https://github.com/huggingface/transformers/issues/18130
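A minimal sketch of the idea (assumptions: the signatures of `forward` and `prepare_inputs_for_generation` are inspected; the merged implementation may differ):
```python
import inspect

def validate_model_kwargs(model, model_kwargs):
    # Collect every argument the model can actually consume.
    accepted = set(inspect.signature(model.forward).parameters)
    accepted |= set(inspect.signature(model.prepare_inputs_for_generation).parameters)
    unused = [k for k in model_kwargs if k not in accepted]
    if unused:
        raise ValueError(f"The following `model_kwargs` are not used by the model: {unused}")
```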
___________________
Here is an example of the output for
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
prompt = tokenizer(["hello world"], return_tensors="pt")
model.generate(**prompt, do_samples=True, foo="bar")  # "do_samples" (typo) and "foo" are deliberately invalid
```

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18261/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18261/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18261",
"html_url": "https://github.com/huggingface/transformers/pull/18261",
"diff_url": "https://github.com/huggingface/transformers/pull/18261.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18261.patch",
"merged_at": 1660312432000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18260
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18260/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18260/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18260/events
|
https://github.com/huggingface/transformers/pull/18260
| 1,315,050,231
|
PR_kwDOCUB6oc4788rF
| 18,260
|
Fix torch version check in Vilt
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @LysandreJik: This is *one* of the reasons why there are more failures in PyTorch past CI.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Oops, you approved before I tag 😄 "
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
This line fails when torch < 1.10.0
https://github.com/huggingface/transformers/blob/1fc4b2a13223b9069f9969344117a2994261939c/src/transformers/models/vilt/modeling_vilt.py#L44
In this case, `torch.__version__` is of type `str` instead of `torch.torch_version.TorchVersion` and can't be compared to a tuple.
This leads to strange error messages when other models are tested, for example (via the use of `get_values` below):
```python
def test_training(self):
    for model_class in self.all_model_classes:
        ...
        if model_class in get_values(MODEL_MAPPING):
            continue
```
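One robust way to express the check, as a sketch (assuming `packaging` is available, which `transformers` already depends on):
```python
from packaging import version
import torch

# Parsing both sides avoids the str-vs-TorchVersion comparison pitfall on torch < 1.10.
is_torch_greater_or_equal_than_1_10 = version.parse(torch.__version__) >= version.parse("1.10")
```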
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18260/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18260",
"html_url": "https://github.com/huggingface/transformers/pull/18260",
"diff_url": "https://github.com/huggingface/transformers/pull/18260.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18260.patch",
"merged_at": 1658499889000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18259
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18259/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18259/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18259/events
|
https://github.com/huggingface/transformers/pull/18259
| 1,315,035,774
|
PR_kwDOCUB6oc4785j4
| 18,259
|
Replace false parameter by a buffer
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
The weights of the sinusoidal embedding are defined as a parameter with no grad in M2M100 (and thus XGLM), and are never saved in the state dict. The problem is that when loading with `low_cpu_mem_usage=True`, this false parameter is replaced by an empty weight on the meta device, which is not re-initialized afterward (since it's not in the state dict). As a result, the model is unusable when `low_cpu_mem_usage=True` is used.
By replacing it with a buffer, the weight is ignored by `init_empty_weights` and thus preserved.
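For illustration, a minimal sketch of the parameter-to-buffer swap (module and method names here are illustrative, not the actual M2M100 code):
```python
import torch
import torch.nn as nn

class SinusoidalPositionalEmbedding(nn.Module):
    def __init__(self, num_positions: int, embedding_dim: int):
        super().__init__()
        weights = self._build_table(num_positions, embedding_dim)
        # before: nn.Parameter(weights, requires_grad=False) -- emptied on the
        # meta device by init_empty_weights and never restored
        # after: a non-persistent buffer, which init_empty_weights leaves
        # alone and which stays out of the state dict
        self.register_buffer("weights", weights, persistent=False)

    @staticmethod
    def _build_table(num_positions: int, embedding_dim: int) -> torch.Tensor:
        # placeholder: the real code fills this with sin/cos values
        return torch.zeros(num_positions, embedding_dim)
```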
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18259/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18259",
"html_url": "https://github.com/huggingface/transformers/pull/18259",
"diff_url": "https://github.com/huggingface/transformers/pull/18259.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18259.patch",
"merged_at": 1658833379000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18258
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18258/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18258/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18258/events
|
https://github.com/huggingface/transformers/pull/18258
| 1,315,019,539
|
PR_kwDOCUB6oc4782B8
| 18,258
|
Fix dtype of input_features in docstring
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
Fix dtype in docstring for `input_features`: It should be `torch.FloatTensor`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18258/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18258",
"html_url": "https://github.com/huggingface/transformers/pull/18258",
"diff_url": "https://github.com/huggingface/transformers/pull/18258.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18258.patch",
"merged_at": 1658820846000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18257
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18257/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18257/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18257/events
|
https://github.com/huggingface/transformers/pull/18257
| 1,315,004,380
|
PR_kwDOCUB6oc478yu9
| 18,257
|
Owlvit docs test
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> LGTM! Just wondering why there are 90+ commits\r\n\r\nThat was my mistake, I merged the owlvit branch of my forked transformers repo with the main and created this branch. I squashed the commits on the main but don't know how to fix this one.",
"Let me know if you'd like some help to squash the commits of this PR @alaradirik!"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
- Fixes a typo in the `OwlViTForObjectDetection` forward docs: `transformers/models/owlvit/modeling_owlvit.py`
- Adds docs test for OWL-ViT
- Makes `OwlViTFeatureExtractor.post_process` callable from `OwlViTProcessor`
- Improves code examples to demonstrate how to use the `post_process` method (see the sketch below)
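A hedged sketch of the resulting usage (checkpoint name and exact `post_process` arguments are assumptions based on the OWL-ViT API at the time):
```python
import requests
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=[["a photo of a cat"]], images=image, return_tensors="pt")
outputs = model(**inputs)

# post_process rescales the predicted boxes to the original image size;
# each result dict carries scores, labels, and boxes for one image
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process(outputs=outputs, target_sizes=target_sizes)
```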
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18257/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18257",
"html_url": "https://github.com/huggingface/transformers/pull/18257",
"diff_url": "https://github.com/huggingface/transformers/pull/18257.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18257.patch",
"merged_at": 1658822114000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18256
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18256/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18256/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18256/events
|
https://github.com/huggingface/transformers/pull/18256
| 1,314,975,720
|
PR_kwDOCUB6oc478siD
| 18,256
|
Change how `take_along_axis` is computed in DeBERTa to stop confusing XLA
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@gante The original torch code used `take_along_axis`, so I guess this is a complete TF reimplementation of it! That approach makes way more sense, though - let me make some changes!",
"> The original torch code used take_along_axis\r\n\r\nThat would explain it!"
] | 1,658
| 1,658
| 1,658
|
MEMBER
| null |
The previous code for `take_along_axis()` in DeBERTa used dynamic TF shapes like `tf.shape()` and `tf.rank()` in conditionals. These are data-dependent conditionals, which are forbidden in XLA.
Replacing these with the static shape equivalents `x.shape` and `x.shape.rank` works fine, and DeBERTa can now be compiled successfully with XLA.
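An illustrative sketch of the pattern (not the actual DeBERTa code):
```python
import tensorflow as tf

@tf.function(jit_compile=True)
def take_along_axis(x, indices):
    if x.shape.rank == 2:  # static Python int, resolved at trace time -> XLA-safe
        return tf.gather(x, indices, batch_dims=1)
    # `if tf.rank(x) == 2:` would be a tensor-valued, data-dependent
    # conditional and fail XLA compilation
    flat_x = tf.reshape(x, (-1, tf.shape(x)[-1]))
    flat_idx = tf.reshape(indices, (-1, tf.shape(indices)[-1]))
    gathered = tf.gather(flat_x, flat_idx, batch_dims=1)
    return tf.reshape(gathered, tf.shape(indices))
```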
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18256/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18256",
"html_url": "https://github.com/huggingface/transformers/pull/18256",
"diff_url": "https://github.com/huggingface/transformers/pull/18256.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18256.patch",
"merged_at": 1658505690000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18255
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18255/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18255/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18255/events
|
https://github.com/huggingface/transformers/pull/18255
| 1,314,900,872
|
PR_kwDOCUB6oc478cTb
| 18,255
|
[Don't merge] debug CircleCI test timing
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18255). All of your documentation changes will be reflected on that endpoint."
] | 1,658
| 1,662
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18255/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18255",
"html_url": "https://github.com/huggingface/transformers/pull/18255",
"diff_url": "https://github.com/huggingface/transformers/pull/18255.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18255.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18254
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18254/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18254/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18254/events
|
https://github.com/huggingface/transformers/issues/18254
| 1,314,890,060
|
I_kwDOCUB6oc5OX6FM
| 18,254
|
Can not import Trainer
|
{
"login": "Eleo22",
"id": 55881447,
"node_id": "MDQ6VXNlcjU1ODgxNDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/55881447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eleo22",
"html_url": "https://github.com/Eleo22",
"followers_url": "https://api.github.com/users/Eleo22/followers",
"following_url": "https://api.github.com/users/Eleo22/following{/other_user}",
"gists_url": "https://api.github.com/users/Eleo22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eleo22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eleo22/subscriptions",
"organizations_url": "https://api.github.com/users/Eleo22/orgs",
"repos_url": "https://api.github.com/users/Eleo22/repos",
"events_url": "https://api.github.com/users/Eleo22/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eleo22/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @Eleo22, that's interesting, it seems that `import datasets` in the trainer led to an import of the `keras.datasets` package.\r\n\r\nDo you know why that might be? Could you try to uninstall keras (you don't need it for the trainer) and to reinstall `datasets` ? Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### System Info
Python 3.9
I installed transformers with `pip install transformers`.
Using a terminal, I open Python and run:
>>> from transformers import Trainer
I get:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 957, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/usr/local/Cellar/python@3.9/3.9.0_4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/usr/local/lib/python3.9/site-packages/transformers/trainer.py", line 176, in <module>
    import datasets
  File "/usr/local/lib/python3.7/site-packages/keras/datasets/__init__.py", line 3, in <module>
    from . import mnist
  File "/usr/local/lib/python3.7/site-packages/keras/datasets/mnist.py", line 7, in <module>
    from ..utils.data_utils import get_file
ImportError: attempted relative import beyond top-level package

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
  File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 947, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 959, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
attempted relative import beyond top-level package
```
Any idea how I can fix it?
Many thanks,
Ele
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import Trainer
### Expected behavior
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 957, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/usr/local/Cellar/python@3.9/3.9.0_4/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/usr/local/lib/python3.9/site-packages/transformers/trainer.py", line 176, in <module>
    import datasets
  File "/usr/local/lib/python3.7/site-packages/keras/datasets/__init__.py", line 3, in <module>
    from . import mnist
  File "/usr/local/lib/python3.7/site-packages/keras/datasets/mnist.py", line 7, in <module>
    from ..utils.data_utils import get_file
ImportError: attempted relative import beyond top-level package

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
  File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 947, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/usr/local/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 959, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
attempted relative import beyond top-level package
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18254/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18253
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18253/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18253/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18253/events
|
https://github.com/huggingface/transformers/pull/18253
| 1,314,882,956
|
PR_kwDOCUB6oc478YdQ
| 18,253
|
Fix OwlViT tests
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
Should fix the errors on main with `ImportError: cannot import name 'OwlViTFeatureExtractor' from 'transformers'` on runners with torch not installed.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18253/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18253",
"html_url": "https://github.com/huggingface/transformers/pull/18253",
"diff_url": "https://github.com/huggingface/transformers/pull/18253.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18253.patch",
"merged_at": 1658489540000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18252
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18252/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18252/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18252/events
|
https://github.com/huggingface/transformers/issues/18252
| 1,314,806,553
|
I_kwDOCUB6oc5OXlsZ
| 18,252
|
How to convert pytorch bart model to tf1.x ?
|
{
"login": "wwwlps",
"id": 42160485,
"node_id": "MDQ6VXNlcjQyMTYwNDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/42160485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwwlps",
"html_url": "https://github.com/wwwlps",
"followers_url": "https://api.github.com/users/wwwlps/followers",
"following_url": "https://api.github.com/users/wwwlps/following{/other_user}",
"gists_url": "https://api.github.com/users/wwwlps/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwwlps/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwwlps/subscriptions",
"organizations_url": "https://api.github.com/users/wwwlps/orgs",
"repos_url": "https://api.github.com/users/wwwlps/repos",
"events_url": "https://api.github.com/users/wwwlps/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwwlps/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
I have trained a PyTorch BART model. How can I convert it to TF 1.x?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18252/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18251
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18251/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18251/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18251/events
|
https://github.com/huggingface/transformers/pull/18251
| 1,314,726,155
|
PR_kwDOCUB6oc4772GE
| 18,251
|
Add PYTEST_TIMEOUT for CircleCI test jobs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I like the idea of getting a hard error instead of silently getting new tests that slow down the CI by quite a lot! Now we just ahve to get to all tests passing below that threshold 😅 \r\nFor the examples, you can authorize 60s before timing out as those are end-to-end small training so take more time.",
"Currently set to `PYTEST_TIMEOUT: 120`. As mentioned on Slack, tests sometimes get much longer to run. For example,\r\n\r\n```\r\ntest_modeling_data2vec_audio.py::Data2VecAudioModelTest::test_mask_time_prob_ctc\r\n\r\n44.16s call \r\n37.64s call \r\n12.46s call \r\n12.60s call \r\n```\r\n\r\nand \r\n\r\n```\r\ntest_modeling_plbart.py::PLBartBaseIntegrationTest::test_base_generate\r\n\r\n48.18 call\r\n11.20s call \r\n11.53s call\r\n```\r\n\r\nThis makes it difficult to determine a good threshold that won't be flaky.\r\n\r\n### Current longest 2 tests \r\n(observed on a CircleCI workflow run):\r\n\r\n```\r\n73.08s call \r\ntest_pipelines_image_segmentation.py::ImageSegmentationPipelineTests::test_pt_DetrConfig_DetrForSegmentation_notokenizer_DetrFeatureExtractor\r\n\r\n65.33s call\r\nlongt5/test_modeling_flax_longt5.py::FlaxLongT5ModelTest::test_jit_compilation\r\n```",
"Reverted the change in `setup.py` (was for running the full tests). The final timeout limit is 2 minutes (to avoid flaky failures).\r\nI will merge this PR today unless @sgugger has a different opinion."
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
⚠️ Before merging, we need to **run the full tests on CircleCI** to see if there are slower tests that will fail and decide what to do with them.
# What does this PR do?
Add `PYTEST_TIMEOUT: 30` for CircleCI jobs:
```
environment:
...
PYTEST_TIMEOUT: 30
```
The main goal is to avoid CircleCI's default 10-minute timeout that cancels the jobs. With this PR, we can also see clearly which test(s) time out.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18251/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18251",
"html_url": "https://github.com/huggingface/transformers/pull/18251",
"diff_url": "https://github.com/huggingface/transformers/pull/18251.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18251.patch",
"merged_at": 1658851079000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18250
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18250/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18250/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18250/events
|
https://github.com/huggingface/transformers/pull/18250
| 1,314,623,541
|
PR_kwDOCUB6oc477ep-
| 18,250
|
Skip passes report for `--make-reports`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Same as Sylvain :)"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
We sometimes hit timeouts on CircleCI. It turns out that the tests are finished (they run in workers, which exit at the end), but the main process is busy doing some reporting work when we specify `--make-reports`. More precisely, it is the `passes` report which takes time (as we include `Pp` in `tr.reportchars = "wPpsxXEf"`). From the 2 screenshots below (running with 64 models), we can see that it currently takes an extra ~2-3 minutes at the end.
Since the `passes` report doesn't contain any information useful to us, this PR skips generating it to avoid the timeout.
- **without this PR**
<img width="448" alt="no-fix" src="https://user-images.githubusercontent.com/2521628/180396971-69f19b12-978b-4e19-8842-17d1af307d51.png">
- **with this PR**
<img width="452" alt="fix" src="https://user-images.githubusercontent.com/2521628/180396917-bb866363-efa3-4959-b5e3-e99ab283cc6c.png">
### One failed CircleCI job run
[Job](https://app.circleci.com/pipelines/github/huggingface/transformers/43738/workflows/325901ce-948e-4737-9a79-8a7fe9d6e27d/jobs/506868/resources)
<img width="452" alt="real" src="https://user-images.githubusercontent.com/2521628/180399481-a7f571bc-75f3-4826-b5d5-8c50c3164678.png">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18250/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18250/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18250",
"html_url": "https://github.com/huggingface/transformers/pull/18250",
"diff_url": "https://github.com/huggingface/transformers/pull/18250.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18250.patch",
"merged_at": 1658740163000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18249
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18249/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18249/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18249/events
|
https://github.com/huggingface/transformers/issues/18249
| 1,314,573,034
|
I_kwDOCUB6oc5OWsrq
| 18,249
|
Behavior of shift_tokens_right on padded input_ids
|
{
"login": "duongna21",
"id": 38061659,
"node_id": "MDQ6VXNlcjM4MDYxNjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/38061659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duongna21",
"html_url": "https://github.com/duongna21",
"followers_url": "https://api.github.com/users/duongna21/followers",
"following_url": "https://api.github.com/users/duongna21/following{/other_user}",
"gists_url": "https://api.github.com/users/duongna21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duongna21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duongna21/subscriptions",
"organizations_url": "https://api.github.com/users/duongna21/orgs",
"repos_url": "https://api.github.com/users/duongna21/repos",
"events_url": "https://api.github.com/users/duongna21/events{/privacy}",
"received_events_url": "https://api.github.com/users/duongna21/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @duongna21! This method isn't exposed in the main init so we consider it to be private (this should really be written somewhere if we haven't done so yet).\r\n\r\nIt's used internally by the BART model, but we don't validate it to work for any other purpose.",
"@LysandreJik Yeah, I specifically raise above question in the context of BART training. IMO `shift_tokens_right` should not take the `<pad>` token into account (this is actually the behavior of [fairseq's original code](https://github.com/facebookresearch/fairseq/blob/8e804cb38a1575c65a1fc981d75ae5a97c24dd5b/fairseq/data/data_utils.py#L69)). \r\nAlso, I believe this issue also applies to other models using `shift_tokens_right`, such as T5.",
"Any comment? I'm happy to create a PR if my assumption is correct.",
"Pinging @patil-suraj and @patrickvonplaten regarding the `shift_tokens_right` method and its purpose.",
"Hey @",
"Hey @duongna21,\r\n\r\nGood question! Note however that it doesn't really matter whether you pass\r\n\r\n```py\r\n['</s><s>My dog is cute<pad><pad>']\r\n```\r\n\r\nor\r\n\r\n```python\r\n['</s><s>My dog is cute</s><pad>']\r\n```\r\n\r\nto the model if the labels are:\r\n\r\n```py\r\n['<s>My dog is cute</s><pad><pad>']\r\n```\r\n\r\nbecause every loss token that gets mapped to the `<pad>` token is ignored. More specifically this means that during training the following happens:\r\n- The model learns that `</s>` should predict `<s>`\r\n- then `</s><s>` should predict `My`\r\n- then `</s><s>My` should predict `dog`\r\n- ... until\r\n- `</s><s>My dog is cute` should predict `</s>`\r\n**Now** it doesn't matter whether `</s><s>My dog is cute</s>` or `</s><s>My dog is cute<pad>` is passed because both will be ignored as the predicted token is `<pad>` and the model should never learn to predict pad tokens -> so this loss will be ignored. Does this make sense?\r\n",
"@patrickvonplaten Thanks for elaborating on it. Totally agree with you that it doesn't matter with seq2seq training. I just tried to make sure it doesn't create any side-effect somewhere :D. "
] | 1,658
| 1,663
| 1,663
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.20.1
### Who can help?
@patrickvonplaten @patil-suraj @sgugger
### Reproduction
When I apply `shift_tokens_right` to padded `input_ids`, I get this:
```python
from transformers import AutoTokenizer
from transformers.models.bart.modeling_flax_bart import shift_tokens_right
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
labels = tokenizer("My dog is cute", padding='max_length', max_length=8, return_tensors='np').input_ids
decoder_input_ids = shift_tokens_right(labels, tokenizer.pad_token_id, tokenizer.eos_token_id)
print(tokenizer.batch_decode(labels))
# ['<s>My dog is cute</s><pad><pad>']
print(tokenizer.batch_decode(decoder_input_ids))
# ['</s><s>My dog is cute</s><pad>']
```
### Expected behavior
Should the desired behavior of `shift_tokens_right` be the following?
```python
print(tokenizer.batch_decode(labels))
# ['<s>My dog is cute</s><pad><pad>']
print(tokenizer.batch_decode(decoder_input_ids))
# ['</s><s>My dog is cute<pad><pad>']
```
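A hedged sketch of a pad-preserving shift in the spirit of fairseq's version (illustrative only, not the `transformers` implementation):
```python
import numpy as np

def shift_tokens_right_keep_pads(input_ids, pad_token_id, decoder_start_token_id):
    # roll every sequence one position to the right and prepend the start token
    shifted = np.roll(input_ids, 1, axis=-1)
    shifted[:, 0] = decoder_start_token_id
    # positions that were <pad> in the labels stay <pad> after the shift,
    # so the trailing </s> does not leak into the padding region
    return np.where(input_ids == pad_token_id, pad_token_id, shifted)
```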
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18249/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18248
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18248/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18248/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18248/events
|
https://github.com/huggingface/transformers/pull/18248
| 1,314,567,345
|
PR_kwDOCUB6oc477Rw8
| 18,248
|
Changed to filter out oddball list sizes
|
{
"login": "spanglies",
"id": 6833217,
"node_id": "MDQ6VXNlcjY4MzMyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6833217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spanglies",
"html_url": "https://github.com/spanglies",
"followers_url": "https://api.github.com/users/spanglies/followers",
"following_url": "https://api.github.com/users/spanglies/following{/other_user}",
"gists_url": "https://api.github.com/users/spanglies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spanglies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spanglies/subscriptions",
"organizations_url": "https://api.github.com/users/spanglies/orgs",
"repos_url": "https://api.github.com/users/spanglies/repos",
"events_url": "https://api.github.com/users/spanglies/events{/privacy}",
"received_events_url": "https://api.github.com/users/spanglies/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18248). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
This PR fixes the issue of lists occasionally being of different sizes before being sent to `trainer.py`.
Fixes #18167
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger My apologies for the third ping in a week. I believe I found a solution that would be more acceptable.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18248/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18248",
"html_url": "https://github.com/huggingface/transformers/pull/18248",
"diff_url": "https://github.com/huggingface/transformers/pull/18248.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18248.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18247
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18247/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18247/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18247/events
|
https://github.com/huggingface/transformers/pull/18247
| 1,314,540,260
|
PR_kwDOCUB6oc477Lkq
| 18,247
|
Pin rouge_score
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @albertvillanova, sorry for missing this! Should this be merged? Should we replace by `!=0.7.0` so that we're still compatible with newer versions?",
"They made several failed releases until they got to fix the error... Let me update this PR with all the versions to be avoided.",
"@LysandreJik there is a non-passing test though...",
"Hmmm weird, it seems like it passed when it was <0.07 [here](https://github.com/huggingface/transformers/runs/7464301285?check_suite_focus=true) (installing version v0.0.4), but not after setting `rouge-score!=0.0.7,!=0.0.8,!=0.1,!=0.1.1` (installing version 1.2.0).\r\n\r\nI see that when installing `rouge-score` version v1.2.0, it was installing `rouge-score` using a legacy approach:\r\n```\r\nUsing legacy 'setup.py install' for rouge-score, since package 'wheel' is not installed.\r\n```\r\n\r\nCould this be the cause of the failure? If you put `<0.7` once again, does it pass the test?",
"The test does not pass now either with `rouge-score<0.0.7`, @LysandreJik. \r\n\r\nBut it passed when I opened this PR: see https://github.com/huggingface/transformers/pull/18247/commits/4e7a46dcfe4bfbab25858052017a39459c7831d4"
] | 1,658
| 1,662
| 1,662
|
MEMBER
| null |
# What does this PR do?
Temporarily pin `rouge_score` (to avoid the broken 0.0.7 release) until the issue is fixed on their side:
- https://github.com/google-research/google-research/issues/1212
See:
- https://github.com/huggingface/datasets/issues/4734
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18247/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18247",
"html_url": "https://github.com/huggingface/transformers/pull/18247",
"diff_url": "https://github.com/huggingface/transformers/pull/18247.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18247.patch",
"merged_at": 1662026690000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18246
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18246/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18246/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18246/events
|
https://github.com/huggingface/transformers/issues/18246
| 1,314,513,698
|
I_kwDOCUB6oc5OWeMi
| 18,246
|
RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'
|
{
"login": "DogeWatch",
"id": 13670813,
"node_id": "MDQ6VXNlcjEzNjcwODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13670813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DogeWatch",
"html_url": "https://github.com/DogeWatch",
"followers_url": "https://api.github.com/users/DogeWatch/followers",
"following_url": "https://api.github.com/users/DogeWatch/following{/other_user}",
"gists_url": "https://api.github.com/users/DogeWatch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DogeWatch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DogeWatch/subscriptions",
"organizations_url": "https://api.github.com/users/DogeWatch/orgs",
"repos_url": "https://api.github.com/users/DogeWatch/repos",
"events_url": "https://api.github.com/users/DogeWatch/events{/privacy}",
"received_events_url": "https://api.github.com/users/DogeWatch/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### System Info
transformers==4.19.2
### Who can help?
I use bf16 with this `accelerate` config:
```
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MULTI_GPU
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 8
use_cpu: false
```
The model is `LongformerModel`, and I get this error:
```
  File "~/miniconda3/envs/bai/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 788, in _mask_invalid_locations
    beginning_mask_2d = input_tensor.new_ones(affected_seq_len, affected_seq_len + 1).tril().flip(dims=[0])
RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
none
### Expected behavior
none
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18246/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18246/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18245
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18245/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18245/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18245/events
|
https://github.com/huggingface/transformers/issues/18245
| 1,314,291,203
|
I_kwDOCUB6oc5OVn4D
| 18,245
|
Not able to load the Facebook OPT model
|
{
"login": "xiajinxiong",
"id": 40830229,
"node_id": "MDQ6VXNlcjQwODMwMjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/40830229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiajinxiong",
"html_url": "https://github.com/xiajinxiong",
"followers_url": "https://api.github.com/users/xiajinxiong/followers",
"following_url": "https://api.github.com/users/xiajinxiong/following{/other_user}",
"gists_url": "https://api.github.com/users/xiajinxiong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiajinxiong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiajinxiong/subscriptions",
"organizations_url": "https://api.github.com/users/xiajinxiong/orgs",
"repos_url": "https://api.github.com/users/xiajinxiong/repos",
"events_url": "https://api.github.com/users/xiajinxiong/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiajinxiong/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @xiajinxiong, this seems to be a version error!\r\n\r\nIn order to verify, could you run the following snippet and copy-paste the output?\r\n\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, __version__\r\n\r\nprint(\"Version\", __version__)\r\nmodel = AutoModelForCausalLM.from_pretrained(\"facebook/opt-13b\", torch_dtype=torch.float16)\r\n```",
"I reaffirmed that the transformers version was 4.20.1.\r\nBut it's because my jupyter kernel didn't synchronize with the transformers version.\r\nAfter I restart the jupyter, it works fine.\r\nThanks.",
"I reaffirmed that the transformers version was 4.20.1.\r\nBut it's because my jupyter kernel didn't synchronize with the transformers version.\r\nAfter I restart the jupyter, it works fine.\r\nThanks."
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16)
```
Errors:
````
KeyError Traceback (most recent call last)
<ipython-input-15-00179d7539d3> in <module>
2 from transformers import AutoModelForCausalLM, AutoTokenizer
3
----> 4 model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16)
~/workspace/anaconda3/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
421 kwargs["_from_auto"] = True
422 if not isinstance(config, PretrainedConfig):
--> 423 config, kwargs = AutoConfig.from_pretrained(
424 pretrained_model_name_or_path, return_unused_kwargs=True, trust_remote_code=trust_remote_code, **kwargs
425 )
~/workspace/anaconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
670
671 Examples:
--> 672
673 ```python
674 >>> from transformers import AutoConfig
~/workspace/anaconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key)
385 ("xlsr_wav2vec2", "XLSR-Wav2Vec2"),
386 ("yolos", "YOLOS"),
--> 387 ("yoso", "YOSO"),
388 ]
389 )
KeyError: 'opt'
````
### Expected behavior
The code is expected to run without any error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18245/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18244
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18244/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18244/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18244/events
|
https://github.com/huggingface/transformers/pull/18244
| 1,313,946,963
|
PR_kwDOCUB6oc475EjG
| 18,244
|
patch for smddp import
|
{
"login": "carolynwang",
"id": 32006339,
"node_id": "MDQ6VXNlcjMyMDA2MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/32006339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/carolynwang",
"html_url": "https://github.com/carolynwang",
"followers_url": "https://api.github.com/users/carolynwang/followers",
"following_url": "https://api.github.com/users/carolynwang/following{/other_user}",
"gists_url": "https://api.github.com/users/carolynwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/carolynwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carolynwang/subscriptions",
"organizations_url": "https://api.github.com/users/carolynwang/orgs",
"repos_url": "https://api.github.com/users/carolynwang/repos",
"events_url": "https://api.github.com/users/carolynwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/carolynwang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes an `invalid backend` error when starting a HF job with SMDDP through any code path that goes through src/transformers/training_args.py, by adding the import statement for smddp, which registers smddp as a torch.distributed backend.
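As a minimal sketch of the mechanism (the package path below is the one documented for SageMaker data parallel and is an assumption here, not quoted from this PR):

```python
# Importing the torch_smddp module is what registers "smddp" as a torch.distributed backend;
# without this import, init_process_group(backend="smddp") raises an "invalid backend" error.
import smdistributed.dataparallel.torch.torch_smddp  # noqa: F401
import torch.distributed as dist

dist.init_process_group(backend="smddp")
```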
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18244/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18244",
"html_url": "https://github.com/huggingface/transformers/pull/18244",
"diff_url": "https://github.com/huggingface/transformers/pull/18244.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18244.patch",
"merged_at": 1658865624000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18243
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18243/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18243/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18243/events
|
https://github.com/huggingface/transformers/issues/18243
| 1,313,790,988
|
I_kwDOCUB6oc5OTtwM
| 18,243
|
Onnx Runtime Errors With LongT5
|
{
"login": "reelmath",
"id": 108700518,
"node_id": "U_kgDOBnqjZg",
"avatar_url": "https://avatars.githubusercontent.com/u/108700518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reelmath",
"html_url": "https://github.com/reelmath",
"followers_url": "https://api.github.com/users/reelmath/followers",
"following_url": "https://api.github.com/users/reelmath/following{/other_user}",
"gists_url": "https://api.github.com/users/reelmath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reelmath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reelmath/subscriptions",
"organizations_url": "https://api.github.com/users/reelmath/orgs",
"repos_url": "https://api.github.com/users/reelmath/repos",
"events_url": "https://api.github.com/users/reelmath/events{/privacy}",
"received_events_url": "https://api.github.com/users/reelmath/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 4364086132,
"node_id": "LA_kwDOCUB6oc8AAAABBB6rdA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/ONNX",
"name": "ONNX",
"color": "D4C5F9",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hey @reelmath, thanks for opening an issue, it seems you and @echarlaix managed to find the source of the problem.\r\n\r\nWe unfortunately don't have a lot of bandwidth to dive into solving that code, so I'll add an `onnx` tag and a `Good second issue` tag so that experienced users know that this is an issue that could be fixed. If you'd like to try your hand at it, please go ahead!",
"Hi, I would like to work on this if it has not been assigned to anyone, but could take some time if that is ok?",
"Hey @yhl48, this would be great indeed :-)",
"Hello @reelmath , I was trying to mimic your error with my setting as follows:\r\n\r\n- transformers version 4.23.1\r\n- Python version: 3.10.5\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: no\r\n- Using distributed or parallel set-up in script?: no\r\n\r\n\r\nbut I faced the same errors with you.",
"\r\n",
"It looks like the pretrained model is not available anymore?\r\n\r\nUpon running the following line\r\n\r\n```\r\nmodel = ORTModelForSeq2SeqLM.from_pretrained(\"longt5-tglobal-base\", from_transformers=True)\r\n```\r\n\r\nThe following error was raised\r\n\r\n```\r\nOSError: longt5-tglobal-base is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.\r\n```",
"@yhl48 I think you need to use `google/long-t5-tglobal-base` name",
"Thanks @stancld!\r\n\r\nHas this issue been resolved? I can no longer replicate the error."
] | 1,658
| 1,677
| null |
NONE
| null |
### System Info
- `optimum` version: 1.2.3 (installed via Github installation)
- `transformers` version: 4.20.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@stancld @echarlaix @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
LongT5 with TGlobal Attention isn't able to run sequences longer than **global_block_size * 2**. This is because, during model tracing, [num_globals > 0](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longt5/modeling_longt5.py#L191) is converted to a constant False. I originally posted the error in Optimum (https://github.com/huggingface/optimum/issues/285), but @echarlaix asked me to open an issue here because this error concerns the ONNX export.
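To illustrate the tracing pitfall with a standalone, hypothetical example (this is not the LongT5 code): a plain Python branch on a tensor shape is evaluated once at trace time, so the exported graph permanently bakes in whichever branch the dummy input took.

```python
import torch

def f(x):
    if x.shape[-1] > 32:  # resolved once at trace time, not recorded in the graph
        return x * 2
    return x

traced = torch.jit.trace(f, torch.ones(1, 16))  # dummy input takes the False branch
print(traced(torch.ones(1, 64)))  # a long input still follows the short-input branch
```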
Code to reproduce is below:
```
!pip install transformers
!pip install transformers[onnx]
!python -m pip install git+https://github.com/huggingface/optimum.git
!python -m pip install "git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]"
!pip install datasets
```
```py
from optimum.onnxruntime import ORTModelForSeq2SeqLM
model = ORTModelForSeq2SeqLM.from_pretrained("google/long-t5-tglobal-base", from_transformers=True)  # full hub id (see thread)
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained('google/long-t5-tglobal-base')
onnx_summarization = pipeline("summarization", model=model, tokenizer=tokenizer)
text = "..."  # replace with something longer than 32 tokens if the number of global blocks is unchanged
pred = onnx_summarization(text)
```
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running LessOrEqual node. Name:'LessOrEqual_648' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:603 onnxruntime::Broadcaster::Broadcaster(gsl::span, gsl::span) largest <= 1 was false. Can broadcast 0 by 0 or 1. 16 is invalid.
```
### Expected behavior
Should work for very long sequence lengths with the default global block size, without error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18243/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18242
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18242/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18242/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18242/events
|
https://github.com/huggingface/transformers/pull/18242
| 1,313,532,429
|
PR_kwDOCUB6oc473pDC
| 18,242
|
Fix `no_trainer` CI
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes the `no_trainer` tests silently failing, due to a reason similar to the one fixed in Accelerate [here](https://github.com/huggingface/accelerate/pull/517):
- Adds a new way to call subprocesses that properly surfaces the stack trace raised in the error
- Lowers the passing accuracy threshold for the image classification example, as it reaches 62.5% on a single GPU but only 60% on multi-GPU
Requires https://github.com/huggingface/accelerate/pull/547 to be merged first
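For context, a hedged sketch of the subprocess pattern (illustrative only, not the exact helper added in this PR): capture the child's stderr and re-raise it, so a failing example test shows the real stack trace instead of failing silently.

```python
import subprocess
import sys

def run_command(command):
    # Surface the subprocess's own traceback in the raised error
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"Command {command} failed with:\n{result.stderr}")
    return result.stdout

# e.g. run_command([sys.executable, "path/to/example_no_trainer.py", "--help"])
```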
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18242/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18242",
"html_url": "https://github.com/huggingface/transformers/pull/18242",
"diff_url": "https://github.com/huggingface/transformers/pull/18242.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18242.patch",
"merged_at": 1658429097000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18241
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18241/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18241/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18241/events
|
https://github.com/huggingface/transformers/issues/18241
| 1,313,506,557
|
I_kwDOCUB6oc5OSoT9
| 18,241
|
Flax Support NLLB (or M2M100) model
|
{
"login": "acul3",
"id": 56231298,
"node_id": "MDQ6VXNlcjU2MjMxMjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/56231298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acul3",
"html_url": "https://github.com/acul3",
"followers_url": "https://api.github.com/users/acul3/followers",
"following_url": "https://api.github.com/users/acul3/following{/other_user}",
"gists_url": "https://api.github.com/users/acul3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acul3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acul3/subscriptions",
"organizations_url": "https://api.github.com/users/acul3/orgs",
"repos_url": "https://api.github.com/users/acul3/repos",
"events_url": "https://api.github.com/users/acul3/events{/privacy}",
"received_events_url": "https://api.github.com/users/acul3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @patil-suraj @sanchit-gandhi in case you'd like to guide @acul3 to contribute the Flax version of M2M100",
"Hey @acul3! Awesome, let's do it 💪 Happy to help you through the implementation! On a high-level, the process for adding this model will look something as follows:\r\n\r\n1. Copy across the modelling code from Flax Bart\r\n2. Modify the Flax modelling code to match the PyTorch NLLB/M2M100 implementation\r\n\\+ write any necessary tests along the way\r\n3. Check whether the Flax logits match with the PyTorch ones\r\n4. Iterate on step 2 until the check in step 3 passes!\r\n\r\nOnce we have a Flax model that matches the PyTorch logits, we can be confident our implementation is correct :)\r\n\r\nAs a starting point, we can copy across the Flax Bart modelling code. You can do this through the 'add-new-model-like' command:\r\nhttps://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the Flax Bart model:\r\nhttps://github.com/huggingface/transformers/tree/main/src/transformers/models/bart/modeling_flax_bart.py\r\n\r\nOnce you've done that, feel free to open a WIP PR. We can go from there!",
"Hi @sanchit-gandhi \nThank you for the help\n\nI'll start to work on this today by following the step and open WIP PR first\nWill ask some question/pointer after that..thanks again",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
### Feature request
Add Flax/JAX support for M2M100 so it can be optimized on TPU
### Motivation
NLLB is a great translation model that supports many languages and also has good accuracy.
It can be used to translate large available English datasets into other languages,
but this requires a lot of resources, such as multi-GPU parallelism, to cut translation time.
Since TPU access is easier to get (through the TRC program) than multi-GPU access, it would be nice to have NLLB/M2M100 in Flax.
To give another reason:
FlaxMarian on TPU is amazing... it can translate 100k English texts to Spanish in less than 4 minutes using Flax on TPUs with JAX parallelism [flax community slack](https://huggingface.slack.com/archives/C025LJDP962/p1626488974341000)
I myself have already translated almost 100M sentences with Marian Flax in about 3 days.
### Your contribution
Although I am not an expert at Flax/JAX yet, I can make an attempt to implement it,
but I need some pointers on how to do that:
- AFAIK, NLLB/M2M100 has a similar architecture to MBart; since Flax MBart is already implemented, maybe the port can start from that(?)
- need to figure out how to "convert" the NLLB/M2M100 position embeddings, etc. to JAX (see the sketch below), CMIIW
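For the last point, a rough standalone sketch of fairseq-style sinusoidal position embeddings, which M2M100 uses (the exact offset/padding handling in the real model may differ — treat this as an assumption to verify against the PyTorch implementation):

```python
import numpy as np

def sinusoidal_embeddings(num_positions: int, dim: int) -> np.ndarray:
    # fairseq-style table: first half sin, second half cos (dim assumed even)
    half = dim // 2
    freq = np.exp(np.arange(half) * -(np.log(10000.0) / (half - 1)))
    angles = np.arange(num_positions)[:, None] * freq[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

print(sinusoidal_embeddings(1026, 1024).shape)  # (1026, 1024)
```

Since these embeddings are deterministic, they can be computed on the fly in Flax rather than converted as learned weights.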
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18241/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18240
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18240/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18240/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18240/events
|
https://github.com/huggingface/transformers/issues/18240
| 1,313,452,425
|
I_kwDOCUB6oc5OSbGJ
| 18,240
|
Add callback that saves only best checkpoints
|
{
"login": "KseniaSycheva",
"id": 84267634,
"node_id": "MDQ6VXNlcjg0MjY3NjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/84267634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KseniaSycheva",
"html_url": "https://github.com/KseniaSycheva",
"followers_url": "https://api.github.com/users/KseniaSycheva/followers",
"following_url": "https://api.github.com/users/KseniaSycheva/following{/other_user}",
"gists_url": "https://api.github.com/users/KseniaSycheva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KseniaSycheva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KseniaSycheva/subscriptions",
"organizations_url": "https://api.github.com/users/KseniaSycheva/orgs",
"repos_url": "https://api.github.com/users/KseniaSycheva/repos",
"events_url": "https://api.github.com/users/KseniaSycheva/events{/privacy}",
"received_events_url": "https://api.github.com/users/KseniaSycheva/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sgugger ",
"We already have `save_total_limits` to limit the number of checkpoint saved, and with `load_best_model_at_end=True` the best checkpoint is always kept. Which use case that is not currently available would this new callback permit?",
"I haven't noticed that it is possible to use these options together. The only thing that is different in proposed callback is that all saved checkpoints perform better on the evaluation dataset than the rest of the checkpoints. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### Feature request
A new callback class that saves a checkpoint only if it performs better on the evaluation dataset than the previously saved checkpoints. It could work similarly to the [EvalCallback](https://stable-baselines3.readthedocs.io/en/master/guide/callbacks.html) provided by stable_baselines3.
### Motivation
This callback would enable evaluating the model frequently without using a lot of storage, because only a few checkpoints would be kept.
### Your contribution
Submitting PR
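For reference, a hedged sketch of how the existing `Trainer` options mentioned in the thread combine to approximate this (argument names per the current `transformers` API; values are illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",    # evaluate frequently
    save_strategy="steps",          # save on the same schedule
    save_total_limit=2,             # cap the number of checkpoints kept on disk
    load_best_model_at_end=True,    # the best checkpoint is always kept and reloaded
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```

Note that `save_total_limit` prunes the oldest checkpoints rather than the worst-performing ones, which is the gap the proposed callback would close.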
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18240/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18239
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18239/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18239/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18239/events
|
https://github.com/huggingface/transformers/issues/18239
| 1,313,424,832
|
I_kwDOCUB6oc5OSUXA
| 18,239
|
TF2 DeBERTaV2 runs super slow on TPUs
|
{
"login": "WissamAntoun",
"id": 44616226,
"node_id": "MDQ6VXNlcjQ0NjE2MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44616226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WissamAntoun",
"html_url": "https://github.com/WissamAntoun",
"followers_url": "https://api.github.com/users/WissamAntoun/followers",
"following_url": "https://api.github.com/users/WissamAntoun/following{/other_user}",
"gists_url": "https://api.github.com/users/WissamAntoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WissamAntoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WissamAntoun/subscriptions",
"organizations_url": "https://api.github.com/users/WissamAntoun/orgs",
"repos_url": "https://api.github.com/users/WissamAntoun/repos",
"events_url": "https://api.github.com/users/WissamAntoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WissamAntoun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @WissamAntoun, this is an interesting issue! I honestly have no idea what the cause could be, but the fact that it highlights that function is interesting. The reason is that the DeBERTa code was ported from PyTorch, and so we wrote our own implementation of `take_along_axis` because TF didn't have one. One thing to try would be to edit the code to use `tf.experimental.numpy.take_along_axis` instead of that function. If that doesn't work then we might have to see if we can do things in a different, more performant way.\r\n\r\nAlso, just in case XLA compilation is the issue, have you tried using `jit_compile=True` in `compile()` when running DeBERTa on GPU? If that also causes performance degradation then the problem is caused by XLA and not TPUs, and we can investigate from there.",
"Also cc @sanchit-gandhi because I'm not a TPU expert - don't worry about investigating this deeply, but if anything comes to mind when you read it, let me know!",
"@Rocketknight1 I read all the discussions that you had with Kamal about the `torch.gather` and `take_along_axis` .\r\n\r\nOn GPUs I already enabled XLA via `tf.config.optimizer.set_jit` and via T`F_XLA_FLAGS=\"--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit\"` but I was reading that this isn't the optimal way to do it, so I'm now trying the `jit_compile=True` and will report back.\r\n\r\nAlso I just finished testing `tf.experimental.numpy.take_along_axis`, on GPUs it improved performance by ~10% yet on TPUs I still have the same issue. I will also test the `jit_compile` on TPUs but I don't think it will solve anything.\r\n\r\nThanks a lot for the replies and for the effort you put in convert the pytorch code into TF ",
"runnig the training with `jit_compile=True` on GPU revealed a new bug. Then it is now an XLA/JIT issue not a TPU one\r\n\r\n<details>\r\n<summary style=\"font-size:14px\">View log dump</summary>\r\n<p>\r\n\r\n```md\r\n2022-07-21 23:36:18.107830: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at bcast_ops.cc:50 : \r\nINVALID_ARGUMENT: \r\nInput 0 to node `pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs`\r\nwith op BroadcastArgs must be a compile-time constant.\r\n\r\nXLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. \r\nThis error means that a shape or dimension argument could not be evaluated at compile time, \r\nusually because the value of the argument depends on a parameter to the computation, on a variable, \r\nor on a stateful operation such as a random number generator.\r\n\r\nStack trace for op definition: \r\nFile \"run_pretraining.py\", line 204, in <module>\r\n config = main(start_time)\r\nFile \"run_pretraining.py\", line 184, in main\r\n trained_model = run_customized_training_loop(\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 675, in run_customized_training_loop\r\n train_steps_strategy(\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 407, in train_steps_strategy\r\n if num_grad_accumulates != 1:\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 408, in train_steps_strategy\r\n for step_idx in tf.range(steps * num_grad_accumulates):\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 410, in train_steps_strategy\r\n strategy.run(_forward, args=(next(iterator),))\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 324, in _forward\r\n loss, model_outputs = model(inputs, is_training=True)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2491, in call\r\n if config.uniform_generator:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2496, in call\r\n mlm_output = self._get_masked_lm_output(\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2541, in _get_masked_lm_output\r\n if self._config.uniform_generator:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2550, in _get_masked_lm_output\r\n outputs = generator(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py\", line 1872, in run_call_with_unpacked_inputs\r\n )\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1880, in call\r\n outputs = 
self.deberta(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py\", line 1872, in run_call_with_unpacked_inputs\r\n )\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1617, in call\r\n encoder_outputs = self.encoder(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 527, in call\r\n for i, layer_module in enumerate(self.layer):\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 532, in call\r\n layer_outputs = layer_module(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 317, in call\r\n attention_outputs = self.attention(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 226, in call\r\n self_outputs = self.self(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 876, in call\r\n if self.relative_attention:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 878, in call\r\n rel_att = self.disentangled_att_bias(\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 991, in disentangled_att_bias\r\n if \"c2p\" in self.pos_att_type:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1012, in disentangled_att_bias\r\n c2p_att = tnp.take_along_axis(\r\n\r\n2022-07-21 23:36:18.184105: W 
tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at xla_ops.cc:248 : \r\nINVALID_ARGUMENT: \r\nInput 0 to node `pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs` \r\nwith op BroadcastArgs must be a compile-time constant.\r\n\r\nXLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. \r\nThis error means that a shape or dimension argument could not be evaluated at compile time, \r\nusually because the value of the argument depends on a parameter to the computation, \r\non a variable, or on a stateful operation such as a random number generator.\r\n\r\nStack trace for op definition: \r\nFile \"run_pretraining.py\", line 204, in <module>\r\n config = main(start_time)\r\nFile \"run_pretraining.py\", line 184, in main\r\n trained_model = run_customized_training_loop(\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 675, in run_customized_training_loop\r\n train_steps_strategy(\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 407, in train_steps_strategy\r\n if num_grad_accumulates != 1:\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 408, in train_steps_strategy\r\n for step_idx in tf.range(steps * num_grad_accumulates):\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 410, in train_steps_strategy\r\n strategy.run(_forward, args=(next(iterator),))\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 324, in _forward\r\n loss, model_outputs = model(inputs, is_training=True)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2491, in call\r\n if config.uniform_generator:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2496, in call\r\n mlm_output = self._get_masked_lm_output(\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2541, in _get_masked_lm_output\r\n if self._config.uniform_generator:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2550, in _get_masked_lm_output\r\n outputs = generator(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py\", line 1872, in run_call_with_unpacked_inputs\r\n )\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1880, in call\r\n outputs = self.deberta(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in 
__call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py\", line 1872, in run_call_with_unpacked_inputs\r\n )\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1617, in call\r\n encoder_outputs = self.encoder(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 527, in call\r\n for i, layer_module in enumerate(self.layer):\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 532, in call\r\n layer_outputs = layer_module(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 317, in call\r\n attention_outputs = self.attention(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 226, in call\r\n self_outputs = self.self(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 876, in call\r\n if self.relative_attention:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 878, in call\r\n rel_att = self.disentangled_att_bias(\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 991, in disentangled_att_bias\r\n if \"c2p\" in self.pos_att_type:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1012, in disentangled_att_bias\r\n c2p_att = tnp.take_along_axis(\r\n\r\n [[{{node pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs}}]]\r\nTraceback (most recent call last):\r\n File \"run_pretraining.py\", line 204, in <module>\r\n config = main(start_time)\r\n File \"run_pretraining.py\", line 184, in main\r\n trained_model = 
run_customized_training_loop(\r\n File \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 675, in run_customized_training_loop\r\n train_steps_strategy(\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py\", line 153, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py\", line 54, in quick_execute\r\n tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:\r\n\r\nInput 0 to node `pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs` \r\nwith op BroadcastArgs must be a compile-time constant.\r\n\r\nXLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. \r\nThis error means that a shape or dimension argument could not be evaluated at compile time, \r\nusually because the value of the argument depends on a parameter to the computation, \r\non a variable, or on a stateful operation such as a random number generator.\r\n\r\nStack trace for op definition: \r\nFile \"run_pretraining.py\", line 204, in <module>\r\n config = main(start_time)\r\nFile \"run_pretraining.py\", line 184, in main\r\n trained_model = run_customized_training_loop(\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 675, in run_customized_training_loop\r\n train_steps_strategy(\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 407, in train_steps_strategy\r\n if num_grad_accumulates != 1:\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 408, in train_steps_strategy\r\n for step_idx in tf.range(steps * num_grad_accumulates):\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 410, in train_steps_strategy\r\n strategy.run(_forward, args=(next(iterator),))\r\nFile \"/workspaces/nv-deberta-tf2/electra/model_training_utils.py\", line 324, in _forward\r\n loss, model_outputs = model(inputs, is_training=True)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2491, in call\r\n if config.uniform_generator:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2496, in call\r\n mlm_output = self._get_masked_lm_output(\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2541, in _get_masked_lm_output\r\n if self._config.uniform_generator:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 2550, in _get_masked_lm_output\r\n outputs = generator(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in 
error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py\", line 1872, in run_call_with_unpacked_inputs\r\n )\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1880, in call\r\n outputs = self.deberta(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_utils.py\", line 1872, in run_call_with_unpacked_inputs\r\n )\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1617, in call\r\n encoder_outputs = self.encoder(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 527, in call\r\n for i, layer_module in enumerate(self.layer):\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 532, in call\r\n layer_outputs = layer_module(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 317, in call\r\n attention_outputs = self.attention(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 226, in call\r\n self_outputs = self.self(\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 64, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py\", line 1096, in __call__\r\n outputs = call_fn(inputs, *args, **kwargs)\r\nFile \"/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py\", line 92, in error_handler\r\n return fn(*args, **kwargs)\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 876, in call\r\n if self.relative_attention:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 878, in call\r\n rel_att = self.disentangled_att_bias(\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 991, in 
disentangled_att_bias\r\n if \"c2p\" in self.pos_att_type:\r\nFile \"/workspaces/nv-deberta-tf2/electra/modeling_tf_deberta_v2.py\", line 1012, in disentangled_att_bias\r\n c2p_att = tnp.take_along_axis(\r\n\r\n [[{{node pretraining_model/tf_deberta_v2_for_masked_lm/deberta/encoder/layer_._0/attention/self/BroadcastArgs}}]]\r\n [[while/body/_1/while/StatefulPartitionedCall]] [Op:__inference_train_steps_strategy_177980]\r\n```\r\n\r\n</p></details>",
"@WissamAntoun Confirmed reproduction of the issue here. Our TF DeBERTa implementation seems to have issues with XLA - I'm investigating now.",
"@WissamAntoun We have a potential fix - I've confirmed that I can compile `microsoft/deberta-v3-small` with XLA on my local machine. Can you try installing this branch and let me know if this fixes the problem for you? You can use `pip install git+https://github.com/huggingface/transformers.git@deberta-xla-fixes`",
"I confirm it works on GPUs with XLA, and I got ~20% improved speedup.\r\nI'm still testing now on TPUs, will let you know ASAP",
"Weirdly enough TPUs didn't seem to care about the changes 😅 even after we removed all the if branches",
"Hmm. Can you check that you don't get the slowdown if you switch the model to another model, like BERT or ELECTRA, while keeping all of the other code the same (especially data loading)? I know the profiling indicates that the `GatherV2` is the problem, but I'm a little suspicious!",
"I tried disabling `relative_attention` in deberta, which makes the model a regular BERT, and the performance improved 40x 😅",
"@WissamAntoun So the issue really is in that gather! That's extremely interesting - with the simplified code, it's just a single call to `tf.gather`, but perhaps the `batch_dims` argument is not handled elegantly on TPU, or XLA converts it in a way that doesn't run well on TPU. \r\n\r\nIs it possible that some kind of memory spill is occurring? Can you try lowering your batch size and increasing steps_per_execution?\r\n\r\nIf that isn't it, then I have no idea - maybe there's some way to rewrite the gather, but I don't really know what to try!",
"@Rocketknight1 I tried your suggestions without any success, sadly!\r\n\r\nThen I tried replacing the whole `take_along_axis` function with `tf.gather(..,...,batch_dims=2)` which is equivalent, according to this test I made. GPU still runs fine, TPU still has the same issue 😔. \r\n\r\nI also ran out of ideas to try, now I'm just waiting for the TPU gods 😅\r\n\r\n<details>\r\n<summary style=\"font-size:14px\">View code</summary>\r\n<p>\r\n\r\n```python\r\n#%%\r\nimport tensorflow as tf\r\n\r\n#%%\r\nx_shape = [32, 128, 512]\r\nindices_shape = [32, 128, 128]\r\nx = tf.random.uniform(shape=x_shape)\r\nindices = tf.random.uniform(shape=indices_shape, minval=1, maxval=128, dtype=tf.int32)\r\n#%%\r\nflat_x = tf.reshape(x, (-1, x_shape[-1]))\r\nprint(flat_x.shape) # (4096, 512)\r\nflat_indices = tf.reshape(indices, (-1, indices_shape[-1]))\r\nprint(flat_indices.shape) # (4096, 128)\r\n\r\n#%%\r\ngathered = tf.gather(\r\n params=flat_x, indices=flat_indices, batch_dims=1, validate_indices=None\r\n)\r\nprint(gathered.shape) # (4096, 128)\r\ngathered_reshaped = tf.reshape(gathered, indices.shape)\r\nprint(gathered_reshaped.shape) # ( 32, 128, 128)\r\n\r\n# %%\r\ngathered2 = tf.gather(params=x, indices=indices, batch_dims=2, validate_indices=None)\r\nprint(gathered2.shape) # (32, 128, 128)\r\n# %%\r\ntf.assert_equal(gathered2, gathered_reshaped) # passes\r\n\r\n# %%\r\n\r\n```\r\n\r\n</p></details>\r\n",
"I'm clueless in that case - @patrickvonplaten @sanchit-gandhi do you have any idea why a `gather` or `take_along_axis` op which is performant on GPU and compiles with XLA would become a huge bottleneck on TPU?",
"In our JAX BLOOM experiments, we experienced significant improvements in performance by changing how we indexed. Swapping scatter ops for one-host broadcasts, we obtained 3-4x speed-ups in practice. The logic is largely lifted from T5X: https://github.com/google-research/t5x/blob/63d9addf628c6d8c547a407a32095fcb527bb20b/t5x/examples/scalable_t5/layers.py#L280-L284 \r\n\r\nI wonder if applying similar logic here and swapping the gather op to one-hot indexing might help?",
"DO you mean something to BERT one-hot embeddings ?https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/on_device_embedding.py#L79",
"Simply modifying the bottleneck function: https://github.com/huggingface/transformers/blob/f4e172716b91b477ce3cddc9a253094b7121a4b8/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L525\r\nTo use `one_hot` encodings as opposed to a `gather` op. The example you've liked looks like the right idea! Worth a try IMO!",
"I tried this, although I'm not sure if it's the best implementation\r\n\r\n```python\r\ndef take_along_axis(x, indices):\r\n\r\n one_hot_indices = tf.one_hot(indices, depth=x.shape[-1], dtype=x.dtype) # [B, S, P, D] => [B, 128, 128, 512]\r\n \r\n # [B, S, P, D] . [B, S, D, 1] = [B, S, P, 1]\r\n gathered = tf.squeeze(tf.matmul(one_hot_indices, tf.expand_dims(x, axis=-1)), axis=-1)\r\n return gathered\r\n```\r\n\r\nIt improved the speed from 20 seq/s to 110 seq/s. For reference, regular ELECTRA/BERT got ~800 seq/s.\r\n\r\nNow it's the reshape and squeeze operations that are \"wasting\" time:\r\n\r\n\r\n",
"@sanchit-gandhi is there a better implementation than mine, without `expand_dims` or `squeeze` since these are unfavorable operations on TPUs",
"Nice! A 5x speed up is a good start. If we can get another 5x we'll be in business. Thanks for linking the Tensorboard profile! Super helpful in identifying bottlenecks like these 🙏 \r\n\r\nInteresting to see the `expand_dims` and `squeeze` are now accruing large amounts of runtime. I'm not a TF user (it's mainly JAX on TPU for me!), so I'm not up to speed with implementation details, but my impression from the profile is that the shapes are unfavourable for XLA. Perhaps you could have a play around and see whether changing the tensor shapes / choice of TF ops have any effect? It's been the case for me in the past that using tensors of different shape can give big speed-ups. Is there a repo you could reference for XLA optimised TF code? For JAX, we usually look to the T5X repo when deciding on tensor shapes and trying out 'hacks' like these: https://github.com/google-research/t5x/tree/main/t5x\r\n\r\ncc @Rocketknight1 who's more up to speed in the TF sphere!",
"Hey @WissamAntoun! Any luck with this? Maybe also worth trying https://www.tensorflow.org/api_docs/python/tf/experimental/numpy/take_along_axis",
"Hey @sanchit-gandhi , I have already tried the exp. numpy function with no improvement at all compared to `gather` with `batch_dims=2`.\r\n\r\nI also tried going up to sequence length of `512`, I got the exact same speedup but it is still much slower than expected (around 20 seq/s for sentence length 512). I also changed batch sizes with no effect at all ",
"Okay probably worth sticking with the one-hot encoding hack then, seems most promising! I'm not a TF user so can't comment on the exact implementations changes you could make with the `expand_dims` or `squeeze` ops. Perhaps @gante could take a look here with his experience using TF and XLA?",
"> Now it's the reshape and squeeze operations that are \"wasting\" time\r\n\r\nInteresting -- I spent some time with TPU profiling on a different application (TF text generation with a myriad of models), and found that those two operations were part of the bottleneck (along XLA's `dynamic_update_slice`). They accounted for 50-70% of the execution time. Do you know if it is also a bottleneck for FLAX, @sanchit-gandhi (e.g. the cache updates [here](https://github.com/huggingface/transformers/blob/0b8c1b6994082950044452a670e8417a5ebc2db0/src/transformers/models/gpt2/modeling_flax_gpt2.py#L163))?\r\n\r\n",
"For JAX BLOOM we couldn't even compile the 176B parameter model with the naive implementation of `concatenate_to_cache`, yet alone benchmark which operations consumed the bulk of the execution time! We swapped it for this more efficient implementation (with one-hot encodings etc): https://github.com/huggingface/bloom-jax-inference/blob/2a04aa519d262729d54adef3d19d63879f81ea89/bloom_inference/modeling_bloom/modeling_bloom.py#L119\r\nCoincidentally, we've just run the JAX profiler for this implementation and are going through the traceback it with some of the Google JAX guys later today. Will report back on how performance fares!",
"> ```python\r\n> def take_along_axis(x, indices):\r\n> \r\n> one_hot_indices = tf.one_hot(indices, depth=x.shape[-1], dtype=x.dtype) # [B, S, P, D] => [B, 128, 128, 512]\r\n> \r\n> # [B, S, P, D] . [B, S, D, 1] = [B, S, P, 1]\r\n> gathered = tf.squeeze(tf.matmul(one_hot_indices, tf.expand_dims(x, axis=-1)), axis=-1)\r\n> return gathered\r\n> ```\r\n\r\n@gante Do you think the one-hot trick can be done without the `expands_dims` and `squeeze`, maybe then we can just dodge the whole problem",
"@sanchit-gandhi that's interesting! I'd be interested in knowing the pro tips for XLA (which should also apply to TF)\r\n\r\n@WissamAntoun Yeah, we can rework it with [`tf.einsum`](https://www.tensorflow.org/api_docs/python/tf/einsum) magic, assuming the operation can be rewritten with [Einstein notation](https://en.wikipedia.org/wiki/Einstein_notation) -- in this case, it is possible! Check the implementation below, give it a try, and let us know if it helped with speed on a TPU (my debug runs confirmed that they are numerically equivalent)\r\n\r\n```python\r\ndef take_along_axis(x, indices):\r\n # [B, S, P] -> [B, S, P, D]\r\n one_hot_indices = tf.one_hot(indices, depth=x.shape[-1], dtype=x.dtype)\r\n \r\n # if we ignore the first two dims, this is equivalent to multiplying a matrix (one hot) by a vector (x)\r\n # grossly abusing notation: [B, S, P, D] . [B, S, D] = [B, S, P]\r\n gathered = tf.einsum('ijkl,ijl->ijk', one_hot_indices, x)\r\n\r\n return gathered\r\n```",
"@gante I tested the `tf.einsum` implementation. It gave me the same performance as the `one_hot` trick, which is about ~120 seq/second. \r\nI tried it with different batch sizes but still it didn't change much.\r\n\r\nThis is a screenshot of the profiler:\r\n\r\n\r\n",
"I'm out of suggestions :( I suspect this is a good question for Google's XLA and TPU teams -- the problem is probably at a compiler/hardware level.",
"Yeah this is a weird and unexpected bug. Do you know someone we can get in contact with from Google's XLA or TPU team?\r\n\r\nAnd thanks a lot for the efforts you guys put into this issue!",
"@sanchit-gandhi do you know a good point of contact for TPU problems?"
] | 1,658
| 1,662
| 1,662
|
CONTRIBUTOR
| null |
### System Info
latest version of transformers, Colab TPU, tensorflow 2
### Who can help?
@kamalkraj @Rocketknight1 @BigBird01
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
It's currently hard to share the code and access to the Google bucket, but I believe any TF2 DeBERTaV2 code running on TPUs will have this issue.
### Expected behavior
I've been trying to train a DeBERTa v3 model on GPUs and TPUs. I got it to work on multi-node and multi-GPU setups using the NVIDIA Deep Learning Examples libraries: https://github.com/NVIDIA/DeepLearningExamples/blob/master/TensorFlow2/LanguageModeling/
I basically used the training setup and loop from the BERT code, the dataset utils from the ELECTRA code, and the model from Huggingface transformers with some changes in order to share embeddings.
On 6x A40 45GB GPUs I get around 1370 sentences per second during training (which is lower than what NVIDIA gets for ELECTRA, but it's fine).
OK, now the problem: on TPU I get **20** sentences per second.
I traced the issue back to the tf.gather function here https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py#L525
I ran TPU profiling and this is the output:

GatherV2 takes most of the time:

zoomed in pictures of the fast ops

Also, I'm not sure if this is TPU-specific, since on GPUs the training is ~30% slower compared to regular ELECTRA.
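For reference, here is the one-hot workaround discussed in the comments as a self-contained sketch (shapes are illustrative; this is meant as a drop-in for the `tf.gather(..., batch_dims=2)` call, not the exact patch that landed):
```python
import tensorflow as tf

def take_along_axis(x, indices):
    # x: [batch, seq, dim]; indices: [batch, seq, positions]
    # Build one-hot selectors instead of calling tf.gather(batch_dims=2),
    # which dominates the TPU profile above.
    one_hot_indices = tf.one_hot(indices, depth=x.shape[-1], dtype=x.dtype)
    # Contract the last axis of x against the one-hot matrix:
    # [b, s, p, d] . [b, s, d] -> [b, s, p]
    return tf.einsum("bspd,bsd->bsp", one_hot_indices, x)

x = tf.random.normal((2, 8, 16))
indices = tf.random.uniform((2, 8, 4), maxval=16, dtype=tf.int32)
diff = take_along_axis(x, indices) - tf.gather(x, indices, batch_dims=2)
print(float(tf.reduce_max(tf.abs(diff))))  # 0.0 -> numerically equivalent
```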
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18239/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18238
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18238/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18238/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18238/events
|
https://github.com/huggingface/transformers/pull/18238
| 1,313,238,694
|
PR_kwDOCUB6oc472pCm
| 18,238
|
Update all no_trainer scripts
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR updates the `no_trainer` scripts with the latest capabilities in accelerate:
- Includes the `gradient_accumulation` wrapper (usage sketched below)
- Adds the `gather_for_metrics` wrapper (also shown in the sketch below)
- Removes the explicit `step` param, since it breaks wandb trackers (it will never be pushed)
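For context, here is a minimal sketch of how the two wrappers fit into a `no_trainer`-style loop (the model, optimizer and data are toy placeholders, not taken from the scripts):
```python
import torch
from accelerate import Accelerator

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataloader = torch.utils.data.DataLoader(
    [(torch.randn(4), torch.tensor(0)) for _ in range(8)], batch_size=2
)

accelerator = Accelerator(gradient_accumulation_steps=2)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, labels in dataloader:
    with accelerator.accumulate(model):  # handles the sync/no-sync boundaries
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
    # gathers across processes and drops samples duplicated for padding
    preds = accelerator.gather_for_metrics(model(inputs).argmax(dim=-1))
```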
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18238/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18238",
"html_url": "https://github.com/huggingface/transformers/pull/18238",
"diff_url": "https://github.com/huggingface/transformers/pull/18238.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18238.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18237
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18237/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18237/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18237/events
|
https://github.com/huggingface/transformers/issues/18237
| 1,313,176,946
|
I_kwDOCUB6oc5ORX1y
| 18,237
|
ONNX runtime error after export of Deberta v3 SequenceClassification model
|
{
"login": "iiLaurens",
"id": 9915637,
"node_id": "MDQ6VXNlcjk5MTU2Mzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9915637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iiLaurens",
"html_url": "https://github.com/iiLaurens",
"followers_url": "https://api.github.com/users/iiLaurens/followers",
"following_url": "https://api.github.com/users/iiLaurens/following{/other_user}",
"gists_url": "https://api.github.com/users/iiLaurens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iiLaurens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iiLaurens/subscriptions",
"organizations_url": "https://api.github.com/users/iiLaurens/orgs",
"repos_url": "https://api.github.com/users/iiLaurens/repos",
"events_url": "https://api.github.com/users/iiLaurens/events{/privacy}",
"received_events_url": "https://api.github.com/users/iiLaurens/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @iiLaurens, thanks for the PR on fixing the export of DeBERTa!\r\n\r\nIn terms of your use case, another possibility to simplify all the code would be using the [optimum library](https://github.com/huggingface/optimum) which is an extension of transformers. You can use directly [ORTModels](https://github.com/huggingface/optimum/blob/main/optimum/onnxruntime/modeling_ort.py#L526) and the pipeline for inference which are natively integrated with transformers. \r\n\r\nHere is a snippet adapted to your case:\r\n```python\r\nfrom optimum.onnxruntime.modeling_ort import ORTModelForSequenceClassification\r\nfrom transformers import AutoTokenizer\r\n\r\nort_model = ORTModelForSequenceClassification.from_pretrained(model_id=\"results\", file_name=\"deberta_v3_seq.onnx\")\r\n# Or download directly from the hub once your fix makes its way to the main of transformers\r\n# ort_model = ORTModelForSequenceClassification.from_pretrained('microsoft/deberta-v3-xsmall')\r\ntokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-v3-xsmall', use_fast=True)\r\ninputs = tokenizer(\"Using DeBERTa with ONNX Runtime!\", return_tensors=\"pt\", return_token_type_ids=False)\r\npred = ort_model(**inputs)\r\n```\r\n```\r\n>>> pred\r\nSequenceClassifierOutput(loss=None, logits=tensor([[-0.0199, 0.1397]]), hidden_states=None, attentions=None)\r\n```\r\nBesides, you can also leverage other tools in optimum(graph optimization, quantization...) for accelerating your inference.\r\n\r\nCheers!"
] | 1,658
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
### System Info
- Transformers: 4.20.1.dev0 (master branch as of 2022-07-21)
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
The issue occurs both on a Linux notebook with GPU (Databricks platform) and on Windows without GPU.
**Do note that I use the latest development version of transformers, i.e. the current master branch of this repo.** This is necessary because there are changes to symbolic ops in the Deberta V3 model that have not made it into a stable release yet.
### Who can help?
@LysandreJik
### Information
- [X] My own modified scripts
### Tasks
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to make an ONNX export of a fine-tuned Deberta sequence classification model. Below are the steps to make such a model and export it to ONNX.
1. First instantiate a DeBERTa sequence model. This example will just use random weights, as there is no need for actual fine-tuning in this minimal example
2. Export to onnx
3. Test an inference using `onnxruntime`
```Python
from pathlib import Path
from onnxruntime import InferenceSession
from transformers.models.deberta_v2 import DebertaV2OnnxConfig
from transformers.onnx import export
from transformers import AutoTokenizer, AutoConfig, AutoModelForSequenceClassification
# Step 1
model_base = 'microsoft/deberta-v3-xsmall'
config = AutoConfig.from_pretrained(model_base)
tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(model_base)
# Step 2
onnx_path = Path(f"deberta.onnx")
onnx_config = DebertaV2OnnxConfig(config, task="sequence-classification")
export(tokenizer, model, onnx_config, 15, onnx_path)
# Step 3
session = InferenceSession(onnx_path.as_posix())
inputs = tokenizer("Using DeBERTa with ONNX Runtime!", return_tensors="np", return_token_type_ids=False)
input_feed = {k: v.astype('int64') for k, v in inputs.items()}
outputs = session.run(output_names=['logits'], input_feed=input_feed)
```
I would expect outputs from the inference model. However the error I am getting is:
```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Expand node. Name:'Expand_674' Status Message: invalid expand shape
```
### Expected behavior
Surprisingly, this model doesn't seem to work when the sequence length is anything other than 8. For example:
```Python
# Anything with a sequence length of 8 runs fine:
inputs = tokenizer(["Using Deberta V3!"], return_tensors="np", return_token_type_ids=False)
inputs1 = {k: v.astype('int64') for k, v in inputs.items()}
outputs = session.run(output_names=['logits'], input_feed=inputs1)
# Anything else doesn't:
inputs = tokenizer(["Using Deberta V3 with ONNX Runtime!"], return_tensors="np", return_token_type_ids=False)
inputs2 = {k: v.astype('int64') for k, v in inputs.items()}
outputs = session.run(output_names=['logits'], input_feed=inputs2)
# Multiples of 8 will also not work:
inputs = tokenizer(["Hello world. This is me. I will crash this model now!"], return_tensors="np", return_token_type_ids=False)
inputs3 = {k: v.astype('int64') for k, v in inputs.items()}
outputs = session.run(output_names=['logits'], input_feed=inputs3)
```
I was wondering if it maybe has anything to do with the dynamic axes. However, when I check the graph, it seems correct:
```Python
import onnx
m = onnx.load(str(onnx_path))
print(m.graph.input)
```
```
[name: "input_ids"
type {
tensor_type {
elem_type: 7
shape {
dim {
dim_param: "batch"
}
dim {
dim_param: "sequence"
}
}
}
}
, name: "attention_mask"
type {
tensor_type {
elem_type: 7
shape {
dim {
dim_param: "batch"
}
dim {
dim_param: "sequence"
}
}
}
}
]
```
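As a follow-up debugging step, the failing node can be inspected directly in the exported graph (a sketch using the standard `onnx` API; node names depend on the export):
```python
import onnx

m = onnx.load("deberta.onnx")
expand_nodes = [n for n in m.graph.node if n.op_type == "Expand"]
print(len(expand_nodes), "Expand nodes in the graph")

# The runtime error names 'Expand_674'; look at its inputs. If its shape
# input traces back to a constant, the export baked in the length-8 shape.
target = [n for n in expand_nodes if n.name == "Expand_674"]
if target:
    print(target[0].input)
```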
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18237/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18236
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18236/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18236/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18236/events
|
https://github.com/huggingface/transformers/pull/18236
| 1,313,162,560
|
PR_kwDOCUB6oc472YSb
| 18,236
|
Fix command of doc tests for local testing
|
{
"login": "oneraghavan",
"id": 3041890,
"node_id": "MDQ6VXNlcjMwNDE4OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3041890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oneraghavan",
"html_url": "https://github.com/oneraghavan",
"followers_url": "https://api.github.com/users/oneraghavan/followers",
"following_url": "https://api.github.com/users/oneraghavan/following{/other_user}",
"gists_url": "https://api.github.com/users/oneraghavan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oneraghavan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oneraghavan/subscriptions",
"organizations_url": "https://api.github.com/users/oneraghavan/orgs",
"repos_url": "https://api.github.com/users/oneraghavan/repos",
"events_url": "https://api.github.com/users/oneraghavan/events{/privacy}",
"received_events_url": "https://api.github.com/users/oneraghavan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey! You should update line 39 in a similar fashion :)",
"> Hey! You should update line 39 in a similar fashion :)\r\n\r\nYes, realised bit later, Done",
"@ydshieh Can we close this ? \r\n@LysandreJik Please point to a next good bug to pick up.",
"@oneraghavan, thanks for wanting to contribute! There are a lot of issues available [here](https://github.com/huggingface/transformers/issues). Feel free to take a look and find one you'd like to try your hand at!"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
In the `utils/prepare_for_doc_test.py` file, the command for testing the doc tests locally had a typo; this PR fixes it.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18236/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18236",
"html_url": "https://github.com/huggingface/transformers/pull/18236",
"diff_url": "https://github.com/huggingface/transformers/pull/18236.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18236.patch",
"merged_at": 1658819231000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18235
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18235/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18235/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18235/events
|
https://github.com/huggingface/transformers/pull/18235
| 1,313,154,868
|
PR_kwDOCUB6oc472Wnv
| 18,235
|
Correct BLOOM parameters to 176B
|
{
"login": "muhammad-ahmed-ghani",
"id": 63394104,
"node_id": "MDQ6VXNlcjYzMzk0MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/63394104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muhammad-ahmed-ghani",
"html_url": "https://github.com/muhammad-ahmed-ghani",
"followers_url": "https://api.github.com/users/muhammad-ahmed-ghani/followers",
"following_url": "https://api.github.com/users/muhammad-ahmed-ghani/following{/other_user}",
"gists_url": "https://api.github.com/users/muhammad-ahmed-ghani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muhammad-ahmed-ghani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muhammad-ahmed-ghani/subscriptions",
"organizations_url": "https://api.github.com/users/muhammad-ahmed-ghani/orgs",
"repos_url": "https://api.github.com/users/muhammad-ahmed-ghani/repos",
"events_url": "https://api.github.com/users/muhammad-ahmed-ghani/events{/privacy}",
"received_events_url": "https://api.github.com/users/muhammad-ahmed-ghani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18235). All of your documentation changes will be reflected on that endpoint.\r\n\r\nGreat!",
"Thanks for the fix ! 🚀 ",
"awesome, thanks for fixing @muhammad-ahmed-ghani!"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18235/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18235",
"html_url": "https://github.com/huggingface/transformers/pull/18235",
"diff_url": "https://github.com/huggingface/transformers/pull/18235.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18235.patch",
"merged_at": 1658499469000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18234
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18234/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18234/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18234/events
|
https://github.com/huggingface/transformers/issues/18234
| 1,313,142,672
|
I_kwDOCUB6oc5ORPeQ
| 18,234
|
Longformer, BigBird take same time to run in sparse mode as well as full-mode
|
{
"login": "allohvk",
"id": 109533797,
"node_id": "U_kgDOBodaZQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109533797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allohvk",
"html_url": "https://github.com/allohvk",
"followers_url": "https://api.github.com/users/allohvk/followers",
"following_url": "https://api.github.com/users/allohvk/following{/other_user}",
"gists_url": "https://api.github.com/users/allohvk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allohvk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allohvk/subscriptions",
"organizations_url": "https://api.github.com/users/allohvk/orgs",
"repos_url": "https://api.github.com/users/allohvk/repos",
"events_url": "https://api.github.com/users/allohvk/events{/privacy}",
"received_events_url": "https://api.github.com/users/allohvk/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@ydshieh It gets a bit more wierder. Today I tried to directly use the Longformer bypassing Huggingface. \r\n\r\nIt needed minor changes to the above code. The link is here: \r\nhttps://colab.research.google.com/drive/1R5uDsbl3ZmUIccZtefVNBs3CXU_vcDZd?usp=sharing\r\n\r\nThe observations continues to be perplexing:\r\nCASE 1:\r\nATT_MODE = 'sliding_chunks'; 100% LOCAL attention ie attention_mask = 1 for all tokens\r\nSLIDE_WIN_SIZE = 256(default) takes between 9-10 hours to train\r\nSLIDE_WIN_SIZE = 1024 takes between 9-10 hours to train\r\nObservation: Sparse attention with 256 tokens windowsize should not take same fine-tuning time as 1024 tokens\r\n\r\nCASE 2:\r\nATT_MODE = 'sliding_chunks'; NO attention ie attention_mask = 0 for all tokens\r\nSLIDE_WIN_SIZE is immaterial\r\nObservation: It is observed that even if none of tokens attend to each other, training time taken is same as case 1 \r\nabove ie 9-10 hours which should not be the case\r\n\r\nCASE 3:\r\nATT_MODE = 'sliding_chunks'; 100% Global attention: ie attention_mask = 2\r\nSLIDE_WIN_SIZE is immaterial\r\nObservation: With 100% global attention, every token attends to each other. It is observed that if all tokens attend to each other, training time taken is 16-17 hours. This training time should be similar to Case 4 which is NOT the case\r\n\r\nCase 4: This is the most bizzarre\r\nATT_MODE = 'n2'\r\nWe can simply set choose the attention mode = 'n2' which is regular quadratic attention. Theoritically this should take\r\nsame training time as Case 3 (when all tokens are marked as global)\r\nObservation: n2 attention takes the lowest training time of approx 2 hours only which is exact opposite of what LOngformer is supposed to do !!!\r\n\r\nShould I open a bug directly with the Longformer GITHUB?",
"Hi @allohvk\r\n\r\nAfter doing some experiments, I think we need **really** long sequences and attention window size to see the benefits of attention window size. Here is the main summary, which is from the 2 tables below:\r\n\r\n## Summary\r\n- with tiny model, the effect of attention window size is more clear, especially on **CPU**\r\n - large model size has more overhead on other layers (for example, intermediate linear layers)\r\n- for a fixed model size, the effect is even more clear when the `max_len` get larger\r\n- with GPU, (which is very fast), the effect is less clear, but we can still see it with very long sequence/att_win (16384)\r\n\r\n### Model size \r\n\r\n- **Tiny**: n_layers = 1, hidden_size = 1, intermediate_size = 1\r\n- **Base**: n_layers = 12, hidden_size = 256, intermediate_size = 1024\r\n- **Large**: n_layers = 24, hidden_size = 1024, intermediate_size = 4096\r\n\r\n⚠️ **(Be careful with `it/s` and `s/it` below)**\r\n\r\n### CPU (256G RAM)\r\n\r\n| CPU | Tiny | Base | Large |\r\n| ------------- | -------------: | -------------: | -------------: |\r\n| max_len 2048 , attn_win 512 | 19.74 it/s | 1.02 s/it | 5.92 s/it |\r\n| max_len 2048 , attn_win 1024 | 14.42 it/s | 1.25 s/it | 6.47 s/it |\r\n| max_len 2048 , attn_win 2048 | 13.25 it/s | 1.48 s/it | 6.69 s/it |\r\n| max_len 4096, attn_win 512 | 16.55 it/s | 1.61 s/it | 10.31 s/it |\r\n| max_len 4096, attn_win 1024 | 10.00 it/s | 2.20 s/it | 11.29 s/it |\r\n| max_len 4096, attn_win 2048 | 4.84 it/s | 3.85 s/it | 13.47 s/it |\r\n| max_len 4096, attn_win 4096 | 3.18 it/s | 6.15 s/it | 15.49 s/it |\r\n| max_len 16384, attn_win 512 | 3.51 it/s | 5.61 s/it | 42.33 s/it |\r\n| max_len 16384, attn_win 1024 | 2.03 it/s | 8.08 s/it | 48.13 s/it |\r\n| max_len 16384, attn_win 2048 | 1.12 it/s | 12.03 s/it | 56.93 s/it |\r\n| max_len 16384, attn_win 4096 | 1.62 s/it | 20.22 s/it | 87.87 s/it |\r\n| max_len 16384, attn_win 8192 | 3.02 s/it | 34.67 s/it | 131.81 s/it |\r\n| max_len 16384, attn_win 16384 | 5.00 s/it | 56.79 s/it | 187.91 s/it |\r\n\r\n### GPU (A100)\r\n\r\n| GPU | Tiny | Base | Large |\r\n| ------------- | -------------: | -------------: | -------------: |\r\n| max_len 2048 , attn_win 512 | 25.48 it/s | 5.15 it/s | 2.57 it/s |\r\n| max_len 2048 , attn_win 1024 | 26.33 it/s | 5.10 it/s | 2.42 it/s |\r\n| max_len 2048 , attn_win 2048 | 26.52 it/s | 5.09 it/s | 2.10 it/s |\r\n| max_len 4096, attn_win 512 | 25.55 it/s | 5.26 it/s | 2.32 it/s |\r\n| max_len 4096, attn_win 1024 | 25.73 it/s | 5.10 it/s | 2.01 it/s |\r\n| max_len 4096, attn_win 2048 | 24.23 it/s | 4.63 it/s | 1.52 it/s |\r\n| max_len 4096, attn_win 4096 | 21.30 it/s | 3.76 it/s | 1.05 it/s |\r\n| max_len 16384, attn_win 512 | 7.39 it/s | 4.24 it/s | 1.07 it/s |\r\n| max_len 16384, attn_win 1024 | 13.30 it/s | 3.37 it/s | 1.25 s/it |\r\n| max_len 16384, attn_win 2048 | 20.17 it/s | 2.33 it/s | 1.88 s/it |\r\n| max_len 16384, attn_win 4096 | 16.50 it/s | 1.44 it/s | N/A |\r\n| max_len 16384, attn_win 8192 | 13.46 it/s | 1.21 s/it | N/A |\r\n| max_len 16384, attn_win 16384 | 9.04 it/s | 2.16 s/it | N/A |",
"For the record, here are the 2 scripts I used to measure running time (copied from yours with modification)\r\n\r\n```python\r\npython run.py\r\n```\r\n\r\n### run.py\r\n```python\r\n\r\nimport os\r\nimport json\r\n\r\n\r\ndef run(attention_window, steps, batch_size, max_length):\r\n\r\n os.system(\"rm -rf output.txt\")\r\n os.system(f\"python debug.py {attention_window} {steps} {batch_size} {max_length} > output.txt 2>&1\")\r\n\r\n with open(\"output.txt\") as fp:\r\n for line in fp:\r\n if f\"{steps - 1}/{steps}\" in line:\r\n line = line.strip()\r\n idx = line.find(f\"{steps - 1}/{steps}\")\r\n line = line[idx:]\r\n if \"Initializing global\" in line:\r\n idx = line.find(\"Initializing global\")\r\n line = line[:idx]\r\n line = line.strip()\r\n return line\r\n\r\nres = {}\r\nsteps = 10\r\n\r\nfor batch_size in [1]:\r\n for max_length in [2048, 4096, 16384]:\r\n for attention_window in [512, 1024, 2048, 4096, 8192, 16384]:\r\n if attention_window > max_length:\r\n continue\r\n r = run(attention_window=attention_window, steps=steps, batch_size=batch_size, max_length=max_length)\r\n print(f\"(attn_win: {attention_window}, batch_size: {batch_size}, max_len: {max_length}) --> {r}\")\r\n print(\"=\" * 40)\r\n\r\n res[f\"(attn_win: {attention_window}, batch_size: {batch_size}, max_len: {max_length})\"] = r\r\n\r\n with open(\"results.json\", \"w\") as fp:\r\n json.dump(res, fp, indent=4, ensure_ascii=False)\r\n```\r\n\r\n### debug.py\r\n```python\r\nimport sys\r\n\r\nimport torch\r\nimport datasets\r\nimport transformers\r\nfrom transformers import BigBirdForSequenceClassification, Trainer, TrainingArguments, AutoTokenizer, AutoModel\r\nfrom transformers.models.longformer.modeling_longformer import LongformerForSequenceClassification, LongformerConfig\r\nfrom sklearn.metrics import accuracy_score\r\nfrom torch.utils.data import Dataset\r\n\r\nimport logging\r\n# logging.disable(logging.INFO)\r\n\r\n\r\ndef measure(attention_window, steps, batch_size, max_length):\r\n\r\n SLIDE_WIN_SIZE = attention_window\r\n\r\n STEPS = steps\r\n BATCH_SIZE = batch_size\r\n GRAD_ACCUMULATION_STEPS = 1\r\n LEN = max_length\r\n\r\n MODEL = 'allenai/longformer-base-4096'\r\n LONGFORMER = True\r\n CACHE_ROOT = \"./\"\r\n\r\n train_data, test_data = datasets.load_dataset('imdb', split=['train', 'test'], cache_dir=f'{CACHE_ROOT}/data')\r\n\r\n config = LongformerConfig.from_pretrained(MODEL, num_labels=2, return_dict=True)\r\n\r\n config.num_hidden_layers = 12\r\n config.hidden_size = 256\r\n config.num_attention_heads = 1\r\n config.intermediate_size = 1024\r\n\r\n config.attention_window = SLIDE_WIN_SIZE\r\n\r\n model = LongformerForSequenceClassification(config=config)\r\n tokenizer = AutoTokenizer.from_pretrained(MODEL, max_length=LEN, cache_dir=f'{CACHE_ROOT}/data')\r\n\r\n print(\"DEFAULT - Sliding window width across layers\", model.config.attention_window)\r\n model.config.attention_window = SLIDE_WIN_SIZE\r\n print(\"UPDATED - Sliding window width across layers\", model.config.attention_window)\r\n\r\n def tokenization(batched_text):\r\n return tokenizer(batched_text['text'], padding = 'max_length', truncation=True, max_length = LEN)\r\n\r\n train_data = train_data.map(tokenization, batched = True, batch_size = len(train_data))\r\n test_data = test_data.map(tokenization, batched = True, batch_size = len(test_data))\r\n\r\n train_data.set_format('torch', columns=['input_ids', 'attention_mask', 'label'])\r\n test_data.set_format('torch', columns=['input_ids', 'attention_mask', 'label'])\r\n\r\n def 
compute_metrics(pred):\r\n labels = pred.label_ids\r\n preds = pred.predictions.argmax(-1)\r\n acc = accuracy_score(labels, preds)\r\n return {'accuracy': acc}\r\n\r\n training_args = TrainingArguments(\r\n output_dir=f'{CACHE_ROOT}/results',\r\n # num_train_epochs=1,\r\n per_device_train_batch_size=BATCH_SIZE,\r\n max_steps=STEPS,\r\n gradient_accumulation_steps=GRAD_ACCUMULATION_STEPS,\r\n warmup_steps=160,\r\n weight_decay=0.01,\r\n learning_rate=2e-5,\r\n fp16=False, # True,\r\n dataloader_num_workers=2,\r\n logging_strategy=\"steps\",\r\n logging_steps=1,\r\n )\r\n\r\n trainer = Trainer(model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_data)\r\n trainer.train()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\r\n data = sys.argv[1:]\r\n print(data)\r\n data = [int(x) for x in data]\r\n measure(*data)\r\n```",
"Thank you so much @ydshieh The observations are fascinating. One would think they would be part of the actual BigBird and Longformer papers but they are not. The benefits of changing the hyperparameters like sliding_window, global tokens etc manifest at really high seq sizes (not 2048 or even 4096 but 8000 or 16000). Because I was testing on a GPU and at a size of 2048, I could hardly see any difference. Thank you for your detailed testing and observations.\r\n\r\nIn fact this means that there is a gap to squeeze in couple of new transformer models/white papers which specifically address the max_seqlen of 512 - 4096 space in a non-quadratic way such that it makes a meaningful difference in training time. Hope someone comes out with a new model soon :)",
"No good "
] | 1,658
| 1,668
| 1,658
|
NONE
| null |
### System Info
Transformers: 4.20.1
Python: 3.8.12
Pretrained models & tokenizer from HF: "allenai/longformer-base-4096" and "google/bigbird-roberta-base"
Longformer: Takes the same time to train (fine-tune) a pretrained model for different sliding window sizes of 256, 512, 1024 or 2048. One would expect that at lower sliding window sizes, the training times should be lower.
BigBird: Same problem as above. In fact, BigBird has a simple switch to change from sparse attention to full attention. The training time taken in both cases is roughly the same, which seems to point to some issue.
Small but complete source code to simulate:
https://colab.research.google.com/drive/1nm7a-qJseNSCkAB5_3QNkVSrHc8zePAV?usp=sharing
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1nm7a-qJseNSCkAB5_3QNkVSrHc8zePAV?usp=sharing
### Expected behavior
Longformer: Should take different times to train (fine-tune) a pretrained model for different sliding window sizes of 256, 512, 1024 or 2048. One would expect that at lower sliding window sizes, the training times should be lower.
BigBird: Same problem as above. In fact, BigBird has a simple switch to change from sparse attention to full attention. The training time taken in both cases is roughly the same, which seems to point to some issue.
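For reference, a minimal sketch of where the sliding window size is configured in the linked Colab (standard HF Longformer API):
```python
from transformers import LongformerConfig, LongformerForSequenceClassification

config = LongformerConfig.from_pretrained("allenai/longformer-base-4096", num_labels=2)
config.attention_window = 256  # one window width applied to every layer
model = LongformerForSequenceClassification(config=config)
print(model.config.attention_window)  # smaller windows should mean faster training
```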
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18234/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18233
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18233/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18233/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18233/events
|
https://github.com/huggingface/transformers/pull/18233
| 1,312,980,192
|
PR_kwDOCUB6oc471wF7
| 18,233
|
Make errors for loss-less models more user-friendly
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
A common mistake beginners encounter is trying to fine-tune with the `Trainer` one of the AutoModel classes, which do not have any head and can't be fine-tuned directly. This PR makes the `Trainer` error at init when it receives one such model, and also adds a more helpful error message when the outputs of the model don't have a loss.
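For illustration, the kind of snippet this guards against (a hypothetical minimal repro, not code from the PR):
```python
from transformers import AutoModel, Trainer, TrainingArguments

model = AutoModel.from_pretrained("bert-base-uncased")  # bare encoder: no head, no loss
args = TrainingArguments(output_dir="out")

# With this PR, constructing the Trainer with a head-less model raises a clear
# error up front, instead of failing mid-training with a cryptic missing-loss error.
trainer = Trainer(model=model, args=args)
```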
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18233/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18233",
"html_url": "https://github.com/huggingface/transformers/pull/18233",
"diff_url": "https://github.com/huggingface/transformers/pull/18233.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18233.patch",
"merged_at": 1658397154000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18232
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18232/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18232/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18232/events
|
https://github.com/huggingface/transformers/pull/18232
| 1,312,913,233
|
PR_kwDOCUB6oc471hi8
| 18,232
|
Fix TrainingArguments help section
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
A typo was introduced in #18134: a trailing comma that shouldn't be there. It broke `--help` for all example scripts, as reported in #18222. This PR fixes it and adds a type annotation.
Fixes #18222
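To see why a stray trailing comma is so disruptive here: in Python it silently turns a value into a one-element tuple, which then confuses the argument parser when it builds `--help` (illustrative, not the actual diff):
```python
default_value = "linear",   # trailing comma -> this is the tuple ("linear",)
print(type(default_value))  # <class 'tuple'>

default_value = "linear"    # without the comma, a plain str as intended
print(type(default_value))  # <class 'str'>
```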
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18232/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18232/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18232",
"html_url": "https://github.com/huggingface/transformers/pull/18232",
"diff_url": "https://github.com/huggingface/transformers/pull/18232.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18232.patch",
"merged_at": 1658394205000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18231
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18231/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18231/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18231/events
|
https://github.com/huggingface/transformers/issues/18231
| 1,312,904,437
|
I_kwDOCUB6oc5OQVT1
| 18,231
|
Conflict between pyctcdecode and Wav2Vec2ProcessorWithLM
|
{
"login": "voidful",
"id": 10904842,
"node_id": "MDQ6VXNlcjEwOTA0ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/10904842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/voidful",
"html_url": "https://github.com/voidful",
"followers_url": "https://api.github.com/users/voidful/followers",
"following_url": "https://api.github.com/users/voidful/following{/other_user}",
"gists_url": "https://api.github.com/users/voidful/gists{/gist_id}",
"starred_url": "https://api.github.com/users/voidful/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/voidful/subscriptions",
"organizations_url": "https://api.github.com/users/voidful/orgs",
"repos_url": "https://api.github.com/users/voidful/repos",
"events_url": "https://api.github.com/users/voidful/events{/privacy}",
"received_events_url": "https://api.github.com/users/voidful/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Maybe of interest to @patrickvonplaten @anton-l @sanchit-gandhi** ",
"Hi @voidful. The function [`get_missing_alphabet_tokens`](https://github.com/huggingface/transformers/blob/99eb9b523f9b9ea6096323ce5610ce6633acc88a/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L187) will only replace 'special' tokens associated with CTC decoding, namely:\r\n\r\n- The 'blank' token\r\n- The 'pad' token\r\n- The 'word delimiter' token\r\n\r\nThe function is used to highlight discrepancies between the tokenizer and decoder vocabularies. The tokens highlighted as missing `{'', '⁇', ' '}` are not filtered by this function, and thus appear to be missing in the decoder vocabulary.\r\n\r\nMay I ask, what is it exactly that you are proposing? If we you could provide a code snippet to reproduce this behaviour it would be much appreciated.",
"My situation is that I have a fine-tuned xlsr model, and I want to add kenlm on top of it. \r\nAnd I build the decoder using `build_ctcdecoder`, the label will be the same as our tokenizer vocabulary.\r\nTherefore It will have discrepancies on https://github.com/huggingface/transformers/blob/99eb9b523f9b9ea6096323ce5610ce6633acc88a/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L187\r\n\r\nI suggest not to replace the token when tokenizer and decoder vocabularies are the same.\r\n\r\nHere is my code:\r\nhttps://colab.research.google.com/drive/1IR8cwVjkflJhj0e7te_iAdYfuNlKDVzr?usp=sharing\r\n",
"Thanks for the code-snippet! I haven't been able to reproduce on the other template examples (e.g. https://discuss.huggingface.co/t/how-to-create-wav2vec2-with-language-model/12703). Will look more in-depth as to why the exception is being thrown for the use case in the Colab!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey @voidful, sorry about the delayed reply. I've taken a deeper look into your issue - it looks as though there is a mis-match between the tokeniser and LM's vocabularies (12305 tokens to be exact): https://colab.research.google.com/drive/1v1qd4CUdSXKmrSYIMqMzMk_KCUMfMWu9?usp=sharing\r\n\r\nFor LM boosted beam-search decoding for CTC, we need the vocabulary of the LM to match that of the tokeniser one-to-one. You can ensure this by training your LM using the same method that you use to train the Wav2Vec2 tokeniser. You then shouldn't have to override the method `decoder._alphabel.labels`: the vocabularies should already match (barring the special tokens).\r\n\r\nSee this example for creating a tokeniser: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/get_ctc_tokenizer.py\r\n\r\nAnd this example for creating a corresponding LM: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/get_ctc_ngram.py\r\n\r\nThis blog also explains succinctly how one can train and instantiate an LM: https://huggingface.co/blog/wav2vec2-with-ngram",
"> Hey @voidful, sorry about the delayed reply. I've taken a deeper look into your issue - it looks as though there is a mis-match between the tokeniser and LM's vocabularies (12305 tokens to be exact): https://colab.research.google.com/drive/1v1qd4CUdSXKmrSYIMqMzMk_KCUMfMWu9?usp=sharing\r\n> \r\n> For LM boosted beam-search decoding for CTC, we need the vocabulary of the LM to match that of the tokeniser one-to-one. You can ensure this by training your LM using the same method that you use to train the Wav2Vec2 tokeniser. You then shouldn't have to override the method `decoder._alphabel.labels`: the vocabularies should already match (barring the special tokens).\r\n> \r\n> See this example for creating a tokeniser: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/get_ctc_tokenizer.py\r\n> \r\n> And this example for creating a corresponding LM: https://github.com/sanchit-gandhi/seq2seq-speech/blob/main/get_ctc_ngram.py\r\n> \r\n> This blog also explains succinctly how one can train and instantiate an LM: https://huggingface.co/blog/wav2vec2-with-ngram\r\n\r\nI see, the reason is that I use a bpe vocabulary to train the ctc model, it will not be match to KenLM, so I have to patch the vocabulary to make sure not deleting the bpe token. "
] | 1,658
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
### System Info
transformers 4975002df50c472cbb6f8ac3580e475f570606ab
pyctcdecode 9afead58560df07c021aa01285cd941f70fe93d5
### Who can help?
@patrici
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Error:
` The tokens {'', '⁇', ' '} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'', '⁇', ' '} in the decoder's alphabet.`
Reason:
`get_missing_alphabet_tokens` will replace special tokens
https://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L196
However, if we `build_ctcdecoder` using the same tokenizer vocab, there will always be a mismatch.
### Expected behavior
A straightforward fix is to do the same mapping on `build_ctcdecoder`:
```python
from transformers import AutoProcessor
from pyctcdecode.alphabet import BLANK_TOKEN_PTN, UNK_TOKEN, UNK_TOKEN_PTN, Alphabet
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ProcessorWithLM
model_to_add_lm = "wav2vec2-large-xxxxx"
lm_arpa_path = "xxxxx.arpa"
processor = AutoProcessor.from_pretrained(model_to_add_lm)
vocab_dict = processor.tokenizer.get_vocab()
sorted_vocab_dict = {k: v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}
alphabet = list(sorted_vocab_dict.keys())
for i, token in enumerate(alphabet):
if BLANK_TOKEN_PTN.match(token):
alphabet[i] = ""
if token == processor.tokenizer.word_delimiter_token:
alphabet[i] = " "
if UNK_TOKEN_PTN.match(token):
alphabet[i] = UNK_TOKEN
decoder = build_ctcdecoder(
labels=alphabet,
kenlm_model_path=lm_arpa_path,
)
decoder._alphabet._labels = alphabet
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=processor.tokenizer,
decoder=decoder
)
processor_with_lm.save_pretrained("xxxxxx")
```
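For completeness, a sketch of how the saved processor would then be used for decoding (standard `Wav2Vec2ProcessorWithLM` usage; the logits here are random placeholders, not real model outputs):
```python
import numpy as np
from transformers import Wav2Vec2ProcessorWithLM

processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("xxxxxx")
vocab_size = len(processor_with_lm.tokenizer)
logits = np.random.randn(1, 100, vocab_size)  # placeholder CTC logits [batch, time, vocab]
transcription = processor_with_lm.batch_decode(logits).text
print(transcription)
```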
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18231/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18230
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18230/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18230/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18230/events
|
https://github.com/huggingface/transformers/pull/18230
| 1,312,881,533
|
PR_kwDOCUB6oc471auk
| 18,230
|
Translation/debugging
|
{
"login": "nickprock",
"id": 11136646,
"node_id": "MDQ6VXNlcjExMTM2NjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11136646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickprock",
"html_url": "https://github.com/nickprock",
"followers_url": "https://api.github.com/users/nickprock/followers",
"following_url": "https://api.github.com/users/nickprock/following{/other_user}",
"gists_url": "https://api.github.com/users/nickprock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickprock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickprock/subscriptions",
"organizations_url": "https://api.github.com/users/nickprock/orgs",
"repos_url": "https://api.github.com/users/nickprock/repos",
"events_url": "https://api.github.com/users/nickprock/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickprock/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
* added debugging.mdx
* updated _toctree.yml
See issue: [#17459](https://github.com/huggingface/transformers/issues/17459)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@omarespejel @sgugger @mfumanelli
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18230/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18230",
"html_url": "https://github.com/huggingface/transformers/pull/18230",
"diff_url": "https://github.com/huggingface/transformers/pull/18230.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18230.patch",
"merged_at": 1658394146000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18229
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18229/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18229/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18229/events
|
https://github.com/huggingface/transformers/pull/18229
| 1,312,868,239
|
PR_kwDOCUB6oc471X4t
| 18,229
|
start from 1.12, torch_ccl is renamed as oneccl_bindings_for_pytorch …
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@yao-matrix @liangan1 please review",
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger document has been uploaded",
"Hi, @sgugger ,this fix is aligned with what we do in the accelerate PR, without the correct module import, the DDP could not work with CCL backend",
"@sgugger thanks for the careful review. doc is updated based one your comment"
] | 1,658
| 1,666
| 1,658
|
CONTRIBUTOR
| null |
…and should import it before use
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
# What does this PR do?
when run the transformer with torch 1.12 and we should pip install one ccl (version 1.12) as well to enable DDP finetune in cpu.
python -m pip install oneccl_bind_pt==1.12.0 -f https://developer.intel.com/ipex-whl-stable
from 1.12.0 the module name will be changed to oneccl_bindings_for_pytorch. and should be imported before use. or else
error will happen.
Fixes # (issue)
as described above.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Library:
- trainer: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18229/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18229",
"html_url": "https://github.com/huggingface/transformers/pull/18229",
"diff_url": "https://github.com/huggingface/transformers/pull/18229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18229.patch",
"merged_at": 1658934941000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18228
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18228/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18228/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18228/events
|
https://github.com/huggingface/transformers/issues/18228
| 1,312,862,833
|
I_kwDOCUB6oc5OQLJx
| 18,228
|
VisualBERT, visual feature projection.
|
{
"login": "shiv6891",
"id": 9869470,
"node_id": "MDQ6VXNlcjk4Njk0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9869470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shiv6891",
"html_url": "https://github.com/shiv6891",
"followers_url": "https://api.github.com/users/shiv6891/followers",
"following_url": "https://api.github.com/users/shiv6891/following{/other_user}",
"gists_url": "https://api.github.com/users/shiv6891/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shiv6891/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shiv6891/subscriptions",
"organizations_url": "https://api.github.com/users/shiv6891/orgs",
"repos_url": "https://api.github.com/users/shiv6891/repos",
"events_url": "https://api.github.com/users/shiv6891/events{/privacy}",
"received_events_url": "https://api.github.com/users/shiv6891/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gchhablani ",
"@Shiv681991 Can you please share some code examples of what you are trying to do? It'll help me replicate and understand the issue better.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
The default implementation takes in 1x197x768 visual features and raises an error while multiplying 197x768 with 2048x768 (or 1024/512 x 768, depending on the model used).
Do we really need to modify the inner visual projection code for VisualBERT? That feels weird. Can someone please help?
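For reference, a minimal sketch of projecting the visual features outside the model instead of editing its internals — the checkpoint name and the untrained `nn.Linear` below are illustrative assumptions, not necessarily the reporter's setup:
```python
import torch
from torch import nn
from transformers import VisualBertModel

model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
visual_features = torch.randn(1, 197, 768)  # e.g. ViT patch embeddings
# Project the 768-dim features to the visual_embedding_dim the checkpoint expects
projection = nn.Linear(768, model.config.visual_embedding_dim)
visual_embeds = projection(visual_features)  # (1, 197, 2048) for this checkpoint
```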
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18228/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18227
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18227/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18227/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18227/events
|
https://github.com/huggingface/transformers/issues/18227
| 1,312,822,643
|
I_kwDOCUB6oc5OQBVz
| 18,227
|
Can't load tokenizer for longt5-xl
|
{
"login": "whiteRa2bit",
"id": 28367451,
"node_id": "MDQ6VXNlcjI4MzY3NDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/28367451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whiteRa2bit",
"html_url": "https://github.com/whiteRa2bit",
"followers_url": "https://api.github.com/users/whiteRa2bit/followers",
"following_url": "https://api.github.com/users/whiteRa2bit/following{/other_user}",
"gists_url": "https://api.github.com/users/whiteRa2bit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whiteRa2bit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whiteRa2bit/subscriptions",
"organizations_url": "https://api.github.com/users/whiteRa2bit/orgs",
"repos_url": "https://api.github.com/users/whiteRa2bit/repos",
"events_url": "https://api.github.com/users/whiteRa2bit/events{/privacy}",
"received_events_url": "https://api.github.com/users/whiteRa2bit/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I have the same issue: When I try to load the model with its tokenizer I get the following error message:\r\n```\r\nOSError: Can't load tokenizer for 'google/long-t5-tglobal-xl'.\r\nIf you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name.\r\nOtherwise, make sure 'google/long-t5-tglobal-xl' is the correct path to a directory containing all relevant files for a T5TokenizerFast tokenizer.\r\n```",
"Indeed, it seems the tokenizer files were not uploaded to that repository. Pinging @stancld, could you mention which tokenizer files should be used here? I'm happy to add these to `google`'s repositories. ",
"@LysandreJik AFAIK, the `LongT5` models use the same tokenizer as the `T5` model. I'd, therefore, just copy the `tokenizer.json` config e.g. from `long-t5-tglobal-large` to the XL repo, and it should work as expected.",
"Sounds good, I'll take care of that. Thanks!",
"Should work now, this was the only repository that needed to be updated (was lacking the tokenizer files). Feel free to close this issue if your problem is solved!"
] | 1,658
| 1,659
| 1,659
|
NONE
| null |
### System Info
transformers version: 4.20.0
Platform: Linux-4.15.0-135-generic
Python version: 3.8.13
PyTorch version (GPU?): torch==1.10.2+cu113
Using GPU in script?: no
Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm using the example provided on this page: https://huggingface.co/google/long-t5-tglobal-xl:
```python
from transformers import AutoTokenizer, LongT5Model
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-xl")
```
### Expected behavior
A tokenizer for a longt5-xl works
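As a stopgap before the repo was fixed, the tokenizer could be loaded from a sibling LongT5 checkpoint, since (per the comments in this thread) LongT5 reuses the T5 tokenizer — a sketch, assuming the large repo ships the files:
```python
from transformers import AutoTokenizer

# Workaround: borrow the tokenizer from a LongT5 repo that has the tokenizer files
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-large")
```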
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18227/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/18227/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18226
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18226/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18226/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18226/events
|
https://github.com/huggingface/transformers/pull/18226
| 1,312,763,715
|
PR_kwDOCUB6oc471BGj
| 18,226
|
Fix `TFSwinSelfAttention` to have relative position index as non-trainable weight
|
{
"login": "harrydrippin",
"id": 5152494,
"node_id": "MDQ6VXNlcjUxNTI0OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5152494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harrydrippin",
"html_url": "https://github.com/harrydrippin",
"followers_url": "https://api.github.com/users/harrydrippin/followers",
"following_url": "https://api.github.com/users/harrydrippin/following{/other_user}",
"gists_url": "https://api.github.com/users/harrydrippin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harrydrippin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harrydrippin/subscriptions",
"organizations_url": "https://api.github.com/users/harrydrippin/orgs",
"repos_url": "https://api.github.com/users/harrydrippin/repos",
"events_url": "https://api.github.com/users/harrydrippin/events{/privacy}",
"received_events_url": "https://api.github.com/users/harrydrippin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger Adding you for a final TF review before merging"
] | 1,658
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes `TFSwinSelfAttention` to have `relative_position_index` as non-trainable weight.
## Problem
When trying to convert `SwinModel` to `TFSwinModel` by using `TFSwinModel.from_pretrained(weight_path, config, from_pt=True)`, I faced the warning below:
```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFSwinModel:
['encoder.layers.2.blocks.0.attention.self.relative_position_index',
'encoder.layers.2.blocks.1.attention.self.relative_position_index',
'encoder.layers.2.blocks.6.attention.self.relative_position_index',
'encoder.layers.2.blocks.7.attention.self.relative_position_index', ...
```
**I checked that `SwinModel` has those keys in its weights while `TFSwinModel` doesn't.** `SwinModel` registered this value as a non-trainable weight by using `self.register_buffer`, but in `TFSwinModel` it was just assigned as a class member (`self.relative_position_index = tf.reduce_sum(...)`).
## Fix
I added `relative_position_index` as a non-trainable weight by using `self.add_weight` in `build()`, so that `relative_position_index` gets a proper key name in the `model.weights` list. I checked that the conversion that previously failed succeeds after applying this fix.
I also tried just changing `self.relative_position_index` to `tf.Variable(..., trainable=False)`, but it didn't work due to the key name: this sets the key name to `relative_position_index:0`, not something like `tf_swin_model/swin/encoder/layers.0/.../self/relative_position_index:0`.
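An illustrative sketch of the pattern (shapes simplified; this is not the actual PR diff):
```python
import tensorflow as tf

class DemoSelfAttention(tf.keras.layers.Layer):
    def __init__(self, window_size=7, **kwargs):
        super().__init__(**kwargs)
        self.window_size = window_size

    def build(self, input_shape):
        # add_weight(trainable=False) registers the tensor under a scoped name
        # in model.weights, unlike a plain class-member assignment.
        self.relative_position_index = self.add_weight(
            name="relative_position_index",
            shape=(self.window_size**2, self.window_size**2),
            dtype=tf.int32,
            initializer=tf.zeros_initializer(),
            trainable=False,
        )
        super().build(input_shape)
```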
## Review
This PR is related to the Swin Transformer and TensorFlow.
TensorFlow: @LysandreJik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18226/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18226",
"html_url": "https://github.com/huggingface/transformers/pull/18226",
"diff_url": "https://github.com/huggingface/transformers/pull/18226.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18226.patch",
"merged_at": 1659699580000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18225
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18225/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18225/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18225/events
|
https://github.com/huggingface/transformers/pull/18225
| 1,312,701,996
|
PR_kwDOCUB6oc470zVq
| 18,225
|
Add canine in documentation_tests_file
|
{
"login": "oneraghavan",
"id": 3041890,
"node_id": "MDQ6VXNlcjMwNDE4OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3041890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oneraghavan",
"html_url": "https://github.com/oneraghavan",
"followers_url": "https://api.github.com/users/oneraghavan/followers",
"following_url": "https://api.github.com/users/oneraghavan/following{/other_user}",
"gists_url": "https://api.github.com/users/oneraghavan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oneraghavan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oneraghavan/subscriptions",
"organizations_url": "https://api.github.com/users/oneraghavan/orgs",
"repos_url": "https://api.github.com/users/oneraghavan/repos",
"events_url": "https://api.github.com/users/oneraghavan/events{/privacy}",
"received_events_url": "https://api.github.com/users/oneraghavan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh Requesting you to review .",
"Sorry, I missed this PR. Look it now.",
"@oneraghavan , the doctest would fail for `canine` at this moment. We have to add the expected values (loss or some outputs).\r\n\r\nWould you like to follow the changes in PR #16441 for `modeling_longformer.py`, more precisely in `LongformerForSequenceClassification` and `LongformerForTokenClassification`. See those changes [here](https://github.com/huggingface/transformers/pull/16441/files).\r\n\r\nDon't hesitate if you have any question. Thank you!",
"@ydshieh Will add those changes. Request you to reopen this PR.",
"@oneraghavan Thank you 🤗 . I reopened the PR. Before continue the work, don't forget to update your local `main` branch first, then rebase your working branch on `main` branch.",
"@ydshieh I request you to reopen the PR again. I have fixed the checkpoints, the tests should pass now.",
"@ydshieh I think this is good to merge. ",
"Yes, agreed @ydshieh . In general any result that is LABEL_0 or a list of those should really not be included.",
"@ydshieh I agree to the part where label_x is not so meaningful. Duplicating the function will make later debugging hard. I will remove the test for token classification. \r\n\r\n@sgugger Can we make add_code_sample_docstrings decorator use the expected output in optional way ? like if the function does not have the expected output, just don't validate the expected output ?",
"I don't think we have an easy way to ignore the doctest in this case. The `>>> predicted_tokens_classes` part in `PT_TOKEN_CLASSIFICATION_SAMPLE` in the file `src/transformers/utils/doc.py` requires some expected outputs for `predicted_tokens_classes`. If there is none, the test just fails.\r\n\r\n\r\n```python\r\n >>> predicted_tokens_classes\r\n {expected_output}\r\n```",
"@ydshieh @sgugger Can we do add a paramerter in add_code_sample_docstrings in function and leave the default to None. Then when places we need to use custom sample, we can call it from there .\r\n\r\nThe function definition will look like this \r\n\r\ndef add_code_sample_docstrings(\r\n *docstr,\r\n processor_class=None,\r\n checkpoint=None,\r\n output_type=None,\r\n config_class=None,\r\n mask=\"[MASK]\",\r\n qa_target_start_index=14,\r\n qa_target_end_index=15,\r\n model_cls=None,\r\n modality=None,\r\n expected_output=\"\",\r\n expected_loss=\"\",\r\n code_sample=\"\",\r\n):\r\n\r\nInside I can use code_sample if it has been passed or look up the code sample from the templates. \r\n\r\nLet me know if this is okay.\r\n",
"I don't see how [the latest change](https://github.com/huggingface/transformers/pull/18225/commits/e1e98b9576c6a331f2b74f730ddd08c6f47421d6) is better than just putting the docstring under `CanineForTokenClassification` directly.\r\n\r\nI will leave @sgugger to give his opinion.",
"We don't need any other tooling here. Either the model falls in the \"automatic docstring\" category or it does not. If it does not, we just write the docstring (with the replace return decorator)."
] | 1,658
| 1,665
| 1,665
|
CONTRIBUTOR
| null |
# What does this PR do?
modeling_canine has doctests set up but is not included in documentation_tests.txt; this PR adds it.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16292
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18225/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18225",
"html_url": "https://github.com/huggingface/transformers/pull/18225",
"diff_url": "https://github.com/huggingface/transformers/pull/18225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18225.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18224
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18224/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18224/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18224/events
|
https://github.com/huggingface/transformers/pull/18224
| 1,312,618,871
|
PR_kwDOCUB6oc470gV7
| 18,224
|
Fix typo in add_new_pipeline.mdx
|
{
"login": "zh-zheng",
"id": 44703133,
"node_id": "MDQ6VXNlcjQ0NzAzMTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/44703133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zh-zheng",
"html_url": "https://github.com/zh-zheng",
"followers_url": "https://api.github.com/users/zh-zheng/followers",
"following_url": "https://api.github.com/users/zh-zheng/following{/other_user}",
"gists_url": "https://api.github.com/users/zh-zheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zh-zheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zh-zheng/subscriptions",
"organizations_url": "https://api.github.com/users/zh-zheng/orgs",
"repos_url": "https://api.github.com/users/zh-zheng/repos",
"events_url": "https://api.github.com/users/zh-zheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/zh-zheng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
fix typo
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18224/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18224",
"html_url": "https://github.com/huggingface/transformers/pull/18224",
"diff_url": "https://github.com/huggingface/transformers/pull/18224.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18224.patch",
"merged_at": 1658382930000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18223
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18223/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18223/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18223/events
|
https://github.com/huggingface/transformers/issues/18223
| 1,312,576,019
|
I_kwDOCUB6oc5OPFIT
| 18,223
|
Tensorflow example squad's run_qa.py misses token_type_ids inputs
|
{
"login": "zhuango",
"id": 5491519,
"node_id": "MDQ6VXNlcjU0OTE1MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5491519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuango",
"html_url": "https://github.com/zhuango",
"followers_url": "https://api.github.com/users/zhuango/followers",
"following_url": "https://api.github.com/users/zhuango/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuango/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuango/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuango/subscriptions",
"organizations_url": "https://api.github.com/users/zhuango/orgs",
"repos_url": "https://api.github.com/users/zhuango/repos",
"events_url": "https://api.github.com/users/zhuango/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuango/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This issue has been resolved by #18451"
] | 1,658
| 1,662
| 1,661
|
NONE
| null |
### System Info
transformers==4.20.1, torch==1.9.0, tensorflow2==2.9.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior
1. Download SQuAD v1.1 fine-tuned BERT-large weights from: https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad
2. Successfully reproduce the inference F1-score by running this [pytorch example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py).
3. But fail to reproduce the inference F1-score by running this [tensorflow2 example](https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py).
4. The reason is that the tensorflow example misses the token_type_ids inputs. I added this input at the following positions to solve the problem:
https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py#L640
` tensor_keys = ["attention_mask", "token_type_ids", "input_ids"]`
https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py#L661
```python
eval_inputs = {
"input_ids": tf.ragged.constant(processed_datasets["validation"]["input_ids"]).to_tensor(),
"token_type_ids": tf.ragged.constant(processed_datasets["validation"]["token_type_ids"]).to_tensor(),
"attention_mask": tf.ragged.constant(processed_datasets["validation"]["attention_mask"]).to_tensor(),
}
```
https://github.com/huggingface/transformers/blob/main/examples/tensorflow/question-answering/run_qa.py#L681
```python
predict_inputs = {
"input_ids": tf.ragged.constant(processed_datasets["test"]["input_ids"]).to_tensor(),
"token_type_ids": tf.ragged.constant(processed_datasets["test"]["token_type_ids"]).to_tensor(),
"attention_mask": tf.ragged.constant(processed_datasets["test"]["attention_mask"]).to_tensor(),
}
```
### Expected behavior
Both the PyTorch and TensorFlow examples produce the same F1-score based on [these weights](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18223/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18222
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18222/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18222/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18222/events
|
https://github.com/huggingface/transformers/issues/18222
| 1,312,163,476
|
I_kwDOCUB6oc5ONgaU
| 18,222
|
Running `examples/pytorch/summarization/run_summarization.py --help` gives `TypeError: can only concatenate tuple (not "str") to tuple`
|
{
"login": "mattf1n",
"id": 13317807,
"node_id": "MDQ6VXNlcjEzMzE3ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/13317807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mattf1n",
"html_url": "https://github.com/mattf1n",
"followers_url": "https://api.github.com/users/mattf1n/followers",
"following_url": "https://api.github.com/users/mattf1n/following{/other_user}",
"gists_url": "https://api.github.com/users/mattf1n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mattf1n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mattf1n/subscriptions",
"organizations_url": "https://api.github.com/users/mattf1n/orgs",
"repos_url": "https://api.github.com/users/mattf1n/repos",
"events_url": "https://api.github.com/users/mattf1n/events{/privacy}",
"received_events_url": "https://api.github.com/users/mattf1n/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Thanks for flagging! The PR mentioned above should fix it."
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
### System Info
- `transformers` version: 4.21.0.dev0
- Platform: macOS-12.3-x86_64-i386-64bit
- Python version: 3.10.0
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (cpu)
- Jax version: 0.3.13
- JaxLib version: 0.3.10
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running `examples/pytorch/summarization/run_summarization.py --help` gives `TypeError: can only concatenate tuple (not "str") to tuple` in my environment.
1. `git clone https://github.com/huggingface/transformers`
2. `cd transformers`
3. `pip install .`
4. `pip install -r examples/pytorch/summarization/requirements.txt`
5. `python examples/pytorch/summarization/run_summarization.py --help`
### Expected behavior
(full traceback)
```
Traceback (most recent call last):
File "/Users/matthewf/transformers/examples/pytorch/summarization/run_summarization.py", line 735, in <module>
main()
File "/Users/matthewf/transformers/examples/pytorch/summarization/run_summarization.py", line 304, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/Users/matthewf/.pyenv/versions/3.9.7/envs/transformers/lib/python3.9/site-packages/transformers/hf_argparser.py", line 217, in parse_args_into_dataclasses
namespace, remaining_args = self.parse_known_args(args=args)
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 1853, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 2062, in _parse_known_args
start_index = consume_optional(start_index)
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 2002, in consume_optional
take_action(action, args, option_string)
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 1930, in take_action
action(self, namespace, argument_values, option_string)
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 1094, in __call__
parser.print_help()
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 2550, in print_help
self._print_message(self.format_help(), file)
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 2534, in format_help
return formatter.format_help()
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 283, in format_help
help = self._root_section.format_help()
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 214, in format_help
item_help = join([func(*args) for func, args in self.items])
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 214, in <listcomp>
item_help = join([func(*args) for func, args in self.items])
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 214, in format_help
item_help = join([func(*args) for func, args in self.items])
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 214, in <listcomp>
item_help = join([func(*args) for func, args in self.items])
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 530, in _format_action
help_text = self._expand_help(action)
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 626, in _expand_help
return self._get_help_string(action) % params
File "/Users/matthewf/.pyenv/versions/3.9.7/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py", line 697, in _get_help_string
help += ' (default: %(default)s)'
TypeError: can only concatenate tuple (not "str") to tuple
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18222/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18221
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18221/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18221/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18221/events
|
https://github.com/huggingface/transformers/pull/18221
| 1,311,730,889
|
PR_kwDOCUB6oc47xUKC
| 18,221
|
Add support for Sagemaker Model Parallel >= 1.10 new checkpoint API
|
{
"login": "viclzhu",
"id": 20961977,
"node_id": "MDQ6VXNlcjIwOTYxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/20961977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viclzhu",
"html_url": "https://github.com/viclzhu",
"followers_url": "https://api.github.com/users/viclzhu/followers",
"following_url": "https://api.github.com/users/viclzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/viclzhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/viclzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viclzhu/subscriptions",
"organizations_url": "https://api.github.com/users/viclzhu/orgs",
"repos_url": "https://api.github.com/users/viclzhu/repos",
"events_url": "https://api.github.com/users/viclzhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/viclzhu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds support for Sagemaker Model Parallel >= 1.10's new checkpoint API while keeping SMP < 1.10 functionality.
* Support loading checkpoints saved with SMP < 1.10 in SMP < 1.10 and SMP >= 1.10
* Support loading checkpoints saved with SMP >= 1.10 in SMP >= 1.10
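A rough sketch of the kind of version gate this implies — the constant name and package name are assumptions, not the PR's actual code:
```python
import importlib.metadata
from packaging import version

# Dispatch between the old and new SMP checkpoint APIs based on the installed version
IS_SAGEMAKER_MP_POST_1_10 = version.parse(
    importlib.metadata.version("smdistributed-modelparallel")
) >= version.parse("1.10")
```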
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18221/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18221",
"html_url": "https://github.com/huggingface/transformers/pull/18221",
"diff_url": "https://github.com/huggingface/transformers/pull/18221.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18221.patch",
"merged_at": 1658382980000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18220
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18220/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18220/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18220/events
|
https://github.com/huggingface/transformers/issues/18220
| 1,311,686,546
|
I_kwDOCUB6oc5OLr-S
| 18,220
|
transformers[tf-cpu] fails because torch isn't installed
|
{
"login": "BrainSlugs83",
"id": 5217366,
"node_id": "MDQ6VXNlcjUyMTczNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5217366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BrainSlugs83",
"html_url": "https://github.com/BrainSlugs83",
"followers_url": "https://api.github.com/users/BrainSlugs83/followers",
"following_url": "https://api.github.com/users/BrainSlugs83/following{/other_user}",
"gists_url": "https://api.github.com/users/BrainSlugs83/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BrainSlugs83/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BrainSlugs83/subscriptions",
"organizations_url": "https://api.github.com/users/BrainSlugs83/orgs",
"repos_url": "https://api.github.com/users/BrainSlugs83/repos",
"events_url": "https://api.github.com/users/BrainSlugs83/events{/privacy}",
"received_events_url": "https://api.github.com/users/BrainSlugs83/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @BrainSlugs83, the issue there is that the `AutoModelForSequenceClassification` is actually a Torch class - if you want the TF version you should use `TFAutoModelForSequenceClassification`. Can you try that change and let me know if it fixes things?",
"I see, that's helpful to know -- I think it would fix it (though not for that specific model). -- And we can close this issue as PEBCAK on my part. \r\n\r\n(Definitely PEBCAK as this is documented, I just didn't notice it when I was trying to figure this out yesterday. 🤦🏻♂️ -- I really appreciate the guidance, so thank you @Rocketknight1. 🙂)\r\n\r\nThough I would like to give the feedback (if you're open to it): \r\n 1. It seems like a missed opportunity for the Auto classes (i.e. it seems like the Auto classes are designed to look up the class that you actually need and hand that back to you, so as to promote code reuse.)\r\n \r\n Therefore, I feel like the auto classes *should* be able to know the difference and just hand you back a TF specific class if you're using TF or a Torch specific class if you're using Torch...\r\n\r\n Because, as-is, this prevents code-reuse (i.e. I can't share the same code between the two frameworks as they have different class names.)\r\n\r\n 2. At the very least, it seems like the error message should be telling me to use a different class name, and not to be reinstalling my dev environment and switching ML stacks. 😅\r\n\r\nThank you again though -- I really appreciate the hand holding here!",
"@BrainSlugs83 Honestly, we like the idea! I'm going to draft a PR - I'll link you when it's ready.",
"@BrainSlugs83 PR is open at #18280!"
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
### System Info
`transformers-cli env` crashes, so I'm typing things manually; let me know if you need something specific.
```
Windows 10=19043.1826
Miniconda3=4.12.0
pip=22.1.2
python=3.9.13
cudatoolkit=11.3.1
cudnn=8.1.0.77
tensorboard=2.9.1
tensorboard-data-server=0.6.1
tensorboard-plugin-wit=1.8.1
tensorflow-cpu=2.9.1
tensorflow-estimator=2.9.0
tensorflow-io-gcs-filesystem=0.26.0
```
### Who can help?
@Rocketknight1 - looks like you are listed for tensorflow. Apologies if this is wrong, or if I misinterpreted something.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Follow the installation instructions for tf-cpu from the [documentation](https://www.tensorflow.org/install/pip#windows).
1. `conda create -n hf python=3.9 pip`
2. `conda activate hf`
3. `pip install transformers[tf-cpu]`
4. Verify the TensorFlow install: `python -c "import tensorflow as tf; print(tf.config.list_physical_devices('CPU'))"`
5. Verify the Hugging Face install: `python -c "from transformers import AutoModelForSequenceClassification; model=AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')"`
It fails, complaining that torch is not installed. Yes, I can create an env with torch, but the tf-cpu extra should be working with TensorFlow, not torch.
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Mikey\miniconda3\envs\hf\lib\site-packages\transformers\utils\import_utils.py", line 821, in __getattr__
requires_backends(cls, cls._backends)
File "C:\Users\Mikey\miniconda3\envs\hf\lib\site-packages\transformers\utils\import_utils.py", line 809, in requires_backends
raise ImportError("".join(failed))
ImportError:
AutoModelForSequenceClassification requires the PyTorch library but it was not found in your environment. Checkout the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.
```
I have also tried installing CUDA and CuDNN, but it did not have any effect.
`conda install -c conda-forge cudatoolkit=11.3 cudnn=8.1.0`
### Expected behavior
The TensorFlow extra of Transformers should work with TensorFlow and not raise exceptions about torch being missing.
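For reference, a sketch using the TensorFlow counterpart of the auto class above (assuming the checkpoint ships TF weights):
```python
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
```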
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18220/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18219
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18219/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18219/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18219/events
|
https://github.com/huggingface/transformers/issues/18219
| 1,311,531,901
|
I_kwDOCUB6oc5OLGN9
| 18,219
|
Tokeniser support in Java
|
{
"login": "rkoystart",
"id": 64691602,
"node_id": "MDQ6VXNlcjY0NjkxNjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/64691602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rkoystart",
"html_url": "https://github.com/rkoystart",
"followers_url": "https://api.github.com/users/rkoystart/followers",
"following_url": "https://api.github.com/users/rkoystart/following{/other_user}",
"gists_url": "https://api.github.com/users/rkoystart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rkoystart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rkoystart/subscriptions",
"organizations_url": "https://api.github.com/users/rkoystart/orgs",
"repos_url": "https://api.github.com/users/rkoystart/repos",
"events_url": "https://api.github.com/users/rkoystart/events{/privacy}",
"received_events_url": "https://api.github.com/users/rkoystart/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### Feature request
Currently I have tested the sentence-transformers paraphrase-multilingual-MiniLM-L12-v2 model in Python. The model seems to perform very well. I want to use the model in Java, so I converted it to an ONNX model. But I could not find a way to use the tokeniser in Java, or an equivalent tokeniser library for Java.
So I would like to know: is there a way to use the tokeniser in Java?
### Motivation
Tokeniser support in Java
### Your contribution
.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18219/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18218
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18218/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18218/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18218/events
|
https://github.com/huggingface/transformers/pull/18218
| 1,311,518,992
|
PR_kwDOCUB6oc47wj9k
| 18,218
|
Generate: validate arguments
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> For 2, the way you chose feels very very magical with lots of ad-hoc code that is going to be hard to maintain.\r\n\r\nYeah, I agree, that was the number 1 reason why I left so many comments and caveats. It works but would be annoying to maintain.\r\n\r\n(@sgugger) If I got it right, the suggestion was to pop used arguments from `generation_inputs` as we call functions, correct? Something like `consume_arguments(generation_inputs, <function that was just called>)` after most calls, with a small validation function at the end of generate?\r\n\r\nMeanwhile, I'm going to do as suggested, and move the model kwargs validation to its own PR :)",
"> Something like consume_arguments(generation_inputs, <function that was just called>) after most calls, with a small validation function at the end of generate\r\n\r\nNo, something more like `result, generation_inputs = <function to call>(generation_inputs)`",
"Closing in place of two PRs:\r\n- https://github.com/huggingface/transformers/pull/18261 for the model_kwargs validation\r\n- TBD for the validation of other arguments, as per comments above"
] | 1,658
| 1,666
| 1,658
|
MEMBER
| null |
# What does this PR do?
NOTE: this PR is very experimental, feel free to trash it in the review process :)
A common cause of issues in `generate` is that it does not behave as expected: arguments can be silently ignored by the selected generation submethod (greedy_search, sample, ...). Typos also often fly under the radar, as the method accepts `**model_kwargs`, which in turn are passed to models that also accept `**kwargs`.
This PR adds argument validation to `generate` in two separate steps:
1. `model_kwargs` are verified as soon as the method is called. Only arguments that the model actually uses in `prepare_inputs_for_generation` or in its forward pass are accepted, which means typos are caught immediately. The exception enumerates all arguments that triggered this failed check, so the user can correct them (a sketch of this check follows the list).
2. Before calling the appropriate generate submethod (which is picked from the arguments), it checks that all passed arguments will actually be used. If the user passes an argument that is not used by that particular submethod, it throws an exception indicating the submethod that was triggered and the unaccepted arguments, so the user can fix either side (pick another submethod or correct the arguments).
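A minimal sketch of the first check, assuming a hypothetical helper name (`validate_model_kwargs`) and plain `inspect` introspection rather than the PR's exact code:
```python
import inspect

def validate_model_kwargs(model, model_kwargs):
    # Hypothetical helper: collect every argument the model can actually consume,
    # then reject anything in `model_kwargs` that falls outside that set.
    accepted = set(inspect.signature(model.prepare_inputs_for_generation).parameters)
    accepted |= set(inspect.signature(model.forward).parameters)
    unused = [key for key in model_kwargs if key not in accepted]
    if unused:
        raise ValueError(f"The following `model_kwargs` are not used by the model: {unused}")
```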
Although I think the checks are super useful, the code around them is not the prettiest. The first check has some logic for edge cases, and the second requires passing the list of methods that will be called before the submethod in question. The PR is heavily commented in GH, feel free to cast your judgment!
P.S.: (seemingly) unrelated accelerate tests are failing in `run_examples_torch`
### Related issues
- https://github.com/huggingface/transformers/issues/18130
- https://github.com/huggingface/transformers/pull/17196
- (many other issues where users were confused because they were trying to use certain arguments that had no effect on the picked submethod)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18218/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18218",
"html_url": "https://github.com/huggingface/transformers/pull/18218",
"diff_url": "https://github.com/huggingface/transformers/pull/18218.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18218.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18217
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18217/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18217/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18217/events
|
https://github.com/huggingface/transformers/issues/18217
| 1,311,297,853
|
I_kwDOCUB6oc5OKNE9
| 18,217
|
BLOOM model parameters mentioned in hub-docs
|
{
"login": "muhammad-ahmed-ghani",
"id": 63394104,
"node_id": "MDQ6VXNlcjYzMzk0MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/63394104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muhammad-ahmed-ghani",
"html_url": "https://github.com/muhammad-ahmed-ghani",
"followers_url": "https://api.github.com/users/muhammad-ahmed-ghani/followers",
"following_url": "https://api.github.com/users/muhammad-ahmed-ghani/following{/other_user}",
"gists_url": "https://api.github.com/users/muhammad-ahmed-ghani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muhammad-ahmed-ghani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muhammad-ahmed-ghani/subscriptions",
"organizations_url": "https://api.github.com/users/muhammad-ahmed-ghani/orgs",
"repos_url": "https://api.github.com/users/muhammad-ahmed-ghani/repos",
"events_url": "https://api.github.com/users/muhammad-ahmed-ghani/events{/privacy}",
"received_events_url": "https://api.github.com/users/muhammad-ahmed-ghani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
<h3>The model mentioned on the Hugging Face Hub is actually the 176B-parameter BLOOM model, but it is written as 175B in "docs/source/en/model_doc/bloom.mdx"</h3>
<h4>Visit the link below to confirm</h4>
[link](https://huggingface.co/docs/transformers/model_doc/bloom)

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18217/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18216
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18216/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18216/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18216/events
|
https://github.com/huggingface/transformers/issues/18216
| 1,311,243,475
|
I_kwDOCUB6oc5OJ_zT
| 18,216
|
Support private (Opacus) training of BART by altering BartLearnedPositionalEmbedding's forward method
|
{
"login": "donebydan",
"id": 15520428,
"node_id": "MDQ6VXNlcjE1NTIwNDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/15520428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donebydan",
"html_url": "https://github.com/donebydan",
"followers_url": "https://api.github.com/users/donebydan/followers",
"following_url": "https://api.github.com/users/donebydan/following{/other_user}",
"gists_url": "https://api.github.com/users/donebydan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donebydan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donebydan/subscriptions",
"organizations_url": "https://api.github.com/users/donebydan/orgs",
"repos_url": "https://api.github.com/users/donebydan/repos",
"events_url": "https://api.github.com/users/donebydan/events{/privacy}",
"received_events_url": "https://api.github.com/users/donebydan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
CONTRIBUTOR
| null |
### Feature request
Alter the signature of `BartLearnedPositionalEmbedding`'s forward method to take a `torch.Tensor` instead of `torch.Size` input.
### Motivation
This will support private fine-tuning of BART via DP-SGD in Opacus. When using Opacus on a custom `nn.Module` like `BartLearnedPositionalEmbedding`, there is a fairly reasonable assumption that layers take tensors as input. This assumption falls over with `BartLearnedPositionalEmbedding`, since it takes a `torch.Size` input instead.
In particular, `opacus/grad_sample/grad_sample_module.py` line 190 (the `capture_activations_hook` method) tries to detach the input from device via:
`module.activations.append(forward_input[0].detach())`
If we pass the tensor instead, we can start fine-tuning BART-type summarization models with differential privacy.
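A minimal sketch of the proposed signature change, assuming the module keeps its existing `offset = 2` convention (an illustration of the idea, not necessarily the final patch):
```python
import torch
import torch.nn as nn

class BartLearnedPositionalEmbedding(nn.Embedding):
    """Sketch: take the input tensor itself instead of its `torch.Size`."""

    def __init__(self, num_embeddings: int, embedding_dim: int):
        # BART reserves the first two position ids, hence the offset
        self.offset = 2
        super().__init__(num_embeddings + self.offset, embedding_dim)

    def forward(self, input_ids: torch.Tensor, past_key_values_length: int = 0):
        bsz, seq_len = input_ids.shape[:2]
        positions = torch.arange(
            past_key_values_length,
            past_key_values_length + seq_len,
            dtype=torch.long,
            device=input_ids.device,
        )
        return super().forward(positions + self.offset)
```
With a `torch.Tensor` as `forward_input[0]`, the Opacus hook's `forward_input[0].detach()` call works as intended.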
### Your contribution
A few lines of code need to be changed in `modeling_bart.py`. In particular, the `forward` signature of `BartLearnedPositionalEmbedding.forward()` and references to this method.
I already have a change with BART-related tests passing. More than happy to create a PR :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18216/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18215
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18215/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18215/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18215/events
|
https://github.com/huggingface/transformers/pull/18215
| 1,311,108,849
|
PR_kwDOCUB6oc47vHem
| 18,215
|
[Don't merge] Debug testing
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,662
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
Debug
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18215/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18215",
"html_url": "https://github.com/huggingface/transformers/pull/18215",
"diff_url": "https://github.com/huggingface/transformers/pull/18215.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18215.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18214
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18214/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18214/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18214/events
|
https://github.com/huggingface/transformers/issues/18214
| 1,311,064,918
|
I_kwDOCUB6oc5OJUNW
| 18,214
|
Save and load
|
{
"login": "LeninGF",
"id": 33504041,
"node_id": "MDQ6VXNlcjMzNTA0MDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/33504041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeninGF",
"html_url": "https://github.com/LeninGF",
"followers_url": "https://api.github.com/users/LeninGF/followers",
"following_url": "https://api.github.com/users/LeninGF/following{/other_user}",
"gists_url": "https://api.github.com/users/LeninGF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeninGF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeninGF/subscriptions",
"organizations_url": "https://api.github.com/users/LeninGF/orgs",
"repos_url": "https://api.github.com/users/LeninGF/repos",
"events_url": "https://api.github.com/users/LeninGF/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeninGF/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Rocketknight1 @gante",
"Hi @LeninGF 👋 I had a look into your notebooks but they are very long, which makes it very hard to pin the problem. Would you be able to share a short notebook (as short as possible) where the problem can be reproduced? Thanks :)",
"Hi Huggingface/Transformers I will do it . The only problem is that I will\nuse other dataset different from the one I am working because if privacy\npolicy... Give some hours to upload it\n\nOn Mon, Aug 1, 2022, 4:18 PM Joao Gante ***@***.***> wrote:\n\n> Hi @LeninGF <https://github.com/LeninGF> 👋 I had a look into your\n> notebooks but they are very long, which makes it very hard to pin the\n> problem. Would you be able to share a short notebook (as short as possible)\n> where the problem can be reproduced? Thanks :)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/18214#issuecomment-1201732303>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AH7TWKIUSR6BG5KVBRGMBO3VXA5KPANCNFSM54DTBWPA>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"Hi Joao Please you can check my training code here:\nhttps://colab.research.google.com/gist/LeninGF/89234ab4ba45147d34b8e8657caff761/model_train_huggingface_gitnew.ipynb\n\nI think that the problem should happen with any dataset used. I am using a\nmulti labelled dataset. For company reasons I am not yet able to share it.\nPlease let me know if you would need a sample of it to reproduce the\nproblem.\n\nThe following colab shows how I am trying to train the model\nhttps://colab.research.google.com/gist/LeninGF/89234ab4ba45147d34b8e8657caff761/model_train_huggingface_gitnew.ipynb\n\nThe following colab shows that I am trying to load the weights of the\ntrained model to test it again with the test set by using a new\ncolab notebook\n\nhttps://colab.research.google.com/gist/LeninGF/08b2824b73692134ec27979a7e6011ea/testingsavedfthfmodel.ipynb\n\nyou can reach me at ***@***.*** too\n\nThe problem is as follows: it does not matter how I train the model. While\nthe notebook where it was trained is active, you can see that the\nmodel.evaluate(test_dataset) achieves a satisfactory 0.8 accuracy (even\nthough there is some overfitting) However, once I saved the model and I try\nto load it again, it does not work and you can see that repeating the\nweights load, model compile and model evaluate gives me an accuracy off 0.08\n\nthanks for your kind help. If It is not to bother you a lot I am trying to\nreplicate this problem using the tweet emotion dataset that\nhuggingface has, I can send you the gist if you agree. I have already\ntrained the model and I am about to test if the downloaded model will be\nworking\n\n\n\nAtentamente,\n\nLenin Falconí Estrada\n\n\nEl lun, 1 ago 2022 a las 16:18, Joao Gante ***@***.***>)\nescribió:\n\n> Hi @LeninGF <https://github.com/LeninGF> 👋 I had a look into your\n> notebooks but they are very long, which makes it very hard to pin the\n> problem. Would you be able to share a short notebook (as short as possible)\n> where the problem can be reproduced? Thanks :)\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/18214#issuecomment-1201732303>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AH7TWKIUSR6BG5KVBRGMBO3VXA5KPANCNFSM54DTBWPA>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,662
| 1,662
|
NONE
| null |
Hello community, I am having the same problem described with saving and loading a fine-tuned model using transformers and TensorFlow. I have used save_pretrained, save_weights, and model.save with save_format=tf. I have been able to load the model with from_pretrained, but it loads no weights, and when I perform evaluation the performance is far lower than during training while the fine-tuned model is in memory. You can check my code on GitHub in LeninGF/clasificaion_robos_fge, in model_train_huggingface.ipynb and evaluate Notebook.ipynb.
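For reference, a minimal save/load round trip with the standard API (the model name, label count, and paths below are placeholders). Reloading with a different head class than the one used for training re-initializes the task head, which is one common cause of the symptom described:
```python
from transformers import TFAutoModelForSequenceClassification

# fine-tune
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)
# ... model.compile(...) / model.fit(...) ...
model.save_pretrained("./my_finetuned_model")  # writes config.json + tf_model.h5

# later, in a fresh session -- use the *same* head class to keep the trained head
reloaded = TFAutoModelForSequenceClassification.from_pretrained("./my_finetuned_model")
```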
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18214/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18213
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18213/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18213/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18213/events
|
https://github.com/huggingface/transformers/pull/18213
| 1,310,786,678
|
PR_kwDOCUB6oc47t_R1
| 18,213
|
Change to FlavaProcessor in PROCESSOR_MAPPING_NAMES
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Test error is unrelated - merge now\r\n\r\n```\r\nerror: failed to fetch some objects from 'https://user:hf_94wBhPGp6KrrTH3KDchhKpRxZwd6dmHWLL@hub-ci.huggingface.co/__DUMMY_TRANSFORMERS_USER__/test-trainer-step.git/info/lfs\r\n```"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
`FLAVAProcessor` in `PROCESSOR_MAPPING_NAMES` should be `FlavaProcessor`.
There is no problem when using `AutoProcessor.from_pretrained`, but `PROCESSOR_MAPPING[FlavaConfig]` will fail.
### Errors
```python
from transformers import PROCESSOR_MAPPING, FlavaConfig, CLIPConfig, LayoutLMv2Config
processor_types = PROCESSOR_MAPPING[CLIPConfig]
print(processor_types)
processor_types = PROCESSOR_MAPPING[LayoutLMv2Config]
print(processor_types)
# This fails
processor_types = PROCESSOR_MAPPING[FlavaConfig]
print(processor_types)
```
with errors
```bash
Traceback (most recent call last):
File "C:\Users\33611\Desktop\Project\transformers\temp.py", line 9, in <module>
processor_types = PROCESSOR_MAPPING[FlavaConfig]
File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 565, in __getitem__
return self._load_attr_from_module(model_type, model_name)
File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 579, in _load_attr_from_module
return getattribute_from_module(self._modules[module_name], attr)
File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 539, in getattribute_from_module
return getattribute_from_module(transformers_module, attr)
File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 539, in getattribute_from_module
return getattribute_from_module(transformers_module, attr)
File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 539, in getattribute_from_module
return getattribute_from_module(transformers_module, attr)
[Previous line repeated 986 more times]
File "C:\Users\33611\Desktop\Project\transformers\src\transformers\models\auto\auto_factory.py", line 538, in getattribute_from_module
transformers_module = importlib.import_module("transformers")
File "C:\Users\33611\miniconda3\envs\py39\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load
File "<frozen importlib._bootstrap>", line 157, in __enter__
File "<frozen importlib._bootstrap>", line 183, in _get_module_lock
File "<frozen importlib._bootstrap>", line 59, in __init__
RecursionError: maximum recursion depth exceeded while calling a Python object
Process finished with exit code 1
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18213/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18213",
"html_url": "https://github.com/huggingface/transformers/pull/18213",
"diff_url": "https://github.com/huggingface/transformers/pull/18213.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18213.patch",
"merged_at": 1658313014000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18212
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18212/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18212/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18212/events
|
https://github.com/huggingface/transformers/issues/18212
| 1,310,731,031
|
I_kwDOCUB6oc5OICsX
| 18,212
|
Private model usage problem
|
{
"login": "micktsai",
"id": 37768110,
"node_id": "MDQ6VXNlcjM3NzY4MTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/37768110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/micktsai",
"html_url": "https://github.com/micktsai",
"followers_url": "https://api.github.com/users/micktsai/followers",
"following_url": "https://api.github.com/users/micktsai/following{/other_user}",
"gists_url": "https://api.github.com/users/micktsai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/micktsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/micktsai/subscriptions",
"organizations_url": "https://api.github.com/users/micktsai/orgs",
"repos_url": "https://api.github.com/users/micktsai/repos",
"events_url": "https://api.github.com/users/micktsai/events{/privacy}",
"received_events_url": "https://api.github.com/users/micktsai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Same problem, something wrong with private models with unsupported architecture. It doesn't see modeling file."
] | 1,658
| 1,663
| 1,661
|
NONE
| null |
I uploaded a private model, and when I try to use it via `AutoModel.from_pretrained` I get the error shown below.
I have run huggingface-cli login with an access token that has the read grant, and I use `trust_remote_code=True` as recommended, but it still fails with a 401 error.
How can I use my private model?
```
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Could not locate the model.py inside micktsai/resnet50_try.
Traceback (most recent call last):
  File "test.py", line 9, in <module>
    model = AutoModel.from_pretrained("micktsai/resnet50_try", trust_remote_code=True, use_auth_token=True)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\models\auto\auto_factory.py", line 441, in from_pretrained
    pretrained_model_name_or_path, module_file + ".py", class_name, **kwargs
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\dynamic_module_utils.py", line 382, in get_class_from_dynamic_module
    local_files_only=local_files_only,
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\dynamic_module_utils.py", line 239, in get_cached_module_file
    use_auth_token=use_auth_token,
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py", line 292, in cached_path
    local_files_only=local_files_only,
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py", line 495, in get_from_cache
    _raise_for_status(r)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py", line 418, in _raise_for_status
    f"401 Client Error: Repository not found for url: {response.url}. "
transformers.utils.hub.RepositoryNotFoundError: 401 Client Error: Repository not found for url: https://huggingface.co/micktsai/resnet50_try/resolve/main/model.py. If the repo is private, make sure you are authenticated.

C:\Users\User\Downloads>py test.py
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Could not locate the model.py inside micktsai/resnet50_try.
Traceback (most recent call last):
  File "test.py", line 10, in <module>
    "micktsai/resnet50_try", use_auth_token=True, trust_remote_code=True)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\models\auto\auto_factory.py", line 441, in from_pretrained
    pretrained_model_name_or_path, module_file + ".py", class_name, **kwargs
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\dynamic_module_utils.py", line 382, in get_class_from_dynamic_module
    local_files_only=local_files_only,
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\dynamic_module_utils.py", line 239, in get_cached_module_file
    use_auth_token=use_auth_token,
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py", line 292, in cached_path
    local_files_only=local_files_only,
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py", line 495, in get_from_cache
    _raise_for_status(r)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\transformers\utils\hub.py", line 418, in _raise_for_status
    f"401 Client Error: Repository not found for url: {response.url}. "
transformers.utils.hub.RepositoryNotFoundError: 401 Client Error: Repository not found for url: https://huggingface.co/micktsai/resnet50_try/resolve/main/model.py. If the repo is private, make sure you are authenticated.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18212/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18211
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18211/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18211/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18211/events
|
https://github.com/huggingface/transformers/issues/18211
| 1,310,626,958
|
I_kwDOCUB6oc5OHpSO
| 18,211
|
The problem in BATCH generation of GPT model
|
{
"login": "yupei9",
"id": 63060915,
"node_id": "MDQ6VXNlcjYzMDYwOTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/63060915?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yupei9",
"html_url": "https://github.com/yupei9",
"followers_url": "https://api.github.com/users/yupei9/followers",
"following_url": "https://api.github.com/users/yupei9/following{/other_user}",
"gists_url": "https://api.github.com/users/yupei9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yupei9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yupei9/subscriptions",
"organizations_url": "https://api.github.com/users/yupei9/orgs",
"repos_url": "https://api.github.com/users/yupei9/repos",
"events_url": "https://api.github.com/users/yupei9/events{/privacy}",
"received_events_url": "https://api.github.com/users/yupei9/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nSee this thread for batched generation: https://github.com/huggingface/transformers/pull/7552#issue-714062850",
"> Hi,\r\n> \r\n> See this thread for batched generation: [#7552 (comment)](https://github.com/huggingface/transformers/pull/7552#issue-714062850)\r\n\r\nThanks a lot! \r\nAnd could you please tell me whether the current version supports correct sampling generation with batched setting? Thanks! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing this as the issue seems resolved."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
When I tried to use GPT models (including GPT-2, GPT-NEO-2.7B, GPT-J-6B, and GPT-NEOX) to generate text, I found some strange results.
When I set the batch size to 1, all results are normal.
BUT when I set the batch_size to more than 1, such as 4, 8, ..., the generated text from GPT-J-6B and GPT-NEOX is abnormal, containing a large number of repeated consecutive letters or words, for example "AAAAAAAAAAAAAAA" or "The The The The The The The The The".
I cannot find the root cause of this problem. Could you please give some suggestions to solve it? Thank you!
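A common fix for batched generation with decoder-only models (see the thread linked in the comments above) is to left-pad and pass the attention mask; a minimal sketch:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.padding_side = "left"            # decoder-only models must be left-padded
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models have no pad token by default

model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer(["Hello, my name is", "The weather today"], return_tensors="pt", padding=True)
outputs = model.generate(**inputs, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```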
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18211/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18210
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18210/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18210/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18210/events
|
https://github.com/huggingface/transformers/issues/18210
| 1,310,616,120
|
I_kwDOCUB6oc5OHmo4
| 18,210
|
TFAutoModel does not work with gpt2 and .generate
|
{
"login": "ehrencrona",
"id": 1862212,
"node_id": "MDQ6VXNlcjE4NjIyMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1862212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehrencrona",
"html_url": "https://github.com/ehrencrona",
"followers_url": "https://api.github.com/users/ehrencrona/followers",
"following_url": "https://api.github.com/users/ehrencrona/following{/other_user}",
"gists_url": "https://api.github.com/users/ehrencrona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehrencrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehrencrona/subscriptions",
"organizations_url": "https://api.github.com/users/ehrencrona/orgs",
"repos_url": "https://api.github.com/users/ehrencrona/repos",
"events_url": "https://api.github.com/users/ehrencrona/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehrencrona/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @gante as well",
"Hi @ehrencrona 👋 The correct class to use for generation with decoder-only models is `TFAutoModelForCausalLM`. You can use it the same way as `TFAutoModel` but, contrarily to it, it has a language modeling head.\r\n\r\nAs for the suggested fixes -- I agree `generate` should not exist here (or better yet, that the error should be informative, as new users might not know which class to use). I've added that to the list of generate goodies to add in the near future :) \r\n\r\nThank you for flagging the issue and for the suggestions!"
] | 1,658
| 1,663
| 1,663
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import TFAutoModel, AutoTokenizer
model = TFAutoModel.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer(["hey there"], return_tensors='tf')
model.generate(input_ids=tokens['input_ids'], attention_mask=tokens['attention_mask'])
```
will return `AttributeError: 'TFBaseModelOutputWithPastAndCrossAttentions' object has no attribute 'logits'`
Full stack trace:
```
Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-45-eb17c4f14b9f>](https://localhost:8080/#) in <module>()
----> 1 output = model.generate(input_ids=tokens['input_ids'], attention_mask=tokens['attention_mask'])
2 output
3 frames
[/usr/local/lib/python3.7/dist-packages/transformers/generation_tf_utils.py](https://localhost:8080/#) in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, output_scores, output_attentions, output_hidden_states, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, **model_kwargs)
594 return_dict_in_generate=return_dict_in_generate,
595 forced_bos_token_id=forced_bos_token_id,
--> 596 forced_eos_token_id=forced_eos_token_id,
597 )
598
[/usr/local/lib/python3.7/dist-packages/transformers/generation_tf_utils.py](https://localhost:8080/#) in _generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, seed, output_scores, output_attentions, output_hidden_states, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, **model_kwargs)
1589 output_scores=output_scores,
1590 return_dict_in_generate=return_dict_in_generate,
-> 1591 **model_kwargs,
1592 )
1593 elif is_sample_gen_mode:
[/usr/local/lib/python3.7/dist-packages/transformers/generation_tf_utils.py](https://localhost:8080/#) in greedy_search(self, input_ids, max_length, pad_token_id, eos_token_id, logits_processor, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs)
2082 # 1st generation step has to be run before to initialize `past`
2083 generated, finished_sequences, next_tokens, cur_len, model_kwargs = greedy_search_body_fn(
-> 2084 generated, finished_sequences, input_ids, cur_len, model_kwargs
2085 )
2086
[/usr/local/lib/python3.7/dist-packages/transformers/generation_tf_utils.py](https://localhost:8080/#) in greedy_search_body_fn(generated, finished_sequences, next_tokens, cur_len, model_kwargs)
2025 output_hidden_states=output_hidden_states,
2026 )
-> 2027 next_token_logits = outputs.logits[:, -1]
2028
2029 # Store scores, attentions and hidden_states when required
AttributeError: 'TFBaseModelOutputWithPastAndCrossAttentions' object has no attribute 'logits'
```
### Expected behavior
Not sure if you consider this to be a bug, but it is a stumbling block for beginners.
If you use `TFAutoModel` to load `gpt2` you will get a `TFGPT2Model`. This class has a `generate` method but it doesn't work (because it expects a linear layer to generate logits, i.e. it only works on `TFGPT2LMHeadModel`).
I'd argue that it's a bug because if `TFGPT2Model` doesn't support generation, then it shouldn't have a `generate` method.
Possible alternative fixes:
* Throw an easier-to-understand error in this situation
* Make `TFGPT2Model` not implement `generate`
* Have `TFAutoModel` return a `TFGPT2LMHeadModel` (though this would be a breaking change)
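For reference, a working variant of the reproduction, using the class the maintainers point to in the comments above (`TFAutoModelForCausalLM`, which includes the language modeling head):
```python
from transformers import TFAutoModelForCausalLM, AutoTokenizer

model = TFAutoModelForCausalLM.from_pretrained("gpt2")  # has the LM head, so outputs carry `logits`
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer(["hey there"], return_tensors="tf")
output = model.generate(input_ids=tokens["input_ids"], attention_mask=tokens["attention_mask"])
print(tokenizer.batch_decode(output))
```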
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18210/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18209
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18209/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18209/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18209/events
|
https://github.com/huggingface/transformers/issues/18209
| 1,310,526,822
|
I_kwDOCUB6oc5OHQ1m
| 18,209
|
Argument inconsistency between processor and tokenizer
|
{
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"No `from_slow` is an internal argument that determines whether the tokenizer should be loaded from slow tokenizer files or a fast tokenizer file. That is why you're not finding it in the documentation for instance.",
"@sgugger Oh thanks. I confused the purpose of `from_slow`. Now it looks that I can only use `use_fast=False` to get the slow version."
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
### System Info
transformers on master
### Who can help?
@sgugger who modified the line lastly from git blame
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In `processing_utils.py`, to use the slow version, one needs to specify `use_fast=False` in `from_pretrained`,
https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/processing_utils.py#L222-L226
while in `tokenization_utils_base.py`, to use the slow version, one needs to specify `from_slow=True` in `from_pretrained`
https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/tokenization_utils_base.py#L1804-L1815
This inconsistency leads to strange usages.
For example, when we want to use the slow version of LayoutLMv2 processor, we have to pass both arguments simultaneously:
```python
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", use_fast=False, from_slow=True)
```
### Expected behavior
I suggest we change the option in `processing_utils.py` from `use_fast` to `from_slow`.
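As clarified in the comments above, `from_slow` is an internal argument, so the supported way to request the slow tokenizer through a processor is `use_fast=False` alone:
```python
from transformers import LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", use_fast=False)
```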
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18209/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18208
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18208/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18208/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18208/events
|
https://github.com/huggingface/transformers/issues/18208
| 1,310,425,872
|
I_kwDOCUB6oc5OG4MQ
| 18,208
|
length_penalty behavior is inconsistent with documentation
|
{
"login": "artidoro",
"id": 11949572,
"node_id": "MDQ6VXNlcjExOTQ5NTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/11949572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/artidoro",
"html_url": "https://github.com/artidoro",
"followers_url": "https://api.github.com/users/artidoro/followers",
"following_url": "https://api.github.com/users/artidoro/following{/other_user}",
"gists_url": "https://api.github.com/users/artidoro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/artidoro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/artidoro/subscriptions",
"organizations_url": "https://api.github.com/users/artidoro/orgs",
"repos_url": "https://api.github.com/users/artidoro/repos",
"events_url": "https://api.github.com/users/artidoro/events{/privacy}",
"received_events_url": "https://api.github.com/users/artidoro/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @gante as well",
"Hi @artidoro 👋 Thank you for raising this issue! \r\n\r\nThere are actually two distinct problems, the first one was already on my radar:\r\n1. `length_penalty` is only used with `beam_search`-based generation techniques. `facebook/bart-large-cnn` uses them by default, and `gpt2` doesn't. So, in fact, `length_penalty` has no effect on `gpt2`, the different results you're seeing are a consequence of sampling being on by default for `gpt2` (all these hidden defaults are also going through a deprecation phase 😉 ) 👉 solution: raise warnings/exceptions when these options have no effect (already being worked on)\r\n2. The docstring really describes the opposite of what happens. As described in #4915: larger `length_penalty` -> larger denominator, increasing with output length -> larger score (because it is a negative value), increasing with output length -> benefits long outputs 👉 solution: fix the docstring (@patrickvonplaten FYI)\r\n\r\nI'll keep this issue open until the 2nd problem gets fixed.",
"Confirming point 2.) @gante we could directly fix this here: https://github.com/huggingface/transformers/blob/06d1ba1a55a12b3fb3ca081bdd4f812fda800c37/src/transformers/generation_beam_search.py#L140 as well."
] | 1,658
| 1,663
| 1,663
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.120+-x86_64-with-glibc2.27
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu116 (True)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`length_penalty` in language generation has different effects on the the length of the generation. Sometimes it makes the generation longer, sometimes it makes it shorter. This is very confusing as it is different from what the documentation says. Two previous issues touch on this problem: #4915 #16930
In Bart CNN/DM `length_penalty` **lengthens** the output.
```python
from transformers import pipeline
summarizer = pipeline("summarization", model='facebook/bart-large-cnn')
ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""
print(summarizer(ARTICLE, max_length=512, min_length=30, do_sample=False, length_penalty=1))
print(summarizer(ARTICLE, max_length=512, min_length=30, do_sample=False, length_penalty=2))
```
Output:
`[{'summary_text': 'Liana Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men, and at one time, she was married to eight men at once.'}]`
`[{'summary_text': 'Liana Barrientos, 39, is charged with two counts of "offering a false instrument for filing in the first degree" In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men.'}]`
In GPT-2 increasing `length_penalty` **shortens** the output.
```python
from transformers import pipeline
generator = pipeline('text-generation', model='gpt2', device=5)
print(generator("The White man worked as a", max_length=512, length_penalty=1))
print(generator("The White man worked as a", max_length=512, length_penalty=2))
```
Output:
`[{'generated_text': 'The White man worked as a receptionist for the British Consulate in Cairo and returned to Alexandria, where he was promoted to a military officer in 1953; in 1960 he worked as a consular officer, serving as secretary of state to President John F. Kennedy, and as a consul. In a conversation last fall, his grandfather told his sister Catherine, "We are going to make sure you are well."\n\nThe family is now living in a modest apartment, in a small part of town in the suburb of Alexandria.\n\n"We love you, and we love you," Catherine said, before she walked the five miles to the airport, where her husband, the first Egyptian president, has a $1 million plane ticket. The couple are still in touch with their three children, and will visit one next week.\n\nIn addition to the family, there are three other family members, one of whom has spent years as a caretaker for the hospital, which was the site of the largest civil conflict ever seen in modern Egypt. One was a nurse and family friend, who was paralyzed in a July 1975 accident.\n\n"It\'s just unbelievable," he told a reporter.\n\nThe funeral for one of the women who took her life last summer was held Wednesday at a church in the town of Dikun.\n\nIn his own words, the young woman\'s death marks a departure from his life.\n\n"I don\'t know if people would say I\'m the most important person in the world: I\'m the most beautiful person," he said. "But I did, but I will never forget that."'}]`
`[{'generated_text': "The White man worked as a mechanic.\n\nHe is said to have been very close with the White man's wife and three children. Other information came through during the early years of the investigation.\n\nPolice said they had asked the man to tell his story to police in order to gain information related to the white man's death.\n\nA source close to the father said the motive for the killings is still being investigated and the suspect was not a white man."}]`
### Expected behavior
Effect of `length_penalty` to be consistent with [documentation](https://huggingface.co/docs/transformers/v4.20.1/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.length_penalty).
Currently the documentation says:
"Exponential penalty to the length. 1.0 means that the beam score is penalized by the sequence length. 0.0 means no penalty. Set to values < 0.0 in order to encourage the model to generate longer sequences, to a value > 0.0 in order to encourage the model to produce shorter sequences."
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18208/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18207
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18207/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18207/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18207/events
|
https://github.com/huggingface/transformers/issues/18207
| 1,310,356,980
|
I_kwDOCUB6oc5OGnX0
| 18,207
|
torch.jit.trace can trace shared weights, no need to clone weights when tracing
|
{
"login": "LSC527",
"id": 34333110,
"node_id": "MDQ6VXNlcjM0MzMzMTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/34333110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LSC527",
"html_url": "https://github.com/LSC527",
"followers_url": "https://api.github.com/users/LSC527/followers",
"following_url": "https://api.github.com/users/LSC527/following{/other_user}",
"gists_url": "https://api.github.com/users/LSC527/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LSC527/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LSC527/subscriptions",
"organizations_url": "https://api.github.com/users/LSC527/orgs",
"repos_url": "https://api.github.com/users/LSC527/repos",
"events_url": "https://api.github.com/users/LSC527/events{/privacy}",
"received_events_url": "https://api.github.com/users/LSC527/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### System Info
[source code of "tie or clone weights"](https://github.com/huggingface/transformers/blob/8a61fe023430115bb61ec328a29d35571f4fc2c4/src/transformers/modeling_utils.py#L1137)
[document](https://huggingface.co/docs/transformers/v4.20.1/en/serialization#torchscript-flag-and-tied-weights)
I did an experiment, and the results showed that `torch.jit.trace` can trace shared weights and that the resulting `TorchScript` module can be used for training. Correct me if I'm wrong, thanks!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
import torch.nn as nn

batch_size = 32
seq_len = 32
emb_size = 128
vocab_size = 32


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb1 = nn.Embedding(seq_len, emb_size)
        self.emb2 = nn.Embedding(seq_len, emb_size)

    def forward(self, x):
        y1 = self.emb1(x)
        y2 = self.emb2(x)
        return (y1, y2)


model = Model()
# Tie the two embedding tables so they share a single weight tensor.
model.emb2.weight = model.emb1.weight

model.eval()
with torch.no_grad():
    example_input = torch.randint(vocab_size, [batch_size, seq_len])
    example_output = model(example_input)
    s = example_output[0].size()
    weight_before_train = model.emb1.weight.clone()
    print(weight_before_train)

# Original model: train and check that the tied weights stay identical.
model.train()
loss_fn = nn.L1Loss()
optimizer = torch.optim.SGD(model.parameters(), 0.1)
for _ in range(100):
    optimizer.zero_grad()
    inputs = torch.randint(vocab_size, [batch_size, seq_len])
    targets = (torch.randn(s), torch.randn(s))
    outputs = model(inputs)
    assert torch.allclose(outputs[0], outputs[1])
    loss = loss_fn(targets[0], outputs[0]) + loss_fn(targets[1], outputs[1])
    loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    weight_after_train = model.emb1.weight.clone()
    print(weight_after_train)
    assert torch.equal(model.emb1.weight, model.emb2.weight)
    assert not torch.allclose(weight_before_train, weight_after_train)

# Traced model: the same checks pass after torch.jit.trace.
traced = torch.jit.trace(model, example_input)
traced.eval()
with torch.no_grad():
    weight_before_train = traced.emb1.weight.clone()
    print(weight_before_train)

traced.train()
loss_fn = nn.L1Loss()
optimizer = torch.optim.SGD(traced.parameters(), 0.1)
for _ in range(100):
    optimizer.zero_grad()
    inputs = torch.randint(vocab_size, [batch_size, seq_len])
    targets = (torch.randn(s), torch.randn(s))
    outputs = traced(inputs)
    assert torch.allclose(outputs[0], outputs[1])
    loss = loss_fn(targets[0], outputs[0]) + loss_fn(targets[1], outputs[1])
    loss.backward()
    optimizer.step()

traced.eval()
with torch.no_grad():
    weight_after_train = traced.emb1.weight.clone()
    print(weight_after_train)
    assert torch.equal(traced.emb1.weight, traced.emb2.weight)
    assert not torch.allclose(weight_before_train, weight_after_train)
```
### Expected behavior
shared weights can be traced
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18207/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18206
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18206/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18206/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18206/events
|
https://github.com/huggingface/transformers/issues/18206
| 1,310,356,622
|
I_kwDOCUB6oc5OGnSO
| 18,206
|
The saved trained albert-base-v2 model does not work properly
|
{
"login": "1gst",
"id": 58930482,
"node_id": "MDQ6VXNlcjU4OTMwNDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/58930482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1gst",
"html_url": "https://github.com/1gst",
"followers_url": "https://api.github.com/users/1gst/followers",
"following_url": "https://api.github.com/users/1gst/following{/other_user}",
"gists_url": "https://api.github.com/users/1gst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1gst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1gst/subscriptions",
"organizations_url": "https://api.github.com/users/1gst/orgs",
"repos_url": "https://api.github.com/users/1gst/repos",
"events_url": "https://api.github.com/users/1gst/events{/privacy}",
"received_events_url": "https://api.github.com/users/1gst/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### System Info
2022-07-19 15:17:57.094050: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-07-19 15:17:57.094178: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:From C:\Users\19715\anaconda3\lib\site-packages\transformers\commands\env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2022-07-19 15:18:02.167646: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-07-19 15:18:02.185125: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-07-19 15:18:02.187022: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found
2022-07-19 15:18:02.188373: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found
2022-07-19 15:18:02.189480: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cusolver64_11.dll'; dlerror: cusolver64_11.dll not found
2022-07-19 15:18:02.190403: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cusparse64_11.dll'; dlerror: cusparse64_11.dll not found
2022-07-19 15:18:02.191308: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2022-07-19 15:18:02.191460: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.20.1
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.9.7
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): 2.9.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@vanpelt @arfon @pvl @xeb @LysandreJik
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The complete code for the project can be found at the following link:
https://github.com/1gst/CCAC2022/tree/master
Introduction:
1. The `models` folder contains the base model, and the `best_model` folder contains the model saved after training; `net.py` under `models` defines the custom network model.
2. In the main function, `tasker.train(best_model=False, use_fgm=False)`: `best_model` controls whether to load the saved best model (`True` loads the best model, otherwise the base model is loaded).
3. `tasker.print_model()` prints the model parameters; inside this function you can change the file name and the path the model is loaded from. `self.best_config_path` and `self.best_model_path` are the paths of the best model, while `self.config_path` and `self.model_path` are the paths of the base model.
4. To change the model, change the `model` parameter of `__init__()` (`self.init_path`) in the `Tasker` class (`model="albert-base-v2"`).
Problem:
While using the albert-base-v2 model, I load the base model for training and, after training, use `trainer.save_model()` to save the best model from the run (into `best_model`). The saved model predicts normally. However, when I load the saved model and train it again, the model does not learn: the F1 value stays at 0, and prediction becomes invalid after interrupting training. In addition, the saved model prints different parameters every time. With the same code, none of this happens with a RoBERTa model, so I think it may be a problem with model saving or loading, but despite my changes I could not fix it. Please help.
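A minimal sketch of the cycle described above (the model class and path are assumptions based on my description):
```python
from transformers import AlbertForSequenceClassification

# First run (sketch): fine-tune from the base checkpoint, then
#   trainer.save_model("best_model")
# Second run: reload the saved checkpoint and train again -- the step where,
# as described above, the F1 value reportedly stays at 0.
model = AlbertForSequenceClassification.from_pretrained("best_model")  # assumes the directory from the first run
```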
### Expected behavior
The expectation is: train with the albert-base-v2 model, save the best model at the end of training, then load that model and train again without the training becoming ineffective, while still being able to predict normally. The parameters printed each time should be the same, and loading the model should not warn that the encoder and other weights inside albert-base-v2 are uninitialized.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18206/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18205
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18205/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18205/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18205/events
|
https://github.com/huggingface/transformers/pull/18205
| 1,310,146,262
|
PR_kwDOCUB6oc47ryhw
| 18,205
|
Split docs on modality
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Not really convinced by this as it's now very unclear that all the task tutorials are task-specific tutorials. I liked it better when they were grouped altogether under task. It also backfires from the intent as this shows we don't really support vision (one entry) or speech (two entries) compared to NLP (9 entries).\r\n\r\nThis revamp also puts advanced guides on the top when they should be way lower in the table of contents (such as Benchmarks or Migrating from other packages).",
"Thanks for the feedback! 🤗\r\n\r\n> Not really convinced by this as it's now very unclear that all the task tutorials are task-specific tutorials.\r\n\r\nHmm, do you mean it’s more unclear now because each task tutorial is separated by modality? This seems clearer to me since the section headers are more scannable.\r\n\r\n> It also backfires from the intent as this shows we don't really support vision (one entry) or speech (two entries) compared to NLP (9 entries).\r\n\r\nGood perspective, and I totally see what you mean! I think another way to look at it is by creating these new sections, we’re signaling that we plan to create more content for audio/CV. This gives these sections more prominence, which shows we want to focus on audio/CV. So even though it looks pretty bare right now, I think that’s ok since these sections will grow.\r\n\r\n> This revamp also puts advanced guides on the top when they should be way lower in the table of contents (such as Benchmarks or Migrating from other packages).\r\n\r\nI don’t think we should put guides lower because they are more advanced. Instead, it may be better to prioritize guides that users are more likely to find useful. For example, I think the Migration/Train with a script guides are pretty useful. This may be a symptom of how I grouped all these guides under General Usage, in which case, we can try breaking up the section and reordering these guides by their utility.",
"> Hmm, do you mean it’s more unclear now because each task tutorial is separated by modality?\r\n\r\nThey were all under a \"Task\" section, which is not the case anymore in your proposal. In NLP, you go from fast tokenizers and multilingual to a task-specific tutorial with no warning to the user.\r\n\r\n> I don’t think we should put guides lower because they are more advanced. Instead, it may be better to prioritize guides that users are more likely to find useful.\r\n\r\nBenchmarks or migrating are both advanced and not useful (benchmarks have 0 issues and we are even questioning whether they should stay in the library and pytorch-pretrained-bert ceased to exist a **while** ago). You should check the analytics to be certain, but I'm pretty sure they are very far from the most-visited pages and they are definitely very low on the list of pages we want to nudge the users on.",
"Ok I see now! You're worried users won't know the task-specific guides are guides about fine-tuning a model for a task if it is just thrown into the NLP section. I think there are some things we can do to help make this clearer to users (in order of preference):\r\n\r\n1. Include an overview page for each modality section explaining what users can expect to find.\r\n2. Update the task-specific guides to have clearer titles like, How to fine-tune for text classification.\r\n3. Create another nested section in each modality that focuses on the task-specific guides.\r\n\r\n> Benchmarks or migrating are both advanced and not useful (benchmarks have 0 issues and we are even questioning whether they should stay in the library and pytorch-pretrained-bert ceased to exist a while ago).\r\n\r\nFor sure! Benchmark, Migration, and Troubleshoot are bottom-3 in page views in the General Usage section. I can bump these out and move them closer to the bottom. ",
"Option 2 or 3 are good compromise (my preference goes to 3 if nested-ness is not an issue). I'd leave Troubleshoot in the General Usage section (hopefully we can make it better so it gets more views), but yeah, the other two are out of place there IMO.\r\n\r\nLet's see what other people think as well, @LysandreJik @patrickvonplaten to name a few :-)",
"I also think that option 3) sounds like the best approach. I don't have a problem with adding a nesting level.",
"I nested the NLP section but it looks a little off since the content inside isn't aligned on the same level (I pinged @mishig25 on this). I didn't add a nested level for the audio and image sections since there's no content in those sections yet, and it might look a little strange. ",
"Hi team, just wanted to circle back on this and see if there are any more comments or feedback about how the docs are split. Otherwise, I think we're ready to merge! 🙂",
"Option 3.) Also looks like the right one to me :-) \r\n\r\nHowever I'm not a big fan of \"Image\" as a title. Could we maybe try to align those sections a bit with how we call the modalities on the Hub: https://huggingface.co/tasks -> so maybe replace \"Image\" with \"Computer Vision\"?\r\n\r\nWdyt @sgugger @LysandreJik @osanseviero "
] | 1,658
| 1,662
| 1,662
|
MEMBER
| null |
Currently, audio and computer vision lack content, or the existing content is mixed with NLP. This PR splits the `toctree` on modality to make it easier to discover content for audio/computer vision while also allowing us to scale to any additional modalities we want to support. As we create additional content, these new sections make it easier to collect specific content in one place. For example, the upcoming `generate` docs can be placed in the NLP section.
This structure can also help us identify gaps in the docs between each modality to ensure documentation is complete. For example, NLP has a page about tokenizers, and we can create a similar page for the other modalities using feature extractors and processors.
Other sections include:
- General usage for modality-neutral content.
- Performance and scalability for content related to large models.
- Contribute for how to test, open a PR, and add models/pipelines.
After we split the docs, the next step would be to start planning and creating additional content to make the audio/computer vision sections more complete.
Looking forward to hearing what you think :)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18205/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18205",
"html_url": "https://github.com/huggingface/transformers/pull/18205",
"diff_url": "https://github.com/huggingface/transformers/pull/18205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18205.patch",
"merged_at": 1662063551000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18204
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18204/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18204/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18204/events
|
https://github.com/huggingface/transformers/pull/18204
| 1,309,914,545
|
PR_kwDOCUB6oc47q_gh
| 18,204
|
Global RiGL w/ mup
|
{
"login": "vinaysrao",
"id": 1137970,
"node_id": "MDQ6VXNlcjExMzc5NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1137970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinaysrao",
"html_url": "https://github.com/vinaysrao",
"followers_url": "https://api.github.com/users/vinaysrao/followers",
"following_url": "https://api.github.com/users/vinaysrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vinaysrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinaysrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinaysrao/subscriptions",
"organizations_url": "https://api.github.com/users/vinaysrao/orgs",
"repos_url": "https://api.github.com/users/vinaysrao/repos",
"events_url": "https://api.github.com/users/vinaysrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinaysrao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,658
| 1,658
| 1,658
|
NONE
| null |
Adding mup transformer configurations to existing GPT2 models
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18204/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18204",
"html_url": "https://github.com/huggingface/transformers/pull/18204",
"diff_url": "https://github.com/huggingface/transformers/pull/18204.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18204.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18203
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18203/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18203/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18203/events
|
https://github.com/huggingface/transformers/pull/18203
| 1,309,734,027
|
PR_kwDOCUB6oc47qYk2
| 18,203
|
Update cache for CircleCI tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,662
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
After PR #18197, we need to create a new cache; otherwise we get some errors, as shown in [this run](https://app.circleci.com/pipelines/github/huggingface/transformers/44104/workflows/1b63ec34-ef95-4678-adc2-773de35342ab/jobs/511895/steps), coming from some checks in `datasets` regarding module imports.
Running all tests with the newly created cache, and running the torch example tests with the new cache loaded, all pass.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18203/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18203",
"html_url": "https://github.com/huggingface/transformers/pull/18203",
"diff_url": "https://github.com/huggingface/transformers/pull/18203.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18203.patch",
"merged_at": 1658297651000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18202
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18202/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18202/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18202/events
|
https://github.com/huggingface/transformers/pull/18202
| 1,309,679,552
|
PR_kwDOCUB6oc47qM7R
| 18,202
|
Reduce console spam when using the KerasMetricCallback
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
MEMBER
| null |
Right now, `KerasMetricCallback` calls `model.predict()` while iterating over the input dataset. This results in some unwanted console spam when using metrics that do not call `generate()` (because `predict()` always creates a progress bar). Replacing it with `predict_on_batch` removes the spam and also improves performance of the callback.
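A rough sketch of the change (the model and dataset below are toy stand-ins, not the callback's real internals):
```python
import numpy as np
import tensorflow as tf

# Toy stand-ins for the real model and batched dataset (assumptions for illustration).
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
dataset = tf.data.Dataset.from_tensor_slices(np.random.rand(8, 4).astype("float32")).batch(4)

all_outputs = []
for batch in dataset:
    # predict_on_batch() runs a single batch silently, while predict()
    # prints a progress bar on every call -- the source of the console spam.
    all_outputs.append(model.predict_on_batch(batch))
predictions = np.concatenate(all_outputs)
```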
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18202/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18202/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18202",
"html_url": "https://github.com/huggingface/transformers/pull/18202",
"diff_url": "https://github.com/huggingface/transformers/pull/18202.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18202.patch",
"merged_at": 1658246435000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18201
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18201/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18201/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18201/events
|
https://github.com/huggingface/transformers/pull/18201
| 1,309,564,914
|
PR_kwDOCUB6oc47p00P
| 18,201
|
TF: Add missing cast to GPT-J
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah, that explains why following tests fail! Changing it 👍 \r\n"
] | 1,658
| 1,658
| 1,658
|
MEMBER
| null |
# What does this PR do?
Adds a missing cast, which was breaking the tests for mixed precision (and which, for some weird reason, was also causing subsequent tests to fail 🤔).
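A hedged illustration of this kind of cast (names and shapes are made up; the real fix lives in the GPT-J modeling code):
```python
import tensorflow as tf

# Toy tensors standing in for the real activations (shapes are illustrative).
hidden_states = tf.random.normal((1, 4, 8), dtype=tf.float16)  # compute dtype under mixed precision
attn_weights = tf.random.normal((1, 4, 4), dtype=tf.float32)   # produced in float32

# Without this cast, mixing float32 and float16 operands raises a dtype error:
attn_weights = tf.cast(attn_weights, hidden_states.dtype)
output = tf.matmul(attn_weights, hidden_states)
```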
All slow tests pass after this change.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18201/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18201",
"html_url": "https://github.com/huggingface/transformers/pull/18201",
"diff_url": "https://github.com/huggingface/transformers/pull/18201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18201.patch",
"merged_at": 1658242722000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18200
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18200/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18200/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18200/events
|
https://github.com/huggingface/transformers/issues/18200
| 1,309,513,864
|
I_kwDOCUB6oc5ODZiI
| 18,200
|
[TRACKER] Add BLOOM Meg-DS optimizer states
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Currently uploading here: https://huggingface.co/bigscience/bloom-optimizer-states",
"Closing since the models have been added on the forementioned repo"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
### Feature request
Add the BLOOM Meg-DS optimizer states to the Hub. Feature request from: https://twitter.com/Asuna_FPS_/status/1549137254588633093?s=20&t=FhO7Tlv01Gn6r_inZGyBug
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18200/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18199
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18199/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18199/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18199/events
|
https://github.com/huggingface/transformers/issues/18199
| 1,309,439,268
|
I_kwDOCUB6oc5ODHUk
| 18,199
|
Exported DeBERTa ONNX model is incorrect
|
{
"login": "JingyaHuang",
"id": 44135271,
"node_id": "MDQ6VXNlcjQ0MTM1Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/44135271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JingyaHuang",
"html_url": "https://github.com/JingyaHuang",
"followers_url": "https://api.github.com/users/JingyaHuang/followers",
"following_url": "https://api.github.com/users/JingyaHuang/following{/other_user}",
"gists_url": "https://api.github.com/users/JingyaHuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JingyaHuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JingyaHuang/subscriptions",
"organizations_url": "https://api.github.com/users/JingyaHuang/orgs",
"repos_url": "https://api.github.com/users/JingyaHuang/repos",
"events_url": "https://api.github.com/users/JingyaHuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/JingyaHuang/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I might be suffering from similar issues. See also #18237. A PR would be appreciated @JingyaHuang",
"Thanks for reporting @JingyaHuang! Could you take a look at @iiLaurens' PR to see if it fixes your issue?",
"Thanks for the PR @iiLaurens, will look at the PR @LysandreJik 👌 . "
] | 1,658
| 1,660
| 1,660
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.21.0.dev0
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
### Who can help?
@LysandreJik
### Reproduction
__Reproduction__
```python
from pathlib import Path
from transformers.onnx import export
from transformers import AutoTokenizer, AutoModel, AutoConfig
from transformers.models.deberta_v2 import DebertaV2OnnxConfig
# load model and tokenizer
onnx_path = Path("results/deberta-v2-model.onnx")
model_ckpt = "microsoft/deberta-v2-xxlarge"
base_model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
onnx_config = DebertaV2OnnxConfig(base_model.config)
# export to onnx
onnx_inputs, onnx_outputs = export(tokenizer, base_model, onnx_config, onnx_config.default_onnx_opset, onnx_path)
```
__Trace Warnings__
```
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:564: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
q_ids = np.arange(0, query_size)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:564: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
q_ids = np.arange(0, query_size)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:565: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
k_ids = np.arange(0, key_size)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:565: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
k_ids = np.arange(0, key_size)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:569: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
rel_pos_ids = torch.tensor(rel_pos_ids, dtype=torch.long)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:698: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
scale = math.sqrt(query_layer.size(-1) * scale_factor)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:752: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
).repeat(query_layer.size(0) // self.num_attention_heads, 1, 1)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:754: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
query_layer.size(0) // self.num_attention_heads, 1, 1
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:773: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
scale = math.sqrt(pos_key_layer.size(-1) * scale_factor)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:785: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
scale = math.sqrt(pos_query_layer.size(-1) * scale_factor)
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:786: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if key_layer.size(-2) != query_layer.size(-2):
/usr/local/lib/python3.8/dist-packages/transformers/models/deberta_v2/modeling_deberta_v2.py:113: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
output = input.masked_fill(rmask, torch.tensor(torch.finfo(input.dtype).min))
```
Some operations that are not torch-native (numpy, math) cause the tracing to fail.
e.g. the following graph corresponds to
https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/deberta/modeling_deberta.py#L627
https://github.com/huggingface/transformers/blob/d0acc9537829e7d067edbb791473bbceb2ecf056/src/transformers/models/deberta/modeling_deberta.py#L633-L634
<img width="437" alt="image" src="https://user-images.githubusercontent.com/44135271/179747651-815a9dd1-8ad6-44e7-9b44-f4d35380fca0.png">
As shown in the graph, the `sqrt` node has been ignored and the value of `scale` is treated as a constant.
### Expected behavior
Correctly export ONNX model without triggering `TraceWarning`
To fix this, the numpy and math ops need to be replaced with natively supported torch ops. I can open a PR for the replacement.
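As a sketch of the kind of replacement I have in mind (tensor shapes are illustrative):
```python
import math
import torch

query_layer = torch.randn(2, 8, 64)  # stand-in for the real attention tensor
scale_factor = 3

# Before: math.sqrt runs eagerly, so the trace records a Python constant.
scale_const = math.sqrt(query_layer.size(-1) * scale_factor)

# After: torch.sqrt stays in the traced graph as a proper op.
scale = torch.sqrt(torch.tensor(query_layer.size(-1) * scale_factor, dtype=torch.float))
```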
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18199/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18198
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18198/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18198/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18198/events
|
https://github.com/huggingface/transformers/pull/18198
| 1,309,380,805
|
PR_kwDOCUB6oc47pNRo
| 18,198
|
Improve `generate` docstring
|
{
"login": "JoaoLages",
"id": 17574157,
"node_id": "MDQ6VXNlcjE3NTc0MTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/17574157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoaoLages",
"html_url": "https://github.com/JoaoLages",
"followers_url": "https://api.github.com/users/JoaoLages/followers",
"following_url": "https://api.github.com/users/JoaoLages/following{/other_user}",
"gists_url": "https://api.github.com/users/JoaoLages/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoaoLages/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoaoLages/subscriptions",
"organizations_url": "https://api.github.com/users/JoaoLages/orgs",
"repos_url": "https://api.github.com/users/JoaoLages/repos",
"events_url": "https://api.github.com/users/JoaoLages/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoaoLages/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I think it's best to leave the default as they were (since they are ultimately the defaults for the model config) and put a big warning at the top of the arg section of the docstring stating that all of them will be overridden by the model config. What do you think @patrickvonplaten ?\r\n\r\nThings like `model.config.num_beams` change frequently from model to model. Looking at the 'defaults to 1' was very misleading for me.",
"Thanks for the feedback here @JoaoLages! I understand the reason behind your PR and am inclined to merge it as is - would like to get some input from @gante here as well though before merging",
"This is a tough one. The change (as it is) is possibly good for generate-savvy users but will make it more confusing for most use cases -- all those config values have their own defaults which are in fact almost always used. We would lose that very useful part of the documentation to make this caveat more visible.\r\n\r\nIn general, we can all agree that defaulting to the config specification is confusing (and a giant source of issues) -- @JoaoLages we are working on a plan to remove them, which is actually the root problem here. This means that documentation changes as a result of this PR will be temporary :) \r\n\r\nPersonally, because of the two paragraphs above, I am more inclined toward @sgugger's suggestion -- the most common situation stays clearly documented, and a temporary warning gets added. @JoaoLages WDYT? ",
"> In general, we can all agree that defaulting to the config specification is confusing (and a giant source of issues) \r\n\r\nTotally agree with this statement!\r\n\r\n> Personally, because of the two paragraphs above, I am more inclined toward @sgugger's suggestion -- the most common situation stays clearly documented, and a temporary warning gets added. @JoaoLages WDYT?\r\n\r\nThe warning would help 👍 ",
"Awesome, I think we can move forward with it then :) \r\n\r\nOne detail -- this warning should go in FLAX's and TF's docstring as well. If it is not asking too much @JoaoLages, can you copy it to the other frameworks as well? 🙏 ",
"> Awesome, I think we can move forward with it then :)\r\n> \r\n> One detail -- this warning should go in FLAX's and TF's docstring as well. If it is not asking too much @JoaoLages, can you copy it to the other frameworks as well? 🙏\r\n\r\nActually,[ the warning is already in the docstring](https://github.com/huggingface/transformers/blob/a68454bdfcc14e40e67502722a4d802a2ae26999/src/transformers/generation_utils.py#L910), right? I guess it is not that visible 😅 ",
"> Thanks for iterating with us!\r\n\r\nYou were too fast 😂 \r\nI also added the changes for TF and FLAX. Opened another PR https://github.com/huggingface/transformers/pull/18432"
] | 1,658
| 1,659
| 1,659
|
CONTRIBUTOR
| null |
# What does this PR do?
The `generate` docstring is misleading: many arguments actually default to values read from `model.config`, and that is not clearly stated in the method description.
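A small illustration of the behavior (the model choice is arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")

model.config.num_beams = 4  # silently changes every generate() call below
# Runs beam search, even though the docstring says num_beams "defaults to 1".
outputs = model.generate(**inputs, max_length=20)
```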
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger @patrickvonplaten I believe this one is for one of you two?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18198/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18198/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18198",
"html_url": "https://github.com/huggingface/transformers/pull/18198",
"diff_url": "https://github.com/huggingface/transformers/pull/18198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18198.patch",
"merged_at": 1659460975000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18197
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18197/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18197/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18197/events
|
https://github.com/huggingface/transformers/pull/18197
| 1,309,253,952
|
PR_kwDOCUB6oc47ox2-
| 18,197
|
Use next-gen CircleCI convenience images
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can you just explain why this change is needed? What's better about them?",
"> Can you just explain why this change is needed? What's better about them?\r\n\r\nSorry, I forgot to mention them in the description. I updated it. My main motivation is to avoid the deprecated (on December 31, 2021) images).\r\n"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
Use next-gen CircleCI convenience images.
From [CircleCI page](https://circleci.com/docs/circleci-images?utm_source=google&utm_medium=sem&utm_campaign=sem-google-dg--emea-en-dsa-maxConv-auth-brand&utm_term=g_-_c__dsa_&utm_content=&gclid=CjwKCAjwrNmWBhA4EiwAHbjEQJ4yXbmT654kFoIgTkjKea44E56-j7BGvVrqOkVAwCq97F_Je6EsohoC0OkQAvD_BwE):
*Legacy images with the prefix “circleci/” were deprecated on December 31, 2021. For faster builds, upgrade your projects with next-generation convenience images.*
It mentions [the following](https://circleci.com/docs/circleci-images?utm_source=google&utm_medium=sem&utm_campaign=sem-google-dg--emea-en-dsa-maxConv-auth-brand&utm_term=g_-_c__dsa_&utm_content=&gclid=CjwKCAjwrNmWBhA4EiwAHbjEQJ4yXbmT654kFoIgTkjKea44E56-j7BGvVrqOkVAwCq97F_Je6EsohoC0OkQAvD_BwE#next-generation-convenience-images):
- Faster spin-up time (but I didn't measure the spin-up time)
- Improved reliability and stability
There are some small things I noticed: for example, when running the new images on a GCP VM, I can press the up arrow to get back to previous commands.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18197/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18197",
"html_url": "https://github.com/huggingface/transformers/pull/18197",
"diff_url": "https://github.com/huggingface/transformers/pull/18197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18197.patch",
"merged_at": 1658238186000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18196
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18196/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18196/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18196/events
|
https://github.com/huggingface/transformers/pull/18196
| 1,309,132,725
|
PR_kwDOCUB6oc47oX8Q
| 18,196
|
Update docs README with instructions on locally previewing docs
|
{
"login": "snehankekre",
"id": 20672874,
"node_id": "MDQ6VXNlcjIwNjcyODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/20672874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snehankekre",
"html_url": "https://github.com/snehankekre",
"followers_url": "https://api.github.com/users/snehankekre/followers",
"following_url": "https://api.github.com/users/snehankekre/following{/other_user}",
"gists_url": "https://api.github.com/users/snehankekre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snehankekre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snehankekre/subscriptions",
"organizations_url": "https://api.github.com/users/snehankekre/orgs",
"repos_url": "https://api.github.com/users/snehankekre/repos",
"events_url": "https://api.github.com/users/snehankekre/events{/privacy}",
"received_events_url": "https://api.github.com/users/snehankekre/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks again for your contribution!"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
This small PR updates the README in `/docs/` with instructions on how to use `doc-builder` to locally preview the documentation before submitting a PR. The current docs say previewing is not possible. However, the `doc-builder` [repo](https://github.com/huggingface/doc-builder#previewing) contains previewing instructions.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger @stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18196/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18196",
"html_url": "https://github.com/huggingface/transformers/pull/18196",
"diff_url": "https://github.com/huggingface/transformers/pull/18196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18196.patch",
"merged_at": 1658224047000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18195
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18195/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18195/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18195/events
|
https://github.com/huggingface/transformers/pull/18195
| 1,309,128,219
|
PR_kwDOCUB6oc47oW_M
| 18,195
|
Typo in readme
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18195/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18195",
"html_url": "https://github.com/huggingface/transformers/pull/18195",
"diff_url": "https://github.com/huggingface/transformers/pull/18195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18195.patch",
"merged_at": 1658237317000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18194
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18194/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18194/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18194/events
|
https://github.com/huggingface/transformers/pull/18194
| 1,309,116,600
|
PR_kwDOCUB6oc47oUf8
| 18,194
|
Add vision example to README
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
The main README previously showed only NLP examples; this PR replaces the question answering example with an object detection one. You can see the new README [here](https://github.com/huggingface/transformers/tree/readme_vision).
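For illustration only (this snippet is not taken from the PR; the checkpoint and image URL are placeholders), here is the shape of a minimal object detection example of the kind the README now includes:
```
import requests
from PIL import Image
from transformers import pipeline

# DETR checkpoints need the timm package; any object-detection checkpoint works here.
detector = pipeline("object-detection", model="facebook/detr-resnet-50")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Each prediction is a dict with "score", "label" and a bounding "box".
for prediction in detector(image):
    print(prediction["label"], round(prediction["score"], 3), prediction["box"])
```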
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18194/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18194",
"html_url": "https://github.com/huggingface/transformers/pull/18194",
"diff_url": "https://github.com/huggingface/transformers/pull/18194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18194.patch",
"merged_at": 1658216778000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18193
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18193/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18193/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18193/events
|
https://github.com/huggingface/transformers/issues/18193
| 1,309,020,516
|
I_kwDOCUB6oc5OBhFk
| 18,193
|
When I use TFGPT2LMHeadModel, how can I build labels and input_ids?
|
{
"login": "Orient12",
"id": 39329359,
"node_id": "MDQ6VXNlcjM5MzI5MzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/39329359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Orient12",
"html_url": "https://github.com/Orient12",
"followers_url": "https://api.github.com/users/Orient12/followers",
"following_url": "https://api.github.com/users/Orient12/following{/other_user}",
"gists_url": "https://api.github.com/users/Orient12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Orient12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Orient12/subscriptions",
"organizations_url": "https://api.github.com/users/Orient12/orgs",
"repos_url": "https://api.github.com/users/Orient12/repos",
"events_url": "https://api.github.com/users/Orient12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Orient12/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hey @Orient12! The `TFGPT2LMHeaDModel` works with CLM objectives. To that end, I think the best way for you to understand how it works would be to try it using the following script: https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling\r\n\r\nThis fine-tunes models with the CLM objective.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### System Info
When I use TFGPT2LMHeadModel, I don't know how to build input_ids and labels.
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
def encode_example(ds, limit=-1):
    print(len(ds))
    input_ids_list = []
    attention_mask_list = []
    label_list = []
    for row in ds:
        input_ids_list.append(row["input_ids"])
        attention_mask_list.append(row["attention_mask"])
        label_list.append(row["labels"])
    return tf.data.Dataset.from_tensor_slices(
        (input_ids_list, attention_mask_list, label_list)).map(map_example_to_dict)
```
or like this:
```
def encode_example(ds, limit=-1):
    print(len(ds))
    input_ids_list = []
    attention_mask_list = []
    label_list = []
    for row in ds:
        input_ids_list.append(row["input_ids"][:-1])
        attention_mask_list.append(row["attention_mask"][:-1])
        label_list.append([-100 if k == 1 else k for k in row["labels"][1:]])
    return tf.data.Dataset.from_tensor_slices(
        (input_ids_list, attention_mask_list, label_list)).map(map_example_to_dict)
```
### Expected behavior
What is the expected format for input_ids and labels?
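For reference, a minimal sketch of one common way to build them, assuming a recent `transformers` version where TF models accept `labels` directly: for causal LM, the labels are simply the input ids (the model shifts them internally), with padded positions set to -100 so the loss ignores them.
```
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

enc = tokenizer(["hello world", "a longer example sentence"], padding=True, return_tensors="tf")

# labels = input_ids, with padded positions masked out as -100; the model
# shifts the labels internally, so no manual one-token offset is needed.
labels = tf.where(enc["attention_mask"] == 1, enc["input_ids"], -100)

outputs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"], labels=labels)
print(float(outputs.loss))
```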
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18193/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18192
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18192/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18192/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18192/events
|
https://github.com/huggingface/transformers/pull/18192
| 1,309,019,613
|
PR_kwDOCUB6oc47n_1H
| 18,192
|
Remove use_auth_token from the from_config method
|
{
"login": "duongna21",
"id": 38061659,
"node_id": "MDQ6VXNlcjM4MDYxNjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/38061659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duongna21",
"html_url": "https://github.com/duongna21",
"followers_url": "https://api.github.com/users/duongna21/followers",
"following_url": "https://api.github.com/users/duongna21/following{/other_user}",
"gists_url": "https://api.github.com/users/duongna21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duongna21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duongna21/subscriptions",
"organizations_url": "https://api.github.com/users/duongna21/orgs",
"repos_url": "https://api.github.com/users/duongna21/repos",
"events_url": "https://api.github.com/users/duongna21/events{/privacy}",
"received_events_url": "https://api.github.com/users/duongna21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes `TypeError: __init__() got an unexpected keyword argument 'use_auth_token'` in `run_mlm_flax.py`, `run_clm_flax.py`, `run_t5_mlm_flax.py`, `run_summarization_flax.py`, and `run_image_classification.py` by removing the `use_auth_token` argument from the `from_config` calls (the method does not accept it).

## Who can review?
cc potential reviewers: @patrickvonplaten, @sgugger, @patil-suraj
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18192/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18192",
"html_url": "https://github.com/huggingface/transformers/pull/18192",
"diff_url": "https://github.com/huggingface/transformers/pull/18192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18192.patch",
"merged_at": 1658211201000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18191
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18191/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18191/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18191/events
|
https://github.com/huggingface/transformers/issues/18191
| 1,308,842,203
|
I_kwDOCUB6oc5OA1jb
| 18,191
|
add Decision Transformer ONNX config to Transformers
|
{
"login": "skanjila",
"id": 674374,
"node_id": "MDQ6VXNlcjY3NDM3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/674374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skanjila",
"html_url": "https://github.com/skanjila",
"followers_url": "https://api.github.com/users/skanjila/followers",
"following_url": "https://api.github.com/users/skanjila/following{/other_user}",
"gists_url": "https://api.github.com/users/skanjila/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skanjila/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skanjila/subscriptions",
"organizations_url": "https://api.github.com/users/skanjila/orgs",
"repos_url": "https://api.github.com/users/skanjila/repos",
"events_url": "https://api.github.com/users/skanjila/events{/privacy}",
"received_events_url": "https://api.github.com/users/skanjila/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@ChainYo @regisss Issue is here, will add PR once its in a workable state",
"@ChainYo @regisss I am finally starting work on this, sorry about the delay, so I was reading through the code in this PR and using that as an example: https://github.com/huggingface/transformers/pull/14059/files, one question here, I was trying to understand how we determine what goes in the json structure below, I understand about the last config term but its the terms before it that I was trying to dig into, any insight you guys can provide into this would be most helpful:\r\n\r\n\"camembert\": supported_features_mapping(\r\n \"default\",\r\n \"causal-lm\",\r\n \"sequence-classification\",\r\n \"token-classification\",\r\n \"question-answering\",\r\n onnx_config_cls=CamembertOnnxConfig,\r\n ),",
"> @ChainYo @regisss I am finally starting work on this, sorry about the delay, so I was reading through the code in this PR and using that as an example: https://github.com/huggingface/transformers/pull/14059/files, one question here, I was trying to understand how we determine what goes in the json structure below, I understand about the last config term but its the terms before it that I was trying to dig into, any insight you guys can provide into this would be most helpful:\r\n\r\nHi @skanjila, if you check the associated docs for `Decision Transformer`, you can see that there is no other feature than the default: https://huggingface.co/docs/transformers/model_doc/decision_transformer\r\n\r\n\r\n\r\nI think that for this model, `default` is the only convenient feature.\r\n\r\n",
"@ChainYo I think what you're saying is that the only parameters that are needed are the following as mentioned in the documentation in the configuration section, is that correct? \r\n\r\n( state_dim = 17act_dim = 4hidden_size = 128max_ep_len = 4096action_tanh = Truevocab_size = 1n_positions = 1024n_embd = 768n_layer = 3n_head = 1n_inner = Noneactivation_function = 'relu'resid_pdrop = 0.1embd_pdrop = 0.1attn_pdrop = 0.1layer_norm_epsilon = 1e-05initializer_range = 0.02summary_type = 'cls_index'summary_use_proj = Truesummary_activation = Nonesummary_proj_to_labels = Truesummary_first_dropout = 0.1scale_attn_weights = Trueuse_cache = Truebos_token_id = 50256eos_token_id = 50256scale_attn_by_inverse_layer_idx = Falsereorder_and_upcast_attn = False**kwargs )\r\n\r\nLet me know if I am missing anything here",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,667
| 1,667
|
NONE
| null |
### Feature request
Add Decision Transformer OnnxConfig to make this model available for conversion.
### Motivation
This is part of the effort to add ONNX configs for models that are not yet supported: https://huggingface.co/docs/transformers/v4.20.1/en/serialization#exporting-a-model-for-an-unsupported-architecture
### Your contribution
I will be submitting a new PR adding the ONNX config for the DecisionTransformer model.
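For orientation, a hypothetical sketch of what such a config could look like. This class does not exist in `transformers`; the input names follow `DecisionTransformerModel.forward`, and the dynamic axes are an assumption that would need validation:
```
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class DecisionTransformerOnnxConfig(OnnxConfig):  # hypothetical sketch
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Dynamic axes: batch size and trajectory (sequence) length.
        return OrderedDict(
            [
                ("states", {0: "batch", 1: "sequence"}),
                ("actions", {0: "batch", 1: "sequence"}),
                ("returns_to_go", {0: "batch", 1: "sequence"}),
                ("timesteps", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```
Per the discussion in the comments, the model would be registered with only the `default` feature in the ONNX features mapping.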
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18191/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18191/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18190
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18190/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18190/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18190/events
|
https://github.com/huggingface/transformers/issues/18190
| 1,308,379,298
|
I_kwDOCUB6oc5N_Eii
| 18,190
|
Longformer EncoderDecoder (LED)-Large model finetuning for summarization results in </s><s><s><s><s><s><s><s><s><s><s>... output
|
{
"login": "ratishsp",
"id": 3006607,
"node_id": "MDQ6VXNlcjMwMDY2MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3006607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratishsp",
"html_url": "https://github.com/ratishsp",
"followers_url": "https://api.github.com/users/ratishsp/followers",
"following_url": "https://api.github.com/users/ratishsp/following{/other_user}",
"gists_url": "https://api.github.com/users/ratishsp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratishsp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratishsp/subscriptions",
"organizations_url": "https://api.github.com/users/ratishsp/orgs",
"repos_url": "https://api.github.com/users/ratishsp/repos",
"events_url": "https://api.github.com/users/ratishsp/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratishsp/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Hi @ratishsp . Thanks for reporting, I will take a look. Do you have (some) results from the previous checkpoints? Do they have better rouge scores and a bit meaningful outputs than checkpoint 1800?",
"Hi @ydshieh thanks for looking into the issue. In a previous checkpoint 1500, the model produced a good output for the above news article: `</s><s>The Eiffel Tower is the tallest building in the world, with a height of 300 metres (1,063 ft).</s>`",
"What is surprising is that the eval rouge fluctuates a lot till checkpoint 1500, after which it remains close to 0. I have attached below a tensorboard image of eval_rouge1\r\n\r\n",
"Even more suprising, LED-Base model seems to be doing quite well!\r\n\r\n\r\nModel output (checkpoint 1600):\r\n`</s><s>The Eiffel Tower in Paris is the tallest structure in the world.</s>`",
"Actually I checked the output of base models... Was really quite good. Better if increase max_length\nLike 64/ ...128 \n",
"I had the same issue. `allenai/led-base-16384` works well but `allenai/led-large-16384` and `allenai/PRIMERA` simply generates `\"\"` after about a few hundreds steps of training.",
"I assume that it is an error in the `generate` method, since the training loss curves for the `base` and `large` models look really similar and both of them are reasonable. ",
"Hi @ydshieh, checking if you were able to look into the issue.",
"Hi, @ratishsp I will look this issue this week :-) hope I can have some insight!",
"Hi, @ratishsp I haven't running the script myself, but I see something already.\r\n\r\nYou mentioned you use `examples/pytorch/summarization/run_summarization.py`. That file is a general training script.\r\nHowever, `LEDModel/LEDForConditionalGeneration` is somehow special: it uses `global_attention_mask`.\r\n\r\nAs you are running summarization, it is `LEDForConditionalGeneration`. For this model, we should put `1` for the `global_attention_mask` on the first token `<s>` in the encoder input sequence.\r\n\r\n- [doc](https://huggingface.co/docs/transformers/model_doc/led): search `For summarization, it is advised to put`.\r\n- [model card](https://huggingface.co/allenai/led-large-16384-arxiv)\r\n\r\n\r\nIn fact, in your inference code snippet, you also have it:\r\n```python\r\n global_attention_mask = torch.zeros_like(inputs)\r\n global_attention_mask[:, 0] = 1\r\n```\r\n\r\nSo (one of) the problem(s) must come from the fact that you don't include `global_attention_mask` in your **training** script. It should be fairly to add it. But you can also check [this notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) by my colleague @patrickvonplaten (I believe he is the author of this notebook).\r\n\r\nLet me know if you get desired results once you train with global attention!\r\n\r\n(I am surprised the base model works fine however)",
"Hi @ydshieh I had missed to mention this in the original issue description. I had experimented with setting the global attention mask during training. But it didn't change the outcome.",
"Would you like to share you entire code, so we can avoid the difference between your code and mine :-)\r\n(the one you have with global attention)",
"I had added the line `model_inputs[\"global_attention_mask\"] = [[1 if y == tokenizer.cls_token_id else 0 for y in x] for x in model_inputs[\"input_ids\"]]` into the code after https://github.com/huggingface/transformers/blob/0d0aada56444ad554021947addaa035feb55948f/examples/pytorch/summarization/run_summarization.py#L536",
"Hi @ratishsp After a long investigation, although not fully understanding the model behavior, here is the observation\r\n\r\n`led-large` (without further finetuning) will produce the same LM logits for `[2, 0]`, i.e. the tokens `[<eos>, <bos>]` (or say `[</s>, <s>]`), no matter what the encoder input sequences are (at least for `xsum` datasets), and therefore the same predicted token ids. I provide the script to confirm this below, and the results in the next 2 comments. The results for `led-large` is [here](https://github.com/huggingface/transformers/issues/18190#issuecomment-1216585363).\r\n\r\nDuring training however, `<eos>` is required to predict the label `<bos>`, and `<bos>` is required to predict the first **non-special** tokens in a sentence. Since they have the same logits, it causes the training difficulty , and ends up learning\r\n```bash\r\n<eos> --> <bos>\r\n<bos> --> <bos>\r\n```\r\n(as both have the same predicted logits).\r\n\r\nThere is one related discussion [here](https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564). The solution is to `perturb the representation of bos_token`. I haven't tried it yet, but it makes sense to me.\r\n\r\nHowever, why `led-large` (or say, `bart-large`) has this issue is still mysterious to me!\r\n\r\n\r\n## To verify\r\n\r\nTo have more information printed\r\n```bash\r\ngit fetch https://github.com/ydshieh/transformers.git check_gen:check_gen\r\ngit checkout check_gen\r\n```\r\n\r\nRun this script (inside `/examples/pytorch/summarization/`)\r\n```python\r\nimport numpy as np\r\nimport torch\r\n\r\nfrom transformers import AutoTokenizer\r\nfrom transformers import LEDModel, LEDForConditionalGeneration\r\n\r\nimport datasets\r\n\r\nsummarization_name_mapping = {\r\n \"cnn_dailymail\": (\"article\", \"highlights\"),\r\n \"xsum\": (\"document\", \"summary\"),\r\n}\r\n\r\nckpt_led_base = \"allenai/led-base-16384\"\r\nckpt_led_large = \"allenai/led-large-16384\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(ckpt_led_base)\r\nmodel = LEDForConditionalGeneration.from_pretrained(ckpt_led_base)\r\n\r\ndef get_dataset(dataset_name):\r\n\r\n max_source_length = 1024\r\n max_target_length = 128\r\n padding = True\r\n ignore_pad_token_for_loss = True\r\n padding = \"max_length\"\r\n prefix = \"\"\r\n max_train_samples = 1024\r\n max_eval_samples = 256\r\n preprocessing_num_workers = 8\r\n\r\n raw_datasets = datasets.load_dataset(dataset_name)\r\n\r\n text_column, summary_column = summarization_name_mapping[dataset_name]\r\n\r\n def foo(x):\r\n\r\n if x == tokenizer.cls_token_id:\r\n return 1\r\n elif x == tokenizer.pad_token_id:\r\n return -1\r\n else:\r\n return 0\r\n\r\n def preprocess_function(examples):\r\n # remove pairs where at least one record is None\r\n\r\n inputs, targets = [], []\r\n for i in range(len(examples[text_column])):\r\n if examples[text_column][i] and examples[summary_column][i]:\r\n inputs.append(examples[text_column][i])\r\n targets.append(examples[summary_column][i])\r\n\r\n inputs = [prefix + inp for inp in inputs]\r\n model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)\r\n\r\n # Tokenize targets with the `text_target` keyword argument\r\n labels = tokenizer(text_target=targets, max_length=max_target_length, padding=padding, truncation=True)\r\n\r\n # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore\r\n # padding in the loss.\r\n if padding == \"max_length\" and ignore_pad_token_for_loss:\r\n 
labels[\"input_ids\"] = [\r\n [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels[\"input_ids\"]\r\n ]\r\n\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n\r\n if model.__class__.__name__.startswith(\"LED\"):\r\n model_inputs[\"global_attention_mask\"] = [[foo(y) for y in x] for x in model_inputs[\"input_ids\"]]\r\n\r\n decoder_input_ids = model.prepare_decoder_input_ids_from_labels(labels=torch.tensor(model_inputs[\"labels\"], dtype=torch.int32))\r\n decoder_input_ids = decoder_input_ids.numpy().tolist()\r\n model_inputs[\"decoder_input_ids\"] = decoder_input_ids\r\n\r\n return model_inputs\r\n\r\n train_dataset = raw_datasets[\"train\"]\r\n eval_dataset = raw_datasets[\"validation\"]\r\n\r\n train_dataset = train_dataset.select(range(max_train_samples))\r\n eval_dataset = eval_dataset.select(range(max_eval_samples))\r\n\r\n train_dataset = train_dataset.map(\r\n preprocess_function,\r\n batched=True,\r\n num_proc=preprocessing_num_workers,\r\n remove_columns=['document', 'summary', 'id'],\r\n desc=\"Running tokenizer on train dataset\",\r\n )\r\n eval_dataset = eval_dataset.map(\r\n preprocess_function,\r\n batched=True,\r\n num_proc=preprocessing_num_workers,\r\n remove_columns=['document', 'summary', 'id'],\r\n desc=\"Running tokenizer on validation dataset\",\r\n )\r\n\r\n return train_dataset, eval_dataset\r\n\r\ntrain_dataset, eval_dataset = get_dataset(\"xsum\")\r\nfor idx, eval_example in enumerate(eval_dataset):\r\n\r\n eval_example.pop(\"labels\")\r\n\r\n decoder_input_ids = eval_example.pop(\"decoder_input_ids\")\r\n eval_example[\"decoder_input_ids\"] = [2, 0] + decoder_input_ids[2:5]\r\n\r\n for k in eval_example:\r\n eval_example[k] = torch.tensor([eval_example[k]], dtype=torch.int32)\r\n\r\n model.led.decoder.buffer = {}\r\n output = model(**eval_example)\r\n\r\n print(f\"example idx: {idx}\")\r\n\r\n for k in model.led.decoder.buffer:\r\n h = model.led.decoder.buffer[k]\r\n if not isinstance(h, dict):\r\n pass\r\n # print(f'max diff in {k}: {np.amax(np.abs((h[0, 0] - h[0, 1]).detach().to(\"cpu\").numpy()))}')\r\n else:\r\n layer_idx = k\r\n buffer = h\r\n for name in buffer:\r\n h = buffer[name]\r\n #print(f'layer {layer_idx} - {name}: max <eos> = {torch.max(torch.abs(h[0, 0]))}')\r\n #print(f'layer {layer_idx} - {name}: max <bos> = {torch.max(torch.abs(h[0, 1]))}')\r\n #print(f'layer {layer_idx} - {name}: max <eos> dim = {torch.argmax(torch.abs(h[0, 0]), dim=-1)}')\r\n #print(f'layer {layer_idx} - {name}: max <bos> dim = {torch.argmax(torch.abs(h[0, 1]), dim=-1)}')\r\n #top = torch.topk(torch.abs(h[0, 0]), k=8, dim=-1, largest=True, sorted=True)\r\n #print(f'layer {layer_idx} - {name}: top <eos> indices = {top.indices}')\r\n #print(f'layer {layer_idx} - {name}: top <eos> values = {top.values}')\r\n #print(f'layer {layer_idx} - {name}: var <eos> = {torch.var(h[0, 0], unbiased=False)}')\r\n #print(f'layer {layer_idx} - {name}: var <bos> = {torch.var(h[0, 1], unbiased=False)}')\r\n if \"hidden_states: ffn: final_layer_norm\" in name:\r\n print(f'max diff in layer {layer_idx} - {name}: {np.amax(np.abs((h[0, 0] - h[0, 1]).detach().to(\"cpu\").numpy()))}')\r\n print(f\"-\" * 20)\r\n\r\n print(f'max diff in lm logits: {np.amax(np.abs((output.logits[0, 0] - output.logits[0, 1]).detach().to(\"cpu\").numpy()))}')\r\n print(f\"-\" * 20)\r\n\r\n pred = torch.argmax(output.logits, dim=-1).detach().to(\"cpu\").numpy().tolist()\r\n print(f'predidcted token ids: {pred}')\r\n\r\n print(f\"=\" * 40)\r\n\r\n if idx >= 10:\r\n break\r\n```",
"For `led-large`: note the difference is the maximal value of the absolute value of the hidden states between the 0-th position and 1-th position. More precisely: `np.amax(np.abs(h[0, 0] - h[0, 1])`.\r\n\r\nAs you can see, no matter what the encoder input sequences are, the difference becomes really small along the layer depth. \r\n\r\n```bash\r\nexample idx: 0\r\nmax diff in layer 0 - hidden_states: ffn: final_layer_norm: 0.029722318053245544\r\nmax diff in layer 1 - hidden_states: ffn: final_layer_norm: 0.0003014765679836273\r\nmax diff in layer 2 - hidden_states: ffn: final_layer_norm: 9.097158908843994e-06\r\nmax diff in layer 3 - hidden_states: ffn: final_layer_norm: 2.812594175338745e-07\r\nmax diff in layer 4 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08\r\nmax diff in layer 5 - hidden_states: ffn: final_layer_norm: 4.470348358154297e-08\r\nmax diff in layer 6 - hidden_states: ffn: final_layer_norm: 1.7881393432617188e-07\r\nmax diff in layer 7 - hidden_states: ffn: final_layer_norm: 2.384185791015625e-07\r\nmax diff in layer 8 - hidden_states: ffn: final_layer_norm: 3.725290298461914e-09\r\nmax diff in layer 9 - hidden_states: ffn: final_layer_norm: 2.9802322387695312e-08\r\nmax diff in layer 10 - hidden_states: ffn: final_layer_norm: 1.4901161193847656e-08\r\nmax diff in layer 11 - hidden_states: ffn: final_layer_norm: 1.1920928955078125e-06\r\nmax diff in lm logits: 6.67572021484375e-06\r\npredidcted token ids: [[133, 133, 4913, 815, 19931]]\r\n========================================\r\nexample idx: 1\r\nmax diff in layer 0 - hidden_states: ffn: final_layer_norm: 0.02129286527633667\r\nmax diff in layer 1 - hidden_states: ffn: final_layer_norm: 0.0002829432487487793\r\nmax diff in layer 2 - hidden_states: ffn: final_layer_norm: 8.203089237213135e-06\r\nmax diff in layer 3 - hidden_states: ffn: final_layer_norm: 2.6635825634002686e-07\r\nmax diff in layer 4 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08\r\nmax diff in layer 5 - hidden_states: ffn: final_layer_norm: 4.470348358154297e-08\r\nmax diff in layer 6 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08\r\nmax diff in layer 7 - hidden_states: ffn: final_layer_norm: 2.384185791015625e-07\r\nmax diff in layer 8 - hidden_states: ffn: final_layer_norm: 4.76837158203125e-07\r\nmax diff in layer 9 - hidden_states: ffn: final_layer_norm: 2.9802322387695312e-08\r\nmax diff in layer 10 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08\r\nmax diff in layer 11 - hidden_states: ffn: final_layer_norm: 3.814697265625e-06\r\nmax diff in lm logits: 1.0013580322265625e-05\r\npredidcted token ids: [[448, 448, 40741, 3463, 1034]]\r\n========================================\r\nexample idx: 2\r\nmax diff in layer 0 - hidden_states: ffn: final_layer_norm: 0.015403840690851212\r\nmax diff in layer 1 - hidden_states: ffn: final_layer_norm: 0.000291973352432251\r\nmax diff in layer 2 - hidden_states: ffn: final_layer_norm: 9.2238187789917e-06\r\nmax diff in layer 3 - hidden_states: ffn: final_layer_norm: 4.172325134277344e-07\r\nmax diff in layer 4 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08\r\nmax diff in layer 5 - hidden_states: ffn: final_layer_norm: 2.9802322387695312e-08\r\nmax diff in layer 6 - hidden_states: ffn: final_layer_norm: 1.1920928955078125e-07\r\nmax diff in layer 7 - hidden_states: ffn: final_layer_norm: 7.450580596923828e-09\r\nmax diff in layer 8 - hidden_states: ffn: final_layer_norm: 3.725290298461914e-09\r\nmax diff in layer 9 - hidden_states: ffn: 
final_layer_norm: 5.960464477539063e-08\r\nmax diff in layer 10 - hidden_states: ffn: final_layer_norm: 5.960464477539063e-08\r\nmax diff in layer 11 - hidden_states: ffn: final_layer_norm: 4.76837158203125e-06\r\nmax diff in lm logits: 1.1444091796875e-05\r\npredidcted token ids: [[0, 0, 385, 9, 6912]]\r\n========================================\r\n```",
"For `led-base`.\r\n\r\nNote that `lm_logits` have a significant difference in the range `[20, 30]`.\r\n\r\n```bash\r\nmax diff in layer 0 - hidden_states: ffn: final_layer_norm: 9.92125129699707\r\nmax diff in layer 1 - hidden_states: ffn: final_layer_norm: 6.954092502593994\r\nmax diff in layer 2 - hidden_states: ffn: final_layer_norm: 8.275293350219727\r\nmax diff in layer 3 - hidden_states: ffn: final_layer_norm: 13.49088191986084\r\nmax diff in layer 4 - hidden_states: ffn: final_layer_norm: 4.469869613647461\r\nmax diff in layer 5 - hidden_states: ffn: final_layer_norm: 29.27507972717285\r\nmax diff in lm logits: 26.215885162353516\r\npredidcted token ids: [[0, 133, 12, 815, 5142]]\r\n========================================\r\nexample idx: 1\r\nmax diff in layer 0 - hidden_states: ffn: final_layer_norm: 9.919170379638672\r\nmax diff in layer 1 - hidden_states: ffn: final_layer_norm: 6.953605651855469\r\nmax diff in layer 2 - hidden_states: ffn: final_layer_norm: 8.259047508239746\r\nmax diff in layer 3 - hidden_states: ffn: final_layer_norm: 13.197162628173828\r\nmax diff in layer 4 - hidden_states: ffn: final_layer_norm: 4.224005699157715\r\nmax diff in layer 5 - hidden_states: ffn: final_layer_norm: 29.185691833496094\r\nmax diff in lm logits: 28.350433349609375\r\npredidcted token ids: [[0, 846, 40741, 3463, 3449]]\r\n========================================\r\nexample idx: 2\r\nmax diff in layer 0 - hidden_states: ffn: final_layer_norm: 9.921760559082031\r\nmax diff in layer 1 - hidden_states: ffn: final_layer_norm: 6.953545570373535\r\nmax diff in layer 2 - hidden_states: ffn: final_layer_norm: 8.30044937133789\r\nmax diff in layer 3 - hidden_states: ffn: final_layer_norm: 13.065882682800293\r\nmax diff in layer 4 - hidden_states: ffn: final_layer_norm: 3.919126510620117\r\nmax diff in layer 5 - hidden_states: ffn: final_layer_norm: 28.759159088134766\r\nmax diff in lm logits: 26.200252532958984\r\npredidcted token ids: [[0, 35731, 385, 9, 6912]]\r\n========================================\r\n```",
"Hmm that's very interesting. A couple of pointers that might help:\r\n\r\n1. `bart-large` always forces the second token to be the BOS token during generation (see https://huggingface.co/facebook/bart-large/blob/main/config.json#L27) where as led-large doesn't. However `led-large` should probably do this as well since `led-large` is based of `bart-large`\r\n2. IIRC `led-large` has exactly the same weights as `bart-large`. The only difference is that `led-large` has some additionally randomely initialized layers for the global attention\r\n3. It might help to look into the original training script to see how led was fine-tuned for summarization: https://github.com/allenai/longformer/blob/master/scripts/summarization.py \r\n\r\nAlso @ibeltagy - have you seen something like the above already by any chance? \r\n",
"Also one last comment, note that just because `\"</s> <s>\"` always predicts the same token regardless of the encoder outputs doesn't mean training is necessarily broken. During training all `decoder_input_ids` start with `</s><s>` and then the model should learn the correct behavior, but it might indeed be a good idea to perturb the bos token.\r\n\r\nIn general, I wouldn't recommend using both `</s>` and `<s>` as prompt tokens for the `decoder_input_ids` but that's how fairseq has done it with BART",
"For the record: `bart-large` seems learned to predict the first token after `<s>` in the encoder input sequence, for both the first two decoder tokens `[</s>, <s>]`. I provide a script to confirm this in [this comment].(https://github.com/huggingface/transformers/issues/15559#issuecomment-1217894635).\r\n\r\nFor `led-large-16384`, same situation. But when this is not the case, it gives `[<s>, <s>]`. This happens quite often, and I think it explains why we get `[</s>, <s>, <s>, <s>, ...]` after finetuning.\r\n\r\n",
"@ratishsp \r\n\r\nI could confirm that the trick of perturbing the `bos` token's embedding works for `led-large-16384`. You can simply adding the following block after the line https://github.com/huggingface/transformers/blob/49e44b216b2559e34e945d5dcdbbe2238859e29b/examples/pytorch/summarization/run_summarization.py#L425\r\nwould work.\r\n\r\nPlease let us know if this works for you!\r\n\r\nHere is the code to add:\r\n```python\r\n import torch\r\n from transformers.modeling_utils import _load_state_dict_into_model\r\n\r\n d = model.state_dict()\r\n d[\"led.decoder.embed_tokens.weight\"][0] = d[\"led.decoder.embed_tokens.weight\"][0] + torch.randn(1024)\r\n\r\n _load_state_dict_into_model(model, d, \"led.\")\r\n\r\n```",
"Hi @ratishsp Hope the above solution works for you. I am going to close this issue, but if you have further question, don't hesitate to reopen.",
"Hi @ydshieh, sorry for the late reply... I had got busy with other stuff.\r\nI tried the above fix of perturbing the weights for bos. But it didn't work for me. ",
"@ratishsp Sorry to hear that, I am not sure what I can help further here, as the issue is found and a fix is provided which worked on my side (and some other users previously).\r\n\r\nIf you can open a new working branch, add your fix there and share it with us + with the list of training arguments used in your latest attempt, we could try to find some time to see if there are other things go wrong there.\r\n\r\n",
"Hi @ydshieh I have followed an identical setup as mentioned at the beginning of the thread but with latest version of Transformers repo. Sure, I can open a branch, add a fix and share with you.\r\nMeanwhile, will it be possible for you to share tensorboard log of your run similar to the one here https://github.com/huggingface/transformers/issues/18190#issuecomment-1189139463?",
"Hi @ratishsp . If you ever try to run it again with a branch that is aimed to share with us, there 2 two fixes to take into account:\r\n\r\nhttps://github.com/huggingface/transformers/issues/18190#issuecomment-1210958506\r\nhttps://github.com/huggingface/transformers/issues/18190#issuecomment-1218408325\r\n\r\nI also strongly suggest that you manually investigate if the bos token embedding is changed before and after this (newly added) line\r\n```python\r\n _load_state_dict_into_model(model, d, \"led.\")\r\n```\r\n\r\nI didn't keep the training log - I tried the fix with a training up to around 2K (or 3K maybe) steps, and didn't see this `</s><s><s>...` anymore (while I tried without the fix, it did occur as you described)\r\n\r\nOnce you have the code (with the fixes mentioned above that you will add), we can see if there is some mistake. And if you still get `</s><s><s>...`, I will try to run it myself. (BTW, I won't be available next week).",
"Hi @ydshieh, I have created a branch with fixes at https://github.com/ratishsp/transformers-fix.\r\nI trained two models: LED-Base and LED-Large with the identical code. The training commands are the same as given earlier in the thread https://github.com/huggingface/transformers/issues/18190#issue-1308379298. Below tensorboard logs show that the issue still exists. \r\n\r\n\r\n\r\n",
"Thanks @ratishsp . Will take a look once I am back!",
"Hi @ratishsp \r\n\r\nAs promised, I checked. You are right, perturbing bos token embedding is not helping for the checkpoint `allenai/led-large-16384`. (well, it helps a bit at the first few iterations, but once the steps continue, we get the same `</s><s><s>`.)\r\n\r\nI ran out of the ideas, the only thing works is to avoid using `</s> <s> <tok_1> <tok_2> ...` when preparing `labels`. Instead, just using `</s> <tok_1> <tok_2> ...`. To do so, add the following block after the line\r\nhttps://github.com/huggingface/transformers/blob/4dd784c32f76fb8285f205b94e2a6ebde731a1cd/examples/pytorch/summarization/run_summarization.py#L536\r\n\r\n### To add\r\n```python\r\n # Originally, the `labels` are of the form: </s> <s> ..., which causes trouble for finetuning some checkpoints.\r\n # Let's try to remove <s> (`bos` token) in `labels`, i.e. keep only the decoder_start_token (here </s>).\r\n\r\n model_inputs[\"labels\"] = [x[1:] for x in model_inputs[\"labels\"]]\r\n```\r\n\r\nOr you can simplify using my branch [debug_led_large_bad_generation](https://github.com/ydshieh/transformers/tree/debug_led_large_bad_generation) - this will save the generations after each evaluation.\r\n\r\nYou can verify the effect with (and without) this change by running a tiny training (with very few examples) below:\r\n\r\n```bash\r\n./run_summarization.py \\\r\n --model_name_or_path allenai/led-large-16384 \\\r\n --dataset_name xsum \\\r\n --output_dir ./led-large-16384-xsum-no-bos-dummy-1 \\\r\n --overwrite_output_dir \\\r\n --logging_dir ./led-large-16384-xsum-no-bos-dummy-logs-1 \\\r\n --do_train \\\r\n --do_eval \\\r\n --predict_with_generate \\\r\n --report_to tensorboard \\\r\n --load_best_model_at_end \\\r\n --greater_is_better True \\\r\n --metric_for_best_model rougeL \\\r\n --per_device_train_batch_size=1 \\\r\n --per_device_eval_batch_size=4 \\\r\n --evaluation_strategy steps \\\r\n --max_steps 500 \\\r\n --max_train_samples 500 \\\r\n --max_eval_samples 100 \\\r\n --logging_steps 100 \\\r\n --eval_steps 100 \\\r\n --save_steps 100 \\\r\n --save_total_limit 10 \\\r\n --generation_max_length 128 \\\r\n --num_beams 3\r\n```\r\n\r\nLet me know if you can get normal results with this change 🙏 Thank you!",
"Hi @ydshieh, it works! Thanks. \r\n",
"@ratishsp I am super glad it also works for you 🤗 !\r\n\r\nI will discuss with my colleagues where to put this information in our documentation, so there will be more clear reference to this issue and workaround. "
] | 1,658
| 1,669
| 1,665
|
NONE
| null |
### System Info
- `transformers` version: 4.20.0.dev0
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-centos-8.6-Green_Obsidian
- Python version: 3.7.13
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ydshieh
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
OUTPUT_DIR=/home/ratish/project
python -m torch.distributed.launch --nproc_per_node=1 examples/pytorch/summarization/run_summarization.py \
--model_name_or_path allenai/led-large-16384 \
--do_train \
--do_eval \
--dataset_name xsum \
--output_dir ${OUTPUT_DIR} \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate \
--overwrite_output_dir \
--logging_dir logs \
--evaluation_strategy steps \
--eval_steps 100 \
--logging_steps 100 \
--report_to tensorboard \
--save_total_limit 5 \
--save_steps 100 \
--load_best_model_at_end \
--greater_is_better True \
--metric_for_best_model rougeL \
--max_eval_samples 100 \
--num_beams 3
```
The logs show that at checkpoint 1800 the ROUGE scores drop to zero.
`{'eval_loss': 2.172360897064209, 'eval_rouge1': 0.0, 'eval_rouge2': 0.0, 'eval_rougeL': 0.0, 'eval_rougeLsum': 0.0, 'eval_gen_len': 20.0, 'eval_runtime': 10.2823, 'eval_samples_per_second': 9.725, 'eval_steps_per_second': 2.431, 'epoch': 0.04}`
I evaluate the model output using the function below:
```
def generate_output():
    import torch
    from transformers import LEDTokenizer, LEDForConditionalGeneration
    MODEL = "/home/ratish/checkpoint-1800"
    model = LEDForConditionalGeneration.from_pretrained(MODEL)
    tokenizer = LEDTokenizer.from_pretrained(MODEL)
    ARTICLE_TO_SUMMARIZE = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
    inputs = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors="pt")
    global_attention_mask = torch.zeros_like(inputs)
    global_attention_mask[:, 0] = 1
    summary_ids = model.generate(inputs, global_attention_mask=global_attention_mask, num_beams=3, max_length=32)
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=False, clean_up_tokenization_spaces=False))
```
It produces the output `</s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s>`
### Expected behavior
The model should produce the summary of the news article.
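For the record, the workaround that resolved this thread (see the comments above) is to drop the leading `<s>` from the tokenized labels so that `</s>` alone remains as the decoder start prompt. A minimal sketch, assuming a `transformers` version that supports the `text_target` argument:
```
from transformers import LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-large-16384")
labels = tokenizer(text_target=["A short reference summary."])["input_ids"]
# Tokenized targets look like <s> ... </s>. With the usual label shift the
# decoder's first two tokens become </s> <s>, which led-large maps to nearly
# identical logits, so training collapses to repeated <s> predictions.
labels = [x[1:] for x in labels]
print(labels)
```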
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18190/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18189
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18189/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18189/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18189/events
|
https://github.com/huggingface/transformers/issues/18189
| 1,308,223,457
|
I_kwDOCUB6oc5N-efh
| 18,189
|
run_summarization_no_trainer
|
{
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Did you run `accelerte config`? What's the result of `accelerate env`?",
"accelerate env\r\n\r\nCopy-and-paste the text below in your GitHub issue\r\n\r\n- `Accelerate` version: 0.10.0\r\n- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.13\r\n- Numpy version: 1.22.3\r\n- PyTorch version (GPU?): 1.12.0 (True)\r\n- `Accelerate` default config:\r\n Not found\r\n\r\naccelerate test\r\n\r\nRunning: accelerate-launch --config_file=None /home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/test_utils/test_script.py\r\nstderr: Traceback (most recent call last):\r\nstderr: File \"/home/arij/anaconda3/envs/sum/bin/accelerate-launch\", line 10, in <module>\r\nstderr: sys.exit(main())\r\nstderr: File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 574, in main\r\nstderr: launch_command(args)\r\nstderr: File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 523, in launch_command\r\nstderr: defaults = load_config_from_file(args.config_file)\r\nstderr: File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/config/config_args.py\", line 45, in load_config_from_file\r\nstderr: with open(config_file, \"r\", encoding=\"utf-8\") as f:\r\nstderr: FileNotFoundError: [Errno 2] No such file or directory: '/home/arij/.cache/huggingface/accelerate/default_config.yaml'\r\nTraceback (most recent call last):\r\n File \"/home/arij/anaconda3/envs/sum/bin/accelerate\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py\", line 43, in main\r\n args.func(args)\r\n File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/test.py\", line 52, in test_command\r\n result = execute_subprocess_async(cmd, env=os.environ.copy())\r\n File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/test_utils/testing.py\", line 276, in execute_subprocess_async\r\n raise RuntimeError(\r\nRuntimeError: 'accelerate-launch --config_file=None /home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/test_utils/test_script.py' failed with returncode 1\r\n\r\nThe combined stderr from workers follows:\r\nTraceback (most recent call last):\r\n File \"/home/arij/anaconda3/envs/sum/bin/accelerate-launch\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 574, in main\r\n launch_command(args)\r\n File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py\", line 523, in launch_command\r\n defaults = load_config_from_file(args.config_file)\r\n File \"/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/config/config_args.py\", line 45, in load_config_from_file\r\n with open(config_file, \"r\", encoding=\"utf-8\") as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/arij/.cache/huggingface/accelerate/default_config.yaml'\r\naccelerte config\r\naccelerte: command not found\r\n",
"That was a typo, sorry. You need to run `accelerate config` before running `accelerate launch` and answer the small questionnaire.",
"one of the questions is Do you want to use DeepSpeed? [yes/NO]: \r\nwhat is the better choice here?",
"could you please send any link that helps how to figure the questionaire using deepspeed?",
"Any way these are my steps\r\n\r\n> (sum) arij@dgx3:~/summarization/tutorial$ accelerate config\r\n> In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0\r\n> Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2\r\n> How many different machines will you use (use more than 1 for multi-node training)? [1]: 3\r\n> What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: 9\r\n> What is the IP address of the machine that will host the main process? ###########33(hidden for security)\r\n> What is the port you will use to communicate with the main process? 8887\r\n> Do you want to use DeepSpeed? [yes/NO]: yes\r\n> Do you want to specify a json file to a DeepSpeed config? [yes/NO]: yes\r\n> Please enter the path to the json DeepSpeed config file: \r\n> Do you want to enable `deepspeed.zero.Init` when using ZeRO Stage-3 for constructing massive models? [yes/NO]: \r\n> Which Type of launcher do you want to use [0] pdsh, [1] standard, [2] openmpi, [3] mvapich)? [0]: \r\n> DeepSpeed configures multi-node compute resources with hostfile. Each row is of the format `hostname slots=[num_gpus]`, e.g., `localhost slots=2`; for more information please refer official [documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). Please specify the location of hostfile: \r\n> Do you want to specify exclusion filter string? [yes/NO]: \r\n> Do you want to specify inclusion filter string? [yes/NO]: \r\n> How many GPU(s) should be used for distributed training? [1]:8\r\n> (sum) arij@dgx3:~/summarization/tutorial$ accelerate launch run_summarization_no_trainer.py --model_name_or_path t5-small --dataset_name cnn_dailymail --dataset_config '3.0.0' --source_prefix 'summarize: ' --output_dir output/tst-summarization\r\n> [2022-07-18 20:47:06,728] [WARNING] [runner.py:159:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.\r\n> [2022-07-18 20:47:06,728] [INFO] [runner.py:457:main] cmd = /home/arij/anaconda3/envs/sum/bin/python3.9 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 --no_local_rank run_summarization_no_trainer.py --model_name_or_path t5-small --dataset_name cnn_dailymail --dataset_config 3.0.0 --source_prefix summarize: --output_dir output/tst-summarization\r\n> [2022-07-18 20:47:08,004] [INFO] [launch.py:103:main] WORLD INFO DICT: {'localhost': [0, 1]}\r\n> [2022-07-18 20:47:08,004] [INFO] [launch.py:109:main] nnodes=1, num_local_procs=2, node_rank=0\r\n> [2022-07-18 20:47:08,004] [INFO] [launch.py:122:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})\r\n> [2022-07-18 20:47:08,004] [INFO] [launch.py:123:main] dist_world_size=2\r\n> [2022-07-18 20:47:08,004] [INFO] [launch.py:125:main] Setting CUDA_VISIBLE_DEVICES=0,1\r\n> args: \r\n> \r\n> Namespace(dataset_name='cnn_dailymail', dataset_config_name='3.0.0', train_file=None, validation_file=None, ignore_pad_token_for_loss=True, max_source_length=1024, source_prefix='summarize: ', preprocessing_num_workers=None, overwrite_cache=None, max_target_length=128, val_max_target_length=None, max_length=128, num_beams=None, pad_to_max_length=False, model_name_or_path='t5-small', config_name=None, tokenizer_name=None, text_column=None, summary_column=None, use_slow_tokenizer=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=5e-05, weight_decay=0.0, 
num_train_epochs=3, max_train_steps=None, gradient_accumulation_steps=1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, num_warmup_steps=0, output_dir='output/tst-summarization', seed=None, model_type=None, push_to_hub=False, hub_model_id=None, hub_token=None, checkpointing_steps=None, resume_from_checkpoint=None, with_tracking=False, report_to='all')\r\n> [2022-07-18 20:47:30,042] [INFO] [launch.py:210:main] Process 1054725 exits successfully.\r\n> args: \r\n> \r\n> Namespace(dataset_name='cnn_dailymail', dataset_config_name='3.0.0', train_file=None, validation_file=None, ignore_pad_token_for_loss=True, max_source_length=1024, source_prefix='summarize: ', preprocessing_num_workers=None, overwrite_cache=None, max_target_length=128, val_max_target_length=None, max_length=128, num_beams=None, pad_to_max_length=False, model_name_or_path='t5-small', config_name=None, tokenizer_name=None, text_column=None, summary_column=None, use_slow_tokenizer=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=5e-05, weight_decay=0.0, num_train_epochs=3, max_train_steps=None, gradient_accumulation_steps=1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, num_warmup_steps=0, output_dir='output/tst-summarization', seed=None, model_type=None, push_to_hub=False, hub_model_id=None, hub_token=None, checkpointing_steps=None, resume_from_checkpoint=None, with_tracking=False, report_to='all')\r\n> [2022-07-18 20:47:39,051] [INFO] [launch.py:210:main] Process 1054726 exits successfully.\r\n\r\n\r\nStill something wrong)",
"I think there should be full instructions on how to use accelerate , it is not clear. Thanks for your reply\r\n",
"Interesting that I was facing the exact same issue right now. The fix for me was to pass the local config I created.\r\n\r\n`accelerate launch --config_file <your config file> your_file.py`",
"@soumyasanyal could you please tell the steps I am absolutely new) or post your config",
"Sure! I just followed the steps in this [link](https://huggingface.co/docs/accelerate/quicktour). The steps I followed are:\r\n```\r\naccelerate config --config_file ./accelerate.yaml --> answer all the questions in the questionnaire\r\naccelerate test --config_file ./accelerate.yaml\r\naccelerate launch --config_file ./accelerate.yaml script.py\r\n```\r\n\r\nMy config file is as follows (but it can change as per your requirements. I just wanted to run a job on 8 GPUs in a single node, without DeepSpeed or mixed precision):\r\n```\r\ncompute_environment: LOCAL_MACHINE\r\ndeepspeed_config: {}\r\ndistributed_type: MULTI_GPU\r\nfsdp_config: {}\r\nmachine_rank: 0\r\nmain_process_ip: null\r\nmain_process_port: null\r\nmain_training_function: main\r\nmixed_precision: 'no'\r\nnum_machines: 1\r\nnum_processes: 8\r\nuse_cpu: false\r\n```\r\n\r\nI was previously running `accelerate launch script.py` without mentioning the config file when I faced the issue that you reported here.\r\n\r\nAlso FYI, note that the doc says that integration of accelerate with DeepSpeed is [experimental](https://huggingface.co/docs/accelerate/quicktour#deepspeed).",
"@sgugger sorry for reopenning the issue while using this script using T5 over cnn-dialy dataset\r\nunder this configuration\r\n\r\n\r\n> compute_environment: LOCAL_MACHINE\r\n> deepspeed_config: {}\r\n> distributed_type: MULTI_GPU\r\n> fsdp_config: {}\r\n> machine_rank: 0\r\n> main_process_ip: null\r\n> main_process_port: null\r\n> main_training_function: main\r\n> mixed_precision: 'no'\r\n> num_machines: 1\r\n> num_processes: 2\r\n> use_cpu: false\r\n\r\n\r\n I got the error \r\n```\r\nAttributeError: 'Accelerator' object has no attribute 'gather_for_metrics'\r\n generated_tokens, labels = accelerator.gather_for_metrics((generated_tokens, labels))\r\nAttributeError: 'Accelerator' object has no attribute 'gather_for_metrics'\r\n generated_tokens, labels = accelerator.gather_for_metrics((generated_tokens, labels))\r\nAttributeError: 'Accelerator' object has no attribute 'gather_for_metrics'\r\n```\r\nFor this error replacing gather_for_metrics with just `gather` as old version of this code, gives me zero list of gathered `decoded_preds, decoded_labels` . and gather for metrics did not work.\r\n\r\nwith this configuration \r\n\r\n> compute_environment: LOCAL_MACHINE\r\n> deepspeed_config:\r\n> gradient_accumulation_steps: 1\r\n> offload_optimizer_device: none\r\n> offload_param_device: none\r\n> zero3_init_flag: false\r\n> zero_stage: 2\r\n> distributed_type: DEEPSPEED\r\n> fsdp_config: {}\r\n> machine_rank: 0\r\n> main_process_ip: null\r\n> main_process_port: null\r\n> main_training_function: main\r\n> mixed_precision: 'no'\r\n> num_machines: 1\r\n> num_processes: 8\r\n> use_cpu: false\r\n\r\n\r\nI get this error\r\n\r\n> RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 7; 10.92 GiB total capacity; 9.83 GiB already allocated; 293.50 MiB free; 9.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n> ret = input.softmax(dim)\r\n> RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 1; 10.92 GiB total capacity; 9.83 GiB already allocated; 245.50 MiB free; 9.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n> ",
"@muellerzr ",
"@Arij-Aladel in this case you should reduce your batch size most likely, but I'll be running it myself in just a moment",
"I did already still problem of not finding gather_for_metric attribute\r\n",
"You can simply run the example as is",
"Thanks @Arij-Aladel, I think I have found the fix. Can you try running the following training script on your end to verify? (I have wget to make your life easy):\r\n\r\n(Also as mentioned in the other post please make sure you have a pypi version of accelerate >= 0.12.0 to run the scripts, a PR was just merged yesterday to make them a requirement for all these scripts)\r\n\r\n```bash\r\nwget https://raw.githubusercontent.com/huggingface/transformers/muellerzr-fix-no-trainer/examples/pytorch/summarization/run_summarization_no_trainer.py\r\n```",
"@muellerzr thanks for your response! As I understand your fix is just deleting this line\r\n\r\n> 706 decoded_preds, decoded_labels = accelerator.gather_for_metrics(decoded_preds, decoded_labels)\r\n?? \r\n\r\nmy life with wget was not easier)))\r\n\r\n> wget https://raw.githubusercontent.com/huggingface/transformers/muellerzr-fix-no-trainer/examples/pytorch/summarization/run_summarization_no_trainer.py\r\n> --2022-11-17 10:45:24-- https://raw.githubusercontent.com/huggingface/transformers/muellerzr-fix-no-trainer/examples/pytorch/summarization/run_summarization_no_trainer.py\r\n> Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.108.133, ...\r\n> Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.\r\n> HTTP request sent, awaiting response... 404 Not Found\r\n> 2022-11-17 10:45:24 ERROR 404: Not Found. ",
"\r\nReally I do not know what is wrong with this script .....",
"@Arij-Aladel yes the fix got merged yesterday, you can find it here: https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py\r\n\r\nI would highly recommend doing `pip install -r transformers/examples/pytorch/summarization/requirements.txt -U` (the txt file here: https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/requirements.txt) to avoid these dependency issues you have been struggling with as the script ran just fine for me.",
"\r\nAfter \r\n\r\n> pip install -r transformers/examples/pytorch/summarization/requirements.txt -U\r\n\r\n :)",
"Ok seems it was package installation issue after your fix, I have uninstalled all packages then reinstall packages according to requirements file. It works now thanks @muellerzr ",
"Great! Can this be closed now @Arij-Aladel? :) ",
"Yes , thanks . I am closing it."
] | 1,658
| 1,668
| 1,668
|
NONE
| null |
@sgugger Hello! I just tried to run the code to explore this example https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py
this is my yml file to build the env
> name: sum
>
> channels:
> - pytorch
> - conda-forge
> - defaults
>
> dependencies:
> - jupyterlab
> - pip
> - python=3.9
> - pytorch
> - tensorboard
> - torchaudio
> - torchvision
> - tqdm
> - tokenizers
> - prettytable
> - einops
> - matplotlib
> - accelerate
> - datasets
> - sentencepiece != 0.1.92
> - protobuf
> - nltk
> - py7zr
> - transformers
>
then pip install rouge-score
after that I simply ran the command
`accelerate launch run_summarization_no_trainer.py --model_name_or_path t5-small --dataset_name cnn_dailymail --dataset_config '3.0.0' --source_prefix 'summarize: ' --output_dir output/tst-summarization`
and got the error
> Traceback (most recent call last):
> File "/home/arij/anaconda3/envs/sum/bin/accelerate", line 10, in <module>
> sys.exit(main())
> File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 43, in main
> args.func(args)
> File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py", line 568, in launch_command
> simple_launcher(args)
> File "/home/arij/anaconda3/envs/sum/lib/python3.9/site-packages/accelerate/commands/launch.py", line 235, in simple_launcher
> mixed_precision = PrecisionType(args.mixed_precision.lower())
> AttributeError: 'NoneType' object has no attribute 'lower'
How to fix it?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18189/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18188
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18188/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18188/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18188/events
|
https://github.com/huggingface/transformers/pull/18188
| 1,308,189,561
|
PR_kwDOCUB6oc47lLCp
| 18,188
|
Skip test_multi_gpu_data_parallel_forward for BEiT and Data2VecVision
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
Similar to #17890 and #17864, BEiT and `Data2VecVision` use `add_module`, which causes problems for this test.
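A hedged sketch of the shape of such a skip (the class name and reason string below are assumptions for illustration, not the actual diff):
```python
import unittest

# Sketch (assumed shape of the fix, simplified): skip the common
# multi-GPU DataParallel test for models that register layers via
# `add_module`, which breaks module replication in `nn.DataParallel`.
class BeitModelTest(unittest.TestCase):
    @unittest.skip(reason="BEiT uses `add_module`, which is incompatible with `nn.DataParallel`")
    def test_multi_gpu_data_parallel_forward(self):
        pass

if __name__ == "__main__":
    unittest.main()
```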
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18188/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18188",
"html_url": "https://github.com/huggingface/transformers/pull/18188",
"diff_url": "https://github.com/huggingface/transformers/pull/18188.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18188.patch",
"merged_at": 1658325284000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18187
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18187/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18187/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18187/events
|
https://github.com/huggingface/transformers/pull/18187
| 1,308,136,451
|
PR_kwDOCUB6oc47k_n0
| 18,187
|
fix typo inside bloom documentation
|
{
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #18178
As @rhvaz noticed, the current global checkpoint variable in the Bloom documentation doesn't produce a working snippet. This PR fixes the name of the checkpoint.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@sgugger and @younesbelkada, if you want to have a look :slightly_smiling_face:
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18187/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18187/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18187",
"html_url": "https://github.com/huggingface/transformers/pull/18187",
"diff_url": "https://github.com/huggingface/transformers/pull/18187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18187.patch",
"merged_at": 1658159032000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18186
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18186/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18186/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18186/events
|
https://github.com/huggingface/transformers/issues/18186
| 1,308,099,601
|
I_kwDOCUB6oc5N-AQR
| 18,186
|
Same training time for different values of sliding window in Longformer
|
{
"login": "allohvk",
"id": 109533797,
"node_id": "U_kgDOBodaZQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109533797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allohvk",
"html_url": "https://github.com/allohvk",
"followers_url": "https://api.github.com/users/allohvk/followers",
"following_url": "https://api.github.com/users/allohvk/following{/other_user}",
"gists_url": "https://api.github.com/users/allohvk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allohvk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allohvk/subscriptions",
"organizations_url": "https://api.github.com/users/allohvk/orgs",
"repos_url": "https://api.github.com/users/allohvk/repos",
"events_url": "https://api.github.com/users/allohvk/events{/privacy}",
"received_events_url": "https://api.github.com/users/allohvk/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"@allohvk Could you provide a (minimal) training script that demonstrates this issue, probably using a dataset from HF Hub if necessary?",
"- I will try to do that. I need to look for a large enough dataset such that the training times show a tangible difference between different scenarios\r\n- I was exploring alternative architectures (like Big Bird) and came across a disclaimer there stating that benefits of sparse attention become visible for only 1024 max-seq-length and beyond. Perhaps Longformer too has this limitation and if so, this becomes just a documentation issue. Maybe Longformer is just not optimized to handle sliding windows of length < 512 and hence shows no tangible difference in execution time for sliding window size=2 or sliding window size = 512.",
"Thanks for the info. @allohvk . BTW, on what task you trained this model? It's also a good idea to double check the way you prepare `global_attention_mask` (if you ever use it).",
"Taking some time to measure model `forward` timing with window size ` [4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]`.\r\nHere is the result. ~~(Will do more measurement when I have time)~~\r\n\r\n### Results (in seconds, for `32` forwards per window size)\r\n```\r\n[47.823847, 46.043184, 46.290201, 46.691181, 48.692595, 53.176747, 62.156357, 81.265798, 149.261874, 273.367502]\r\n```\r\n\r\nIt indeed looks like the advantage appears for longer enough length.\r\n\r\n### Code\r\n```python\r\nimport torch\r\nfrom transformers import LongformerModel, LongformerTokenizer, LongformerConfig\r\n\r\n\r\ndef measure(w_size=512):\r\n\r\n config = LongformerConfig.from_pretrained(\"allenai/longformer-base-4096\")\r\n config.attention_window = w_size\r\n model = LongformerModel.from_pretrained(\"allenai/longformer-base-4096\", config=config)\r\n tokenizer = LongformerTokenizer.from_pretrained(\"allenai/longformer-base-4096\")\r\n\r\n print(model.config.attention_window)\r\n\r\n SAMPLE_TEXT = \" \".join([\"Hello world! \"] * 1000) # long input document\r\n input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1\r\n\r\n attention_mask = torch.ones(\r\n input_ids.shape, dtype=torch.long, device=input_ids.device\r\n ) # initialize to local attention\r\n global_attention_mask = None\r\n\r\n import datetime\r\n\r\n s = datetime.datetime.now()\r\n for i in range(32):\r\n outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask)\r\n e = datetime.datetime.now()\r\n l = (e-s).total_seconds()\r\n print(l)\r\n\r\n sequence_output = outputs.last_hidden_state\r\n pooled_output = outputs.pooler_output\r\n\r\n print(sequence_output.shape)\r\n\r\n return l\r\n\r\nls = [measure(w) for w in [4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]]\r\nprint(ls)\r\n```",
"Thanks @ydshieh for taking time out to test this and I must apologize if I wasn't clear. I was actually referring to the training time taken, by which I mean the time to fine-tune a pre-trained model with additional training data before actually inferring. \r\nI would assume that just like the inference time, the training time too should change based on the length of the sliding window. It should be shorter for a window of (say)128 compared to a window of 512 but the training hours don't change. I will share a small but complete working code with you in a couple of days. \r\n\r\nTo answer your question - I am training a simple classifier using the pertained weights of the base model. I just pass the last state output (768 dim) to a linear regression head. The dataset is actually composed of short NL statements appended with an associated context which are long programming code snippets (something on the lines of what CodeBert does). \r\n\r\nJust as an FYI, I tried BigBird today and had the same issue, the training time taken taken for \"sparse_attention\" is the same as the training time taken for a \"full_attention\" for a 2048 seq_len. \"sparse_attention\" option actually just attends to 2 x 64 + 3 x 64 + 2 x 64 = 448 tokens which is far less than 2048 and should be much much faster.\r\n\r\nYou can choose to close this ticket if you so wish. I will change my dataset to IMDB and share a simulatable code in couple of days.",
"Hi @allohvk , I know you are talking about the training time. However, even with just the `forward` method of the model, we already see that the effect of `window_size` (used for local attentions), i.e. to have linear time instead of quadratic time, will appear **only for large** enough `window_size` (and therefore with long enough sequences).\r\n\r\nFor small `window_size`, some overhead will prevent this much desired effect. From this observation, I am afraid that this holds for training too.\r\n\r\nIf you try to measure this line directly https://github.com/huggingface/transformers/blob/8a61fe023430115bb61ec328a29d35571f4fc2c4/src/transformers/models/longformer/modeling_longformer.py#L820\r\n\r\n(without any other parts, and therefore no other overhead), you will see this linear/quadratic running time.",
"- Got it. I suppose this is very reasonable ramification of using a specialised attention model which handles long sequences. There is no visible benefit in having sliding window size < 128. Possibly it can just be documented somewhere. I will close this as \"not a bug\" for now.\r\n- I may still have a problem with the model taking quadratic time for longer sequences even with default values of sliding window. However will recheck if it is a bug in my training code. If not, will share a simulatable code by which the problem can be replicated. I will open a new ticket for that.",
"Hi there, I have a little question about Longformer's attention_window: \r\n\r\nSince the attentions are only calculated in a window, is it right or not to use the cls patch(first patch of the sequence) to do downstream tasks? I doubt whether the cls patch has relations with patches in other chunks!",
"I don't look into the detail, but `Longformer` has `global attentions` - some tokens attend to all tokens. If cls token is one such token (that attend to all tokens), then it has global information.\r\n\r\n[HF Forum](https://discuss.huggingface.co/) is a better place for this question."
] | 1,658
| 1,706
| 1,658
|
NONE
| null |
### System Info
Transformers: 4.20.1
Python: 3.8.12
Pretrained models & tokenizer from HF: "allenai/longformer-base-4096"
The training time does not change for any value of the sliding window. For example, a sliding window of 2, 512 (the default), or 1024 takes the same training time. This seems to be a bug to me. I need a very small local window span (a sliding window of at most 64 across 4096 tokens), and the model is simply unusable in this scenario due to excessive training time
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
A simple `model.config.attention_window = [SLIDE_WIN_ATTN] * 12`.
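A minimal, self-contained sketch of this setup (the window value below is an illustrative choice; the checkpoint is the one named above):
```python
from transformers import LongformerConfig, LongformerModel

# Sketch of the reproduction: one (small) sliding-window size per layer.
SLIDE_WIN_ATTN = 64  # illustrative value
config = LongformerConfig.from_pretrained("allenai/longformer-base-4096")
config.attention_window = [SLIDE_WIN_ATTN] * 12  # 12 layers in the base model
model = LongformerModel.from_pretrained("allenai/longformer-base-4096", config=config)
```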
### Expected behavior
I would expect the training time to fall roughly quadratically for lower values of SLIDE_WIN_ATTN (say 64) compared to the default of 512. However, the training time in both cases is the same (around 24 hours per epoch). In fact, SLIDE_WIN_ATTN values from 2 to 1024 take roughly the same training time, which should not be the case
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18186/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18185
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18185/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18185/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18185/events
|
https://github.com/huggingface/transformers/pull/18185
| 1,307,972,728
|
PR_kwDOCUB6oc47kcLR
| 18,185
|
Fix BLOOM's softmax for half precisions
|
{
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The two situations you described indeed exist. However, I think there is no **real** necessity to deal with them.\r\n\r\nAs long as there is at least one position to attend to, it doesn't matter if we have mixed `-inf` & `torch.finfo(...).min`, as well as if we have a positive value added to ``torch.finfo(...).min`. As long as the score(s) for the attended position(s) is/are within reasonable range, their scores will dominate the other unattended scores. (This should hold during the inference of a trained model, otherwise the model is broken.)\r\n\r\nAnd for a sequence without any position to attend, nothing we can't do. If we want to go really rigorous, we should multiply the softmaxed-scores by zeros for the unattended places. \r\n",
"@ydshieh Are we sure `attention_scores` can never have very large values ? Because the worst case scenario would be for `attention_scores` to have the biggest value for a hidden token. \r\nAlso by comparing the outputs before and after this PR. It does seem that we get better generations (less repetition). But It needs more testing to be confirmed",
"@NouamaneTazi I don't think there is such guarantee, and what you mentioned is possible. However, it would be great if you can provide some examples for which you find this PR helps to get better results or solve some issues. Thank you!",
"So stupid question: instead of running `+` operator, can we not run `min` with an attention mask that's `torch.finfo(dtype).max` in not masked values and `torch.finfo(dtype).min` in masked values and be done with it? Or `torch.masked_fill(attention_mask, torch.findo(dtype).min)`? ",
"> So stupid question: instead of running `+` operator, can we not run `min` with an attention mask that's `torch.finfo(dtype).max` in not masked values and `torch.finfo(dtype).min` in masked values and be done with it? Or `torch.masked_fill(attention_mask, torch.findo(dtype).min)`?\r\n\r\nI'm not sure what `+`operator are you refering to? Is it after the softmax operation? Or when creating the attention mask?",
"> I'm not sure what `+`operator are you refering to? Is it after the softmax operation? Or when creating the attention mask?\r\n\r\nI think @thomasw21 is talking about the place where an attn. score (where you say it could be positive) is added by the mask.\r\nRegarding @thomasw21 question, it's also a valid approach (it's like a clamp in different order and reducing some ops). The current approach (simply `+`) is probably from the first model(s), like BERT/GPT2.\r\n",
"Should be fixed in this PR: https://github.com/huggingface/transformers/pull/18344"
] | 1,658
| 1,659
| 1,659
|
MEMBER
| null |
This PR aims at fixing the following issues:
- In [this line](https://github.com/huggingface/transformers/blob/6561fbcc6e6d6e1a29fb848dc34710aa25feae78/src/transformers/models/bloom/modeling_bloom.py#L305), if we use minimum dtype values in the attention mask to mask some positions, then after adding a positive score the masked values can come back to life. This PR proposes to use `-inf` in the attention mask instead; only after the addition do we replace the inf values with the respective max/min dtype values
```python
input_dtype = attention_scores.dtype
attn_weights = (attention_scores * self.layer_number) + attention_mask # torch.finfo(torch.float16).min + 1 is no longer torch.finfo(torch.float16).min (no longer hidden)
attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
attention_probs = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(input_dtype)
```
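For illustration, a standalone sketch of the first issue (the values are deliberately extreme and purely illustrative; this is not the BLOOM modeling code):
```python
import torch

# Illustration only: adding a large positive score to torch.finfo(dtype).min
# cancels the mask, so the "hidden" position regains weight after softmax.
dtype = torch.float16
mask_value = torch.finfo(dtype).min                 # -65504.0 for float16
scores = torch.tensor([1.0, 65504.0], dtype=dtype)  # position 1 should be masked
mask = torch.tensor([0.0, mask_value], dtype=dtype)

masked_scores = scores + mask                       # -> [1.0, 0.0]; the mask is cancelled
probs = masked_scores.float().softmax(dim=-1)
print(probs)                                        # tensor([0.7311, 0.2689])
```
In fp32 the same cancellation cannot happen at realistic score magnitudes, which is why the issue is specific to half precisions.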
- Use `torch.clip` instead of `torch.max` to ensure we avoid both `-inf` and `+inf` for softmax
- [Only relevant if we use `torch.finfo(dtype).min` in the attention mask] In [this line](https://github.com/huggingface/transformers/blob/6561fbcc6e6d6e1a29fb848dc34710aa25feae78/src/transformers/models/bloom/modeling_bloom.py#L600), if we use the minimum dtype values, after performing the addition, we get mixed `-inf` and `torch.finfo(dtype).min` in the attention mask
```python
if attention_mask is not None:
# [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
combined_attention_mask = (
        expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask  # this gives `-inf` when we subtract a number from `torch.finfo(dtype).min`
)
```
All tests (including slow ones) are passing. ✅
Related to: https://github.com/huggingface/transformers/pull/17437
Co-authored by: @younesbelkada
cc @ydshieh @stas00
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18185/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18185/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18185",
"html_url": "https://github.com/huggingface/transformers/pull/18185",
"diff_url": "https://github.com/huggingface/transformers/pull/18185.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18185.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/18184
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18184/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18184/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18184/events
|
https://github.com/huggingface/transformers/pull/18184
| 1,307,970,766
|
PR_kwDOCUB6oc47kbwE
| 18,184
|
[From pretrained] Allow download from subfolder inside model repo
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
MEMBER
| null |
# What does this PR do?
Currently it is not possible for `transformers` to download a model that is located inside a subfolder of a repo.
E.g. for diffusion pipelines, a transformer model is often only one part of a pipeline of models, so it makes a lot of sense to save checkpoints inside folders of model repos, see: https://huggingface.co/fusing/latent-diffusion-text2im-large/tree/main/bert
Similarly for Dalle-mini, where one would have a Bart and a VQ-VAE model inside the same repo.
The PR would allow the user to do the following (which fails on master currently):
```py
from transformers import BertModel
BertModel.from_pretrained("fusing/latent-diffusion-text2im-large", revision="d5eab56", subfolder="bert")
```
**🚨🚨 IMPORTANT 🚨🚨**:
This PR adds subfolder loading and saving functionality for both sharded and non-sharded PyTorch checkpoints. It should also work when loading a model with `from_tf=True` or `from_flax=True`; however, this is currently not tested. It would be great if such tests could be added in a follow-up PR.
Also cc @julien-c FYI
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18184/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18184",
"html_url": "https://github.com/huggingface/transformers/pull/18184",
"diff_url": "https://github.com/huggingface/transformers/pull/18184.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18184.patch",
"merged_at": 1658224433000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18183
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18183/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18183/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18183/events
|
https://github.com/huggingface/transformers/pull/18183
| 1,307,969,908
|
PR_kwDOCUB6oc47kbkT
| 18,183
|
Better default for offload_state_dict in from_pretrained
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
Seeing issues arise since the release of big model inference, I realized it's very confusing for users to have to set `offload_state_dict=True` when the device map picked with `device_map="auto"` contains some disk-offloaded weights. Therefore, this PR changes the default to `None`, which resolves to a sensible value automatically (basically `False` if there is no disk offload and `True` otherwise), while still letting the user choose the behavior they want by passing an explicit value.
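For illustration, a usage sketch under the new default (the checkpoint name is an assumption):
```python
from transformers import AutoModelForCausalLM

# Sketch: with device_map="auto", offload_state_dict now defaults to None and
# resolves automatically (True only if some weights are offloaded to disk).
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-7b1",        # illustrative checkpoint
    device_map="auto",
    offload_folder="offload",      # destination for disk-offloaded weights
    # offload_state_dict=True,     # optional explicit override of the new default
)
```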
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18183/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18183",
"html_url": "https://github.com/huggingface/transformers/pull/18183",
"diff_url": "https://github.com/huggingface/transformers/pull/18183.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18183.patch",
"merged_at": 1658152962000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18182
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18182/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18182/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18182/events
|
https://github.com/huggingface/transformers/pull/18182
| 1,307,959,433
|
PR_kwDOCUB6oc47kZR0
| 18,182
|
Fix template for new models in README
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
This PR fixes the template used when `make fix-copies` adds new models to the README. Such models are most likely new and should be documented under main rather than stable.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18182/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18182",
"html_url": "https://github.com/huggingface/transformers/pull/18182",
"diff_url": "https://github.com/huggingface/transformers/pull/18182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18182.patch",
"merged_at": 1658152912000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18181
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18181/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18181/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18181/events
|
https://github.com/huggingface/transformers/issues/18181
| 1,307,913,497
|
I_kwDOCUB6oc5N9S0Z
| 18,181
|
Test summary with previous PyTorch/TensorFlow versions
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
open
| false
| null |
[] |
[
"cc @LysandreJik @sgugger @patrickvonplaten @Rocketknight1 @gante @anton-l @NielsRogge @amyeroberts @alaradirik @stas00 @hollance to have your comments",
"TF 2.3 is quite old by now, and I wouldn't make a special effort to support it. Several nice TF features (like the Numpy-like API) only arrived in TF 2.4, and we're likely to use those a lot in future.",
"Hey @ydshieh, would you have a summary of the failing tests handy? I'm curious to see the reason why there are so many failures for PyTorch as soon as we leave the latest version. I'm quite confident that it's an issue in our tests rather than in our internal code, so seeing the failures would help. Thanks!",
"@LysandreJik I will re-run it. The previous run(s) have huge tables in the reports, and sending to Slack failed (3001 character limit). I finally ran it by disabling those blocks.\r\n\r\nBefore re-running it, I need a approve for #17921 ",
"I ran the past CI again which returns more information. Looking the report for `PyTorch 1.4` quickly, here are some observations:\r\n\r\nThere is one error occurring in almost all models:\r\n\r\n- `from_pretrained`: OSError: Unable to load weights from pytorch checkpoint file for`\r\n - `torch.load`: Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old.\r\n\r\nAnother one also occurs a lot (torchscript tests)\r\n\r\n- (line 625) AttributeError: module 'torch.jit' has no attribute '_state'\r\n\r\nAn error occurs (specifically) to vision models (probably due to the convolution layers)\r\n- (line 97) RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.\r\n\r\n`BART` has 108/106 failures:\r\n\r\n- (line 240) RuntimeError: CUDA error: device-side assert triggered\r\n - Don't know what's wrong here yet\r\n\r\nOthers\r\n\r\n- Other `AttributeError`: (not exhaustive)\r\n - AttributeError: module 'torch' has no attribute 'minimum'\r\n - AttributeError: 'builtin_function_or_method' object has no attribute 'fftn'\r\n - AttributeError: module 'torch' has no attribute 'square'\r\n - AttributeError: module 'torch.nn' has no attribute 'Hardswish'\r\n - AttributeError: module 'torch' has no attribute 'logical_and'\r\n - AttributeError: module 'torch' has no attribute 'pi'\r\n - AttributeError: module 'torch' has no attribute 'multiply'",
"Thanks for the report! Taking a look at the PyTorch versions, here are the dates at which they were releases:\r\n- 1.4: [Jan 16, 2020](https://pypi.org/project/torch/1.4.0/)\r\n- 1.5: [Apr 21, 2020](https://pypi.org/project/torch/1.5.0/)\r\n- 1.6: [Jul 28, 2020](https://pypi.org/project/torch/1.6.0/)\r\n- 1.7: [Oct 27, 2020](https://pypi.org/project/torch/1.7.0/)\r\n- 1.8: [Mar 4, 2021](https://pypi.org/project/torch/1.8.0/)\r\n- 1.9: [Jun 15, 2021](https://pypi.org/project/torch/1.9.0/)\r\n- 1.10: [Oct 21, 2021](https://pypi.org/project/torch/1.10.0/)\r\n- 1.11: [Mar 10, 2021](https://pypi.org/project/torch/1.11.0/)\r\n\r\nMost of the errors in `from_pretrained` seem to come from the zipfile format introduced by PyTorch 1.6. I think this is the most annoying one to patch by far.\r\n\r\nFrom a first look, I'd offer to drop support for all PyTorch version inferior to < 1.6 as these have been released *more than two years ago*.\r\n\r\nDo you have a link to a job containing all these failures? I'd be interested in seeing if the 2342 errors in PyTorch 1.6 are solvable simply or if they will require a significant refactor.",
"The link is [here](https://github.com/huggingface/transformers/actions/runs/2742416113). But since it contains too many jobs (all models x all versions ~= 3200 jobs), it just shows `[Unicorn!] This page is taking too long to load`.\r\n\r\nI can re-run specifically for PyTorch 1.6 only, and will post a link later.",
"> From a first look, I'd offer to drop support for all PyTorch version inferior to < 1.6 as these have been released more than two years ago.\r\n\r\nI second that. \r\n\r\nWhile we are at it, do we want to establish an official shifting window of how far back we want to support pytorch versions for? As in minimum - we support at least 2 years of pytorch? If it's easy to support longer we would but it'd be easy to cut off if need be.\r\n\r\nThe user always has the older `transformers` that they can pin to if they really need a very old pytorch support.",
"Yes, that would work fine with me. If I understand correctly, that's how libraries in the PyData ecosystem (scikit-learn, numpy) manage the support of Python versions: they drop support for versions older than 2 years (https://github.com/scikit-learn/scikit-learn/issues/20965, https://github.com/scikit-learn/scikit-learn/issues/20084, [scipy toolchaib](https://scipy.github.io/devdocs/dev/toolchain.html), https://github.com/scipy/scipy/pull/14655).\r\n\r\nDropping support for PyTorch/Flax/TensorFlow versions that have been released more than two years ago sounds good to me. That is somewhat already the case (see failing tests), but we're just not aware.",
"Hi, I am wondering what it means `a PyTorch/TensorFlow/Flax version is supported`. I guess it doesn't imply all models work under those framework versions, but would like to know if there is more explicit definition (for `transformers`, or more generally, in open source projects).",
"Ideally it should mean that all models work/all tests pass apart from functionality explicitly having versions tests (like CUDA bfloat16 or torch FX where we test against a specific PyTorch version)."
] | 1,658
| 1,662
| null |
COLLABORATOR
| null |
At the initiative of @LysandreJik, we ran the tests with previous PyTorch/TensorFlow versions. The goal is to determine whether we should drop support for (some) earlier PyTorch/TensorFlow versions.
- This is not exactly the same as the scheduled daily CI (`torch-scatter`, `accelerate` not installed, etc.)
- Currently we only have the global summary (i.e. no per-model failure counts)
Here are the results (run around June 20, 2022):
- PyTorch testing has ~27100 tests
- TensorFlow testing has ~15700 tests
| Framework | No. Failures |
| :--------------- | ----------: |
| PyTorch 1.10 | 50 |
| PyTorch 1.9 | 710 |
| PyTorch 1.8 | 1301 |
| PyTorch 1.7 | 1567 |
| PyTorch 1.6 | 2342 |
| PyTorch 1.5 | 3315 |
| PyTorch 1.4 | 3949 |
| TensorFlow 2.8 | 118 |
| TensorFlow 2.7 | 122 |
| TensorFlow 2.6 | 122 |
| TensorFlow 2.5 | 128 |
| TensorFlow 2.4 | 167 |
It looks like the number of failures in TensorFlow testing doesn't increase much.
### My thoughts so far:
- All TF >= 2.4 should still be kept in the list of supported versions
### Questions
- What's your opinion on which versions we should drop support for?
- Would you like to see the number of test failures per model?
- TensorFlow 2.3 needs CUDA 10.1 and requires building a special docker image. Do you think we should make that effort to get results for `TF 2.3`?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18181/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/18180
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18180/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18180/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18180/events
|
https://github.com/huggingface/transformers/issues/18180
| 1,307,867,727
|
I_kwDOCUB6oc5N9HpP
| 18,180
|
failed to use PyTorch jit mode due to: forward() is missing value for argument 'position_ids'.
|
{
"login": "Captainr22",
"id": 44116628,
"node_id": "MDQ6VXNlcjQ0MTE2NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/44116628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Captainr22",
"html_url": "https://github.com/Captainr22",
"followers_url": "https://api.github.com/users/Captainr22/followers",
"following_url": "https://api.github.com/users/Captainr22/following{/other_user}",
"gists_url": "https://api.github.com/users/Captainr22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Captainr22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Captainr22/subscriptions",
"organizations_url": "https://api.github.com/users/Captainr22/orgs",
"repos_url": "https://api.github.com/users/Captainr22/repos",
"events_url": "https://api.github.com/users/Captainr22/events{/privacy}",
"received_events_url": "https://api.github.com/users/Captainr22/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
I want to use PyTorch JIT to improve my inference speed on CPU. My model is BertForTokenClassification, and I found that `position_ids` must be given to the model if I want to use PyTorch JIT.
Could you please give some suggestions for solving this problem? Or is `position_ids` indispensable?
Thank you!
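
For reference, a hedged sketch (an assumption, not the asker's actual script) of one way to trace the model with `torch.jit.trace` while supplying `position_ids` explicitly, since the traced `forward()` cannot fill in its default value; the checkpoint name is illustrative:

```python
import torch
from transformers import BertForTokenClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
# torchscript=True makes the model return tuples, as tracing requires.
model = BertForTokenClassification.from_pretrained("bert-base-cased", torchscript=True)
model.eval()

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
seq_len = inputs["input_ids"].shape[1]
# Build the default positions 0..seq_len-1 by hand, with a batch dimension.
position_ids = torch.arange(seq_len, dtype=torch.long).unsqueeze(0)

# Pass the arguments positionally in the order of forward():
# input_ids, attention_mask, token_type_ids, position_ids.
traced = torch.jit.trace(
    model,
    (inputs["input_ids"], inputs["attention_mask"], inputs["token_type_ids"], position_ids),
)
```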
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18180/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18180/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18179
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18179/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18179/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18179/events
|
https://github.com/huggingface/transformers/issues/18179
| 1,307,819,914
|
I_kwDOCUB6oc5N87-K
| 18,179
|
Cannot save TFTapasModel as SavedModel
|
{
"login": "ahmedlone127",
"id": 66001253,
"node_id": "MDQ6VXNlcjY2MDAxMjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/66001253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedlone127",
"html_url": "https://github.com/ahmedlone127",
"followers_url": "https://api.github.com/users/ahmedlone127/followers",
"following_url": "https://api.github.com/users/ahmedlone127/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedlone127/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedlone127/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedlone127/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedlone127/orgs",
"repos_url": "https://api.github.com/users/ahmedlone127/repos",
"events_url": "https://api.github.com/users/ahmedlone127/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedlone127/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Following merging of #18153 the reproduction snippet runs on main without error. "
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Rocketknight1 @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import TapasTokenizer, TFTapasModel
import pandas as pd
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
model = TFTapasModel.from_pretrained("google/tapas-base")
data = {
"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
"Age": ["56", "45", "59"],
"Number of movies": ["87", "53", "69"],
}
table = pd.DataFrame.from_dict(data)
queries = ["How many movies has George Clooney played in?", "How old is Brad Pitt?"]
inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
model.save_pretrained("test",saved_model=True)
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-11-637c488e6341>](https://localhost:8080/#) in <module>()
----> 1 model.save_pretrained("test",saved_model=True)
2 frames
[/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py](https://localhost:8080/#) in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 806, in serving *
output = self.call(inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 981, in run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/tapas/modeling_tf_tapas.py", line 1008, in call *
outputs = self.tapas(
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler **
raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "tapas" (type TFTapasMainLayer).
in user code:
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 981, in run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/tapas/modeling_tf_tapas.py", line 790, in call *
embedding_output = self.embeddings(
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler **
raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "embeddings" (type TFTapasEmbeddings).
in user code:
File "/usr/local/lib/python3.7/dist-packages/transformers/models/tapas/modeling_tf_tapas.py", line 223, in call *
col_index = IndexMap(token_type_ids[:, :, 1], self.type_vocab_sizes[1], batch_dims=1)
ValueError: Index out of range using input dim 2; input has only 2 dims for '{{node tapas/embeddings/strided_slice_2}} = StridedSlice[Index=DT_INT32, T=DT_INT32, begin_mask=3, ellipsis_mask=0, end_mask=3, new_axis_mask=0, shrink_axis_mask=4](token_type_ids, tapas/embeddings/strided_slice_2/stack, tapas/embeddings/strided_slice_2/stack_1, tapas/embeddings/strided_slice_2/stack_2)' with input shapes: [?,?], [3], [3], [3] and with computed input tensors: input[3] = <1 1 1>.
Call arguments received:
• input_ids=tf.Tensor(shape=(None, None), dtype=int32)
• position_ids=None
• token_type_ids=tf.Tensor(shape=(None, None), dtype=int32)
• inputs_embeds=None
• training=False
Call arguments received:
• self=tf.Tensor(shape=(None, None), dtype=int32)
• input_ids=None
• attention_mask=tf.Tensor(shape=(None, None), dtype=int32)
• token_type_ids=tf.Tensor(shape=(None, None), dtype=int32)
• position_ids=None
• head_mask=None
• inputs_embeds=None
• output_attentions=False
• output_hidden_states=False
• return_dict=True
• training=False
```
### Expected behavior
It is supposed to produce a SavedModel, but instead I get the error shown above. The SavedModel is needed for TensorFlow Serving.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18179/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18178
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18178/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18178/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18178/events
|
https://github.com/huggingface/transformers/issues/18178
| 1,307,783,289
|
I_kwDOCUB6oc5N8zB5
| 18,178
|
ImportError: cannot import name 'BloomTokenizer' from 'transformers'
|
{
"login": "rhvaz",
"id": 38155670,
"node_id": "MDQ6VXNlcjM4MTU1Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/38155670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rhvaz",
"html_url": "https://github.com/rhvaz",
"followers_url": "https://api.github.com/users/rhvaz/followers",
"following_url": "https://api.github.com/users/rhvaz/following{/other_user}",
"gists_url": "https://api.github.com/users/rhvaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rhvaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rhvaz/subscriptions",
"organizations_url": "https://api.github.com/users/rhvaz/orgs",
"repos_url": "https://api.github.com/users/rhvaz/repos",
"events_url": "https://api.github.com/users/rhvaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/rhvaz/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @rhvaz,\r\n\r\nI am sincerely sorry that you have encountered this issue. We do have a small typo in our documentation, which has been resolved in the PR https://github.com/huggingface/transformers/pull/18005 but not yet deployed on our website.\r\n\r\nIn the meantime, here is the snippet that should work:\r\n```python\r\nfrom transformers import BloomTokenizerFast, BloomModel\r\nimport torch\r\n\r\ntokenizer = BloomTokenizerFast.from_pretrained(\"bigscience/Bloom\")\r\nmodel = BloomModel.from_pretrained(\"bigscience/Bloom\")\r\n\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\n\r\nlast_hidden_states = outputs.last_hidden_state\r\n```\r\n ",
"Hi @SaulLu many thanks for looking at this so quickly!\r\n\r\nI tried the snippet you shared and I am now having the following permission issues\r\n```\r\nOSError: bigscience/Bloom is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.\r\n```\r\n\r\nI get the exception when I try to run either of the lines below\r\n```\r\ntokenizer = BloomTokenizerFast.from_pretrained(\"bigscience/Bloom\")\r\nmodel = BloomModel.from_pretrained(\"bigscience/Bloom\")\r\n```",
"You spotted another typo in the name of the checkpoint! Here's a new snippet that should work:\r\n```python\r\nfrom transformers import BloomTokenizerFast, BloomModel\r\nimport torch\r\n\r\ntokenizer = BloomTokenizerFast.from_pretrained(\"bigscience/bloom\")\r\nmodel = BloomModel.from_pretrained(\"bigscience/bloom\")\r\n\r\ninputs = tokenizer(\"Hello, my dog is cute\", return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\n\r\nlast_hidden_states = outputs.last_hidden_state\r\n```\r\n\r\nI also took this opportunity to share the same fix in the documentation in PR #18187 "
] | 1,658
| 1,658
| 1,658
|
NONE
| null |
### System Info
transformers==4.20.1
torch==1.12.0
Python 3.9.13
GPU: yes (running on GCP)
### Who can help?
@SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import BloomTokenizer, BloomModel
```
I was following https://huggingface.co/docs/transformers/model_doc/bloom
Specifically
```
from transformers import BloomTokenizer, BloomModel
import torch
tokenizer = BloomTokenizer.from_pretrained("bigscience/Bloom")
model = BloomModel.from_pretrained("bigscience/Bloom")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### Expected behavior
No import error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18178/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18177
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18177/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18177/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18177/events
|
https://github.com/huggingface/transformers/pull/18177
| 1,307,762,676
|
PR_kwDOCUB6oc47jt8v
| 18,177
|
Fix expected loss values in some (m)T5 tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
COLLABORATOR
| null |
# What does this PR do?
Fix CI failures regarding some T5 and MT5 tests.
PR #18013 and the subsequent fix in #18029 most likely computed the expected loss values without setting
```python
os.environ["NVIDIA_TF32_OVERRIDE"] = "0"
```
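
For context, a hedged equivalent (an assumption, not part of this PR): TF32 can also be turned off from within Python via the torch backend flags before computing reference values.

```python
import torch

# In-process switches equivalent to NVIDIA_TF32_OVERRIDE=0: disable TF32
# so matmul/conv results match full-precision float32 reference values.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False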
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18177/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18177",
"html_url": "https://github.com/huggingface/transformers/pull/18177",
"diff_url": "https://github.com/huggingface/transformers/pull/18177.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18177.patch",
"merged_at": 1658150781000
}
|
https://api.github.com/repos/huggingface/transformers/issues/18176
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18176/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18176/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18176/events
|
https://github.com/huggingface/transformers/issues/18176
| 1,307,758,525
|
I_kwDOCUB6oc5N8s-9
| 18,176
|
Model Loading Imbalance
|
{
"login": "cliangyu",
"id": 45140242,
"node_id": "MDQ6VXNlcjQ1MTQwMjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/45140242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cliangyu",
"html_url": "https://github.com/cliangyu",
"followers_url": "https://api.github.com/users/cliangyu/followers",
"following_url": "https://api.github.com/users/cliangyu/following{/other_user}",
"gists_url": "https://api.github.com/users/cliangyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cliangyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cliangyu/subscriptions",
"organizations_url": "https://api.github.com/users/cliangyu/orgs",
"repos_url": "https://api.github.com/users/cliangyu/repos",
"events_url": "https://api.github.com/users/cliangyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cliangyu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"I think @sgugger is working on that as we speak",
"Yes, we will add support for more options to `device_map`, one of which is `\"balanced\"` after the next release. It's already available in Accelerate if you want to try it on.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,658
| 1,661
| 1,661
|
NONE
| null |
### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: I guess I tried :)
### Who can help?
@patil-suraj @patrickvonplaten @LysandreJik
This is an OPT-related issue.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
with init_empty_weights():
model = OPTModel.from_pretrained("facebook/opt-6.7b", device_map="auto")
```
Model loading is imbalanced across the GPUs:
<img width="902" alt="image" src="https://user-images.githubusercontent.com/45140242/179495680-b1a4dae5-be85-4818-a969-8a58346be57d.png">
### Expected behavior
Model parameters are balanced across the GPUs.
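
For reference, a hedged sketch based on the maintainer comment above (availability depends on the installed `transformers`/`accelerate` versions, so treat this as an assumption): `device_map="balanced"` spreads the weights evenly across the available GPUs.

```python
from transformers import OPTModel

# "balanced" asks the dispatcher to split the weights evenly across GPUs,
# instead of "auto", which fills devices in order.
model = OPTModel.from_pretrained("facebook/opt-6.7b", device_map="balanced")
```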
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18176/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/18175
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/18175/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/18175/comments
|
https://api.github.com/repos/huggingface/transformers/issues/18175/events
|
https://github.com/huggingface/transformers/pull/18175
| 1,307,692,768
|
PR_kwDOCUB6oc47jexu
| 18,175
|
BLOOM minor fixes small test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,658
| 1,658
| 1,658
|
CONTRIBUTOR
| null |
Small modifications:
- Modified the docstring on the tests
- Added the correct revision for the 350m model
- Removed the right-padding/left-padding test
cc @ydshieh @NouamaneTazi @Muennighoff
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/18175/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/18175/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/18175",
"html_url": "https://github.com/huggingface/transformers/pull/18175",
"diff_url": "https://github.com/huggingface/transformers/pull/18175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/18175.patch",
"merged_at": 1658164699000
}
|