| Column | Type | Range / values |
|---|---|---|
| url | string | lengths 62–66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M–2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–29.2k |
| title | string | lengths 1–487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k (nullable) |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0–234k (nullable) |
| reactions | dict | |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
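A minimal sketch of reading rows with this schema via the `datasets` library; the dataset id `user/github-issues` is a placeholder, since this preview does not name the actual repository.

```python
# Hypothetical loading sketch -- the dataset id below is a placeholder,
# as this preview does not name the actual dataset repository.
from datasets import load_dataset

ds = load_dataset("user/github-issues", split="train")
row = ds[0]
# Field names follow the schema above.
print(row["number"], row["state"], row["title"])
```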
https://api.github.com/repos/huggingface/transformers/issues/20885
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20885/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20885/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20885/events
|
https://github.com/huggingface/transformers/pull/20885
| 1,509,538,204
|
PR_kwDOCUB6oc5GIV25
| 20,885
|
update template
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The following substitution pattern was used : \r\n- match : `\\((?:f|F)rom ([^\\(]*)\\)(?:,)? released (?:together )?with the paper (.*) by (.*).`\r\n- sub : `($1 μμ) $3 μ $2 λ
Όλ¬Έκ³Ό ν¨κ» λ°ννμ΅λλ€.`"
] | 1,671
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Adds a better template for the Korean README and replaces the previous text.
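A sketch of the substitution quoted in the comments above, adapted to Python's `re` module (the `$n` backreferences become `\n`); the sample sentence is illustrative, not taken from the PR.

```python
import re

# Pattern and replacement copied from the comment above; the $n references
# are rewritten as \n for Python's re module.
pattern = r"\((?:f|F)rom ([^\(]*)\)(?:,)? released (?:together )?with the paper (.*) by (.*)."
sub = r"(\1 에서) \3 의 \2 논문과 함께 발표했습니다."

# Illustrative input sentence.
text = "(from Google AI) released with the paper BERT by Jacob Devlin et al."
print(re.sub(pattern, sub, text))
```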
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20885/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20885",
"html_url": "https://github.com/huggingface/transformers/pull/20885",
"diff_url": "https://github.com/huggingface/transformers/pull/20885.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20885.patch",
"merged_at": 1672823745000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20884
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20884/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20884/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20884/events
|
https://github.com/huggingface/transformers/issues/20884
| 1,509,429,350
|
I_kwDOCUB6oc5Z-BBm
| 20,884
|
santacoder: saved checkpoints after fine-tuning do not have required .py files
|
{
"login": "arjunguha",
"id": 20065,
"node_id": "MDQ6VXNlcjIwMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/20065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunguha",
"html_url": "https://github.com/arjunguha",
"followers_url": "https://api.github.com/users/arjunguha/followers",
"following_url": "https://api.github.com/users/arjunguha/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunguha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjunguha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunguha/subscriptions",
"organizations_url": "https://api.github.com/users/arjunguha/orgs",
"repos_url": "https://api.github.com/users/arjunguha/repos",
"events_url": "https://api.github.com/users/arjunguha/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjunguha/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Thanks for reporting! I'll have a look into it after the holidays, the first week of January.",
"Thanks for your patience. Could you try the PR linked above?",
"I'm away this week. But, I'll check it out next week. Thanks!"
] | 1,671
| 1,672
| 1,672
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The santacoder model uses `trust_remote_code=True` to load Python files from the model repository. However, when I fine-tune a model and save a checkpoint, these Python files are not placed in the repository. Thus I get an error when trying to load the saved checkpoint. Here is the smallest program that shows the problem:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("bigcode/santacoder", revision="dedup-alt-comments", trust_remote_code=True)
model.save_pretrained("./silly-checkpoint")
model = AutoModelForCausalLM.from_pretrained("./silly-checkpoint", trust_remote_code=True, revision="dedup-alt-comments")
```
This produces the error `Could not locate the configuration_gpt2_mq.py inside ./silly-checkpoint.`
I can work around it by manually downloading the two Python files from the model repository:
https://huggingface.co/bigcode/santacoder/tree/dedup-alt-comments
But, this should probably not be necessary.
### Expected behavior
I think my script should work as-is, and should not require copy-pasting Python code.
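A minimal sketch of the manual workaround described above, assuming the second remote-code file is named `modeling_gpt2_mq.py` (only `configuration_gpt2_mq.py` appears in the error message):

```python
import shutil
from huggingface_hub import hf_hub_download

# Copy the remote-code files into the local checkpoint so that
# from_pretrained() can find them; the second filename is an assumption.
for fname in ["configuration_gpt2_mq.py", "modeling_gpt2_mq.py"]:
    path = hf_hub_download("bigcode/santacoder", fname, revision="dedup-alt-comments")
    shutil.copy(path, "./silly-checkpoint")
```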
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20884/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20883
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20883/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20883/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20883/events
|
https://github.com/huggingface/transformers/pull/20883
| 1,509,310,336
|
PR_kwDOCUB6oc5GHkAl
| 20,883
|
Fixes typo in the help text for --max_length
|
{
"login": "makrai",
"id": 3809407,
"node_id": "MDQ6VXNlcjM4MDk0MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3809407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makrai",
"html_url": "https://github.com/makrai",
"followers_url": "https://api.github.com/users/makrai/followers",
"following_url": "https://api.github.com/users/makrai/following{/other_user}",
"gists_url": "https://api.github.com/users/makrai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makrai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makrai/subscriptions",
"organizations_url": "https://api.github.com/users/makrai/orgs",
"repos_url": "https://api.github.com/users/makrai/repos",
"events_url": "https://api.github.com/users/makrai/events{/privacy}",
"received_events_url": "https://api.github.com/users/makrai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,673
| 1,671
|
CONTRIBUTOR
| null |
This PR fixes a typo in the help text of an example script.
- PyTorch: @sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20883/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20883",
"html_url": "https://github.com/huggingface/transformers/pull/20883",
"diff_url": "https://github.com/huggingface/transformers/pull/20883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20883.patch",
"merged_at": 1671865627000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20882
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20882/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20882/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20882/events
|
https://github.com/huggingface/transformers/issues/20882
| 1,508,811,183
|
I_kwDOCUB6oc5Z7qGv
| 20,882
|
Add OPT-IML Checkpoints
|
{
"login": "chujiezheng",
"id": 37283853,
"node_id": "MDQ6VXNlcjM3MjgzODUz",
"avatar_url": "https://avatars.githubusercontent.com/u/37283853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chujiezheng",
"html_url": "https://github.com/chujiezheng",
"followers_url": "https://api.github.com/users/chujiezheng/followers",
"following_url": "https://api.github.com/users/chujiezheng/following{/other_user}",
"gists_url": "https://api.github.com/users/chujiezheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chujiezheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chujiezheng/subscriptions",
"organizations_url": "https://api.github.com/users/chujiezheng/orgs",
"repos_url": "https://api.github.com/users/chujiezheng/repos",
"events_url": "https://api.github.com/users/chujiezheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/chujiezheng/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"π ",
"I tried to convert and also ran into this issue:\r\n\r\nhttps://github.com/facebookresearch/metaseq/issues/567\r\nhttps://github.com/facebookresearch/metaseq/issues/594\r\n\r\nBut it seems like meta folks are working to upload it to huggingface:\r\n\r\nhttps://github.com/facebookresearch/metaseq/issues/567#issuecomment-1370415582",
"maybe I will tag @patrickvonplaten in case who I believe converted the last OPT checkpoints :-)",
"I believe it was @ArthurZucker ",
"Thanks for bumping on this! ",
"Also working on this, have run into the same issues mentioned above. Let me know if I can be an extra pair of eyes/hands for working on this.",
"Weights are available here thanks to (https://huggingface.co/rpasunuru) : \r\n- https://huggingface.co/facebook/opt-iml-30b\r\n- https://huggingface.co/facebook/opt-iml-1.3b\r\nClosing! "
] | 1,671
| 1,678
| 1,678
|
NONE
| null |
### Model description
OPT-IML models are instruction-finetuned from the OPT checkpoints. Here is the [technical report](https://github.com/facebookresearch/metaseq/blob/main/projects/OPT-IML/optimal_paper_v1.pdf).
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
* **Technical report:** https://github.com/facebookresearch/metaseq/blob/main/projects/OPT-IML/optimal_paper_v1.pdf
* **Model implementation:** same as OPT
* **Model weights:** https://github.com/facebookresearch/metaseq/tree/main/projects/OPT-IML
* **Authors:** Meta AI
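A minimal usage sketch based on the checkpoints linked in the comments above, assuming the standard OPT (causal LM) classes apply since the implementation is stated to be the same as OPT:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint id taken from the comment thread above.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-iml-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-iml-1.3b")

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```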
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20882/reactions",
"total_count": 22,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 13,
"eyes": 9
}
|
https://api.github.com/repos/huggingface/transformers/issues/20882/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20881
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20881/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20881/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20881/events
|
https://github.com/huggingface/transformers/issues/20881
| 1,508,799,578
|
I_kwDOCUB6oc5Z7nRa
| 20,881
|
__init__() missing 1 required positional argument
|
{
"login": "Changgeng-Wei",
"id": 69948679,
"node_id": "MDQ6VXNlcjY5OTQ4Njc5",
"avatar_url": "https://avatars.githubusercontent.com/u/69948679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Changgeng-Wei",
"html_url": "https://github.com/Changgeng-Wei",
"followers_url": "https://api.github.com/users/Changgeng-Wei/followers",
"following_url": "https://api.github.com/users/Changgeng-Wei/following{/other_user}",
"gists_url": "https://api.github.com/users/Changgeng-Wei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Changgeng-Wei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Changgeng-Wei/subscriptions",
"organizations_url": "https://api.github.com/users/Changgeng-Wei/orgs",
"repos_url": "https://api.github.com/users/Changgeng-Wei/repos",
"events_url": "https://api.github.com/users/Changgeng-Wei/events{/privacy}",
"received_events_url": "https://api.github.com/users/Changgeng-Wei/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi there! There is no way we will be able to help you without seeing the code you are running.",
"> The code is below\r\n\r\nfrom transformers.modeling_utils import PretrainedConfig\r\n\r\nclass TClass(PretrainedConfig):\r\ndef init(self, config):\r\nsuper(TClass, self).init()\r\nself.config = config\r\n\r\nif name == 'main':\r\nc = TClass(config='setting')\r\nprint(c)",
"Hi! I'm having the same problem... is there a documentation on how to extend transformers configs? "
] | 1,671
| 1,686
| 1,672
|
NONE
| null |
### System Info
- `transformers` version: 4.9.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:No
### Who can help?
@you
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers.modeling_utils import PretrainedConfig

class TClass(PretrainedConfig):
    def __init__(self, config):
        super(TClass, self).__init__()
        self.config = config

if __name__ == '__main__':
    c = TClass(config='setting')
    print(c)
```
### Expected behavior
Hi!
I'm a beginner. I wanted to define a class based on PretrainedConfig and pass a variable when initializing the object, but I ran into the issue described below. I tried many transformers and Python versions, but it still happens. Could you help me solve it? Thanks a lot!
File "F:/workProject//test/test.py", line 12, in <module>
print(c)
File "E:\ProgramData\Anaconda3\envs\py37_tf250_torch\lib\site-packages\transformers\configuration_utils.py", line 613, in __repr__
return f"{self.__class__.__name__} {self.to_json_string()}"
File "E:\ProgramData\Anaconda3\envs\py37_tf250_torch\lib\site-packages\transformers\configuration_utils.py", line 674, in to_json_string
config_dict = self.to_diff_dict()
File "E:\ProgramData\Anaconda3\envs\py37_tf250_torch\lib\site-packages\transformers\configuration_utils.py", line 629, in to_diff_dict
class_config_dict = self.__class__().to_dict() if not self.is_composition else {}
TypeError: __init__() missing 1 required positional argument: 'config'"
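A sketch of the usual fix for this traceback, assuming the goal is simply to carry an extra attribute: give the argument a default and forward `**kwargs`, so the config class can be instantiated with no arguments (which `to_diff_dict()` does via `self.__class__()`).

```python
from transformers import PretrainedConfig

class TClass(PretrainedConfig):
    # A default value lets self.__class__() succeed inside to_diff_dict().
    def __init__(self, config=None, **kwargs):
        super().__init__(**kwargs)
        self.config = config

c = TClass(config="setting")
print(c)  # prints the JSON representation instead of raising TypeError
```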
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20881/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20880
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20880/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20880/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20880/events
|
https://github.com/huggingface/transformers/pull/20880
| 1,508,791,155
|
PR_kwDOCUB6oc5GF0dc
| 20,880
|
Fix model parallelism for ByT5
|
{
"login": "yunyu",
"id": 8008350,
"node_id": "MDQ6VXNlcjgwMDgzNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8008350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yunyu",
"html_url": "https://github.com/yunyu",
"followers_url": "https://api.github.com/users/yunyu/followers",
"following_url": "https://api.github.com/users/yunyu/following{/other_user}",
"gists_url": "https://api.github.com/users/yunyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yunyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yunyu/subscriptions",
"organizations_url": "https://api.github.com/users/yunyu/orgs",
"repos_url": "https://api.github.com/users/yunyu/repos",
"events_url": "https://api.github.com/users/yunyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yunyu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20880). All of your documentation changes will be reflected on that endpoint.",
"Hey, parallelise is very deprecated, but I believe other models might also benefit from this if it is a fix no?\r\n",
"Wasn't aware, is the recommendation to use accelerate now?\n\nThis fix only affects models with more encoder than decoder blocks. I'm pretty sure this is rare (only seen it done with ByT5, which I am using for character sensitive tasks)",
"Yes, the recommendation is to use Accelerate for this form of parallelism (which Accelerate supports for all T5 models), the old API is on its way to be deprecated and won't be maintained.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,675
| 1,675
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20879
Only assign a device_map explicitly to the model's encoder and decoder in parallelize() if one was explicitly passed in by the caller. The encoder and decoder automatically create a device_map when None is passed in, so the original code was redundant.
ByT5 has a much bigger encoder than decoder, so assuming that the two are the same size (and can use the same device_map) is incorrect and results in an assertion error.
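A toy illustration of the mismatch described above, with made-up block counts: a single device_map sized to the encoder references attention blocks the shallower decoder does not have.

```python
# Toy numbers only: ByT5-style asymmetry, not taken from a real config.
encoder_blocks, decoder_blocks = 12, 4

# One device_map sized to the encoder lists blocks the decoder doesn't have,
# which is what triggers the assertion error described above.
shared_map = {0: list(range(encoder_blocks))}
extra = [b for b in shared_map[0] if b >= decoder_blocks]
print("attention blocks missing from the decoder:", extra)
```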
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@ArthurZucker, @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20880/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20880",
"html_url": "https://github.com/huggingface/transformers/pull/20880",
"diff_url": "https://github.com/huggingface/transformers/pull/20880.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20880.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20879
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20879/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20879/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20879/events
|
https://github.com/huggingface/transformers/issues/20879
| 1,508,788,723
|
I_kwDOCUB6oc5Z7knz
| 20,879
|
Calling parallelize() on T5ForConditionalGeneration for ByT5 results in device_map error
|
{
"login": "yunyu",
"id": 8008350,
"node_id": "MDQ6VXNlcjgwMDgzNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8008350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yunyu",
"html_url": "https://github.com/yunyu",
"followers_url": "https://api.github.com/users/yunyu/followers",
"following_url": "https://api.github.com/users/yunyu/following{/other_user}",
"gists_url": "https://api.github.com/users/yunyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yunyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yunyu/subscriptions",
"organizations_url": "https://api.github.com/users/yunyu/orgs",
"repos_url": "https://api.github.com/users/yunyu/repos",
"events_url": "https://api.github.com/users/yunyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yunyu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Note that the `parallelize` API is going to be deprecated soon. You should load your model like this to use Accelerate instead:\r\n```python\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"google/byt5-xl\", device_map=\"balanced\")\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,677
| 1,677
|
NONE
| null |
### System Info
4.25.1
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
model = T5ForConditionalGeneration.from_pretrained("google/byt5-xl")
model.parallelize()
```
Results in:
```
The device_map contains more attention blocks than this model has. Remove these from the device_map: {...}
```
### Expected behavior
The model should parallelize attention blocks properly. This is needed because ByT5 has a 3x deeper encoder than decoder, so the same device_map can't be used for both.
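A minimal sketch of the Accelerate-based alternative suggested in the comments above (requires `accelerate` to be installed):

```python
from transformers import T5ForConditionalGeneration

# device_map="balanced" lets Accelerate split the asymmetric encoder and
# decoder stacks across available GPUs, replacing deprecated parallelize().
model = T5ForConditionalGeneration.from_pretrained(
    "google/byt5-xl", device_map="balanced"
)
```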
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20879/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20878
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20878/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20878/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20878/events
|
https://github.com/huggingface/transformers/pull/20878
| 1,508,470,829
|
PR_kwDOCUB6oc5GEt68
| 20,878
|
[ `T5`] fix fp16 loading issue
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR mainly fixes https://github.com/huggingface/transformers/actions/runs/3754402958/jobs/6378652143
Since the PR https://github.com/huggingface/accelerate/pull/920 has been merged, the fix proposed in https://github.com/huggingface/transformers/pull/20760 seems to not work anymore using the main branch of `accelerate` for some specific cases.
To reproduce (use the main branch of `accelerate`):
```python
import torch
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-small", torch_dtype=torch.float16)
print(model.decoder.block[0].layer[2].DenseReluDense.wo.weight.dtype)
>>> torch.float16
```
Why?
I believe this is because the aforementioned PR introduced a new `dtype` argument on the function `set_module_tensor_to_device`. If this argument is left as `None` (the default), the target value [is automatically set to the `dtype` of the old tensor](https://github.com/huggingface/accelerate/blob/53b8ed1e8ed5fb8e9d2978744515c31c09e1423e/src/accelerate/utils/modeling.py#L129) - which slightly breaks some assumptions made in https://github.com/huggingface/transformers/pull/20760
I believe upstreaming this change in `modeling_utils` by adding support for this new argument is the right fix. Since some users might not use the latest version of accelerate, I added a small hack to make this change backward compatible, but I am not sure it is the best solution.
Tested this fix on the main branch of `accelerate` and on `accelerate==0.15.0`; all relevant tests pass.
cc @sgugger
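A hypothetical sketch of the kind of backward-compatible call described above (not the actual code from the PR): pass `dtype` to `set_module_tensor_to_device` only when the installed `accelerate` exposes that keyword.

```python
import inspect

import torch
from accelerate.utils import set_module_tensor_to_device

module = torch.nn.Linear(2, 2)
new_weight = torch.zeros(2, 2, dtype=torch.float16)

# Older accelerate releases lack the dtype keyword, so only pass it when
# the installed version supports it; this is an illustrative sketch of the
# "small hack" mentioned above, not the actual implementation.
kwargs = {}
if "dtype" in inspect.signature(set_module_tensor_to_device).parameters:
    kwargs["dtype"] = torch.float16

set_module_tensor_to_device(module, "weight", "cpu", value=new_weight, **kwargs)
print(module.weight.dtype)  # expected: torch.float16
```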
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20878/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20878",
"html_url": "https://github.com/huggingface/transformers/pull/20878",
"diff_url": "https://github.com/huggingface/transformers/pull/20878.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20878.patch",
"merged_at": 1672045264000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20877
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20877/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20877/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20877/events
|
https://github.com/huggingface/transformers/pull/20877
| 1,508,435,332
|
PR_kwDOCUB6oc5GEmF0
| 20,877
|
[`BLIP`] Fix daily CI failing test
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm at the beginning I thought that the `Softmax` was causing the issue, leading to large round errors but the test pass locally with `torch+cu116==1.13.0` but does not pass on the docker image that uses the same version. Will investigate more!",
"On GCP (my own/ CI runners), all torch versions give\r\n\r\n(torch 1.13.x)\r\n```python\r\n[[0.97982633 0.02017363]]\r\n[[0.50528485]]\r\n```\r\nor (torch 1.12.1)\r\n```\r\n[[0.97982633 0.02017365]]\r\n[[0.5052849]]\r\n```\r\n\r\nso\r\n```python\r\n[[0.9798, 0.0202]]\r\n[[0.5053]]\r\n```\r\nwill work. Not sure why you got larger differ though, but it is likely an env issue.",
"Thanks so much @ydshieh π― , the tests seem to pass now on the CI docker image with your suggested values!\r\nSeems that something was wrong with my env indeed"
] | 1,671
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes: https://github.com/huggingface/transformers/actions/runs/3754402958/jobs/6378634199
## Why is this fix relevant?
The reference logits for this test were obtained under pytorch==1.13.1+cu116, while the daily CI uses pytorch==1.13.0+cu116. Setting the tolerance slightly higher (`4e-2`) makes the test pass across versions.
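A small sketch of the loosened comparison, using the reference values quoted in the review comments above (tensor names are illustrative):

```python
import torch

# Logits from one torch version vs. reference values from another; the
# loosened atol=4e-2 makes the check pass across both.
logits = torch.tensor([[0.97982633, 0.02017363]])
expected = torch.tensor([[0.9798, 0.0202]])
assert torch.allclose(logits, expected, atol=4e-2)
```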
cc @LysandreJik @sgugger @ydshieh
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20877/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20877",
"html_url": "https://github.com/huggingface/transformers/pull/20877",
"diff_url": "https://github.com/huggingface/transformers/pull/20877.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20877.patch",
"merged_at": 1672921472000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20876
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20876/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20876/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20876/events
|
https://github.com/huggingface/transformers/pull/20876
| 1,508,084,911
|
PR_kwDOCUB6oc5GDXOu
| 20,876
|
add "local_files_first" parameter
|
{
"login": "James4Ever0",
"id": 103997068,
"node_id": "U_kgDOBjLejA",
"avatar_url": "https://avatars.githubusercontent.com/u/103997068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/James4Ever0",
"html_url": "https://github.com/James4Ever0",
"followers_url": "https://api.github.com/users/James4Ever0/followers",
"following_url": "https://api.github.com/users/James4Ever0/following{/other_user}",
"gists_url": "https://api.github.com/users/James4Ever0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/James4Ever0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/James4Ever0/subscriptions",
"organizations_url": "https://api.github.com/users/James4Ever0/orgs",
"repos_url": "https://api.github.com/users/James4Ever0/repos",
"events_url": "https://api.github.com/users/James4Ever0/events{/privacy}",
"received_events_url": "https://api.github.com/users/James4Ever0/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20876). All of your documentation changes will be reflected on that endpoint.",
"how to pass the code quality check?",
"As mentioned in the issue, this is not a fix we are interested in adding as it would break other functionality.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,675
| 1,675
|
NONE
| null |
add "local_files_first" parameter to AutoConfig.from_pretrained
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #20875
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20876/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20876",
"html_url": "https://github.com/huggingface/transformers/pull/20876",
"diff_url": "https://github.com/huggingface/transformers/pull/20876.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20876.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20875
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20875/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20875/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20875/events
|
https://github.com/huggingface/transformers/issues/20875
| 1,508,082,386
|
I_kwDOCUB6oc5Z44LS
| 20,875
|
make internet connection only if local cache is missing
|
{
"login": "James4Ever0",
"id": 103997068,
"node_id": "U_kgDOBjLejA",
"avatar_url": "https://avatars.githubusercontent.com/u/103997068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/James4Ever0",
"html_url": "https://github.com/James4Ever0",
"followers_url": "https://api.github.com/users/James4Ever0/followers",
"following_url": "https://api.github.com/users/James4Ever0/following{/other_user}",
"gists_url": "https://api.github.com/users/James4Ever0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/James4Ever0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/James4Ever0/subscriptions",
"organizations_url": "https://api.github.com/users/James4Ever0/orgs",
"repos_url": "https://api.github.com/users/James4Ever0/repos",
"events_url": "https://api.github.com/users/James4Ever0/events{/privacy}",
"received_events_url": "https://api.github.com/users/James4Ever0/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"hi james, can you assign this issue to me?",
"Thanks for opening this issue, but we're not interested in implementing this feature as this would break the auto-update mechanism (if someone updates the model, it would no longer be downloaded).\r\n\r\nIf the connection fails for any reason, local files are used instead.",
"okay, no problem, i'll look for another issue, which i can fix.\r\nthanx for replying.",
"> this would break the auto-update mechanism\r\n\r\nIn the code the default value of this parameter is set to \"False\", so it won't be turned on unless you set it to \"True\".\r\n\r\nAuto-updating model is not always needed, though by default it will check for updates everytime.",
"It's based on the \"local_files_only\" #2930, which will skip updates after all.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I've made some updates on my fork, though might be incomplete, shall cover most cases on model loading.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,677
| 1,677
|
NONE
| null |
### Feature request
Check whether the local cache already has the model, and download it only if it is missing.
### Motivation
My connection to GitHub and Hugging Face is unstable. I don't want to go over this unstable connection when the model is already cached, as that breaks things.
### Your contribution
I already mentioned it at #2867. I also made some changes at #20876.
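A minimal sketch of the requested behavior using only existing arguments: try the local cache first with `local_files_only=True`, and hit the network only on a cache miss. `"gpt2"` is an illustrative model id.

```python
from transformers import AutoConfig

# Try the cache first; fall back to downloading only when the model is
# not cached locally.
try:
    config = AutoConfig.from_pretrained("gpt2", local_files_only=True)
except OSError:  # raised when the files are not in the local cache
    config = AutoConfig.from_pretrained("gpt2")
```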
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20875/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20875/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20874
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20874/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20874/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20874/events
|
https://github.com/huggingface/transformers/pull/20874
| 1,508,051,930
|
PR_kwDOCUB6oc5GDP7-
| 20,874
|
Adding doc page for the object detection task
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"If I missed someone who has to be invited as a reviewer, please feel free to add them. ",
"@MKhalusova thanks for doing this! I will take a look tomorrow my time. \r\n\r\nI think you can follow the instructions noted [here](https://github.com/huggingface/transformers/pull/16255#discussion_r830432539) to resolve the quality bug in the CI. Let me know if anything's unclear. ",
"Did a rebase in an attempt to fix the CI issue. Accidentally added a whole bunch of unrelated commits to the PR. Figuring out how to remove them. ",
"> Did a rebase in an attempt to fix the CI issue. Accidentally added a whole bunch of unrelated commits to the PR. Figuring out how to remove them.\r\n\r\nYou might want to revert to the previous commit. [This thread](https://stackoverflow.com/questions/4114095/how-do-i-revert-a-git-repository-to-a-previous-commit) might be helpful in that regard. And then from there:\r\n\r\n* Create a separate Python virtual environment. \r\n* Make sure you're in the virtual environment you just created and then from the `transformers` directory root run `pip install -e .[quality]`.\r\n* Now, once the dependencies have been installed to the new virtual environment, run `make style`. \r\n\r\nThis should likely fix it. Since the code quality errors were previously coming from a doc page (c.f. https://app.circleci.com/pipelines/github/huggingface/transformers/54549/workflows/18b53122-1cc3-4c87-ad12-486853427500/jobs/657389), I suspect this to be stemming from the task page we're adding in this PR. \r\n\r\nLet me know if anything is unclear. ",
"Closing this due to messed up rebase. The new PR is now here https://github.com/huggingface/transformers/pull/20925"
] | 1,671
| 1,673
| 1,672
|
CONTRIBUTOR
| null |
This is a PR for the https://github.com/huggingface/transformers/issues/20805 issue.
The guide has content and working code examples for:
- [x] Introduction
- [x] Loading CPPE-5 dataset from Hub
- [x] Preprocessing both images and annotations. Images are augmented, annotations are reformatted to be in the format DETR expects
- [x] Training with `Trainer`
- [x] Evaluation
- [x] Inference
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20874/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20874",
"html_url": "https://github.com/huggingface/transformers/pull/20874",
"diff_url": "https://github.com/huggingface/transformers/pull/20874.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20874.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20873
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20873/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20873/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20873/events
|
https://github.com/huggingface/transformers/issues/20873
| 1,508,009,252
|
I_kwDOCUB6oc5Z4mUk
| 20,873
|
`model_kwargs` not used in `model.generate()`
|
{
"login": "charlottecaucheteux",
"id": 24608610,
"node_id": "MDQ6VXNlcjI0NjA4NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/24608610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/charlottecaucheteux",
"html_url": "https://github.com/charlottecaucheteux",
"followers_url": "https://api.github.com/users/charlottecaucheteux/followers",
"following_url": "https://api.github.com/users/charlottecaucheteux/following{/other_user}",
"gists_url": "https://api.github.com/users/charlottecaucheteux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/charlottecaucheteux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charlottecaucheteux/subscriptions",
"organizations_url": "https://api.github.com/users/charlottecaucheteux/orgs",
"repos_url": "https://api.github.com/users/charlottecaucheteux/repos",
"events_url": "https://api.github.com/users/charlottecaucheteux/events{/privacy}",
"received_events_url": "https://api.github.com/users/charlottecaucheteux/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This snippet should work (tested on the `main` branch):\r\n```\r\nfrom transformers import GPT2LMHeadModel\r\n\r\nclass ToyGPT(GPT2LMHeadModel):\r\n def forward(self, *args, added_param=None, **kwargs):\r\n print(\"added_param\", added_param)\r\n return super().forward(*args, **kwargs)\r\n def prepare_inputs_for_generation(self, *args, added_param=None, **kwargs):\r\n output = super().prepare_inputs_for_generation(*args, **kwargs)\r\n output.update({\"added_param\": added_param})\r\n return output\r\n \r\ntoy_model = ToyGPT.from_pretrained(\"gpt2\")\r\ntoy_model.generate(added_param=1, max_length=5)\r\n```\r\nyou'll need to update the method `prepare_inputs_for_generation` to consider also your new args",
"Amazing. It works. Thank you very much !"
] | 1,671
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-124-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Issue**: the extra `model_kwargs` are not used when calling `model.generate(**model_kwargs)` (tried with `transformers` versions 4.25.1 and 4.23.1).
**Code to replicate**:
``` python
from transformers import GPT2LMHeadModel
class ToyGPT(GPT2LMHeadModel):
def forward(self, *args, added_param=None, **kwargs):
print("added_param", added_param)
return super().forward(*args, **kwargs)
toy_model = ToyGPT.from_pretrained("gpt2")
toy_model.generate(added_param=1, max_length=5)
```
**Current output**:
`"added_param, None"`
**Current behaviour**:
The extra `added_param` is not passed as input to the `forward()` of the model when generating new inputs, thus, `None` is printed during the forward pass.
### Expected behavior
**Expected output**
"added_param, 1"
**Expected behaviour**
The generate function should print the updated version of the model_kwargs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20873/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20872
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20872/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20872/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20872/events
|
https://github.com/huggingface/transformers/pull/20872
| 1,507,905,286
|
PR_kwDOCUB6oc5GCvYm
| 20,872
|
Add resources
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds a lot of resources for all models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20872/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20872",
"html_url": "https://github.com/huggingface/transformers/pull/20872",
"diff_url": "https://github.com/huggingface/transformers/pull/20872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20872.patch",
"merged_at": 1673973754000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20871
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20871/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20871/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20871/events
|
https://github.com/huggingface/transformers/issues/20871
| 1,507,846,644
|
I_kwDOCUB6oc5Z3-n0
| 20,871
|
Error loading text generation pipeline: Exception: Python patch version not an integer
|
{
"login": "morrisalp",
"id": 8263996,
"node_id": "MDQ6VXNlcjgyNjM5OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8263996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morrisalp",
"html_url": "https://github.com/morrisalp",
"followers_url": "https://api.github.com/users/morrisalp/followers",
"following_url": "https://api.github.com/users/morrisalp/following{/other_user}",
"gists_url": "https://api.github.com/users/morrisalp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morrisalp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morrisalp/subscriptions",
"organizations_url": "https://api.github.com/users/morrisalp/orgs",
"repos_url": "https://api.github.com/users/morrisalp/repos",
"events_url": "https://api.github.com/users/morrisalp/events{/privacy}",
"received_events_url": "https://api.github.com/users/morrisalp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is very odd.\r\n\r\nCould you share maybe a bit more about your environment so we could reproduce ?\r\nIt seems like the way Python itself is installed is odd (I'm purely inferring from the error message), maybe ?\r\n\r\nIs it possible to provide a way to reproduce maybe ? Like a docker image or something ?\r\n\r\nIt does seem to work on colab so it's hard to know what is wrong with the enviroment. It also seems like there's a mix of `conda` and `pip` install which might be at play (both link to different things, so maybe the linker is confused somehow ?)\r\nI tried googling your error message but nothing came up..\r\n",
"I met the same problem and fixed it by degrading the transformers version like 4.22.0 or others.",
"Experienced the same problem. I also downgraded to make it work.\r\n\r\ni dont know what the _commit_hash variable is used for, but removing the line in transformers/pipelines/__init__.py also seems to work.\r\n\r\nthis line `hub_kwargs[\"_commit_hash\"] = model.config._commit_hash`\r\n\r\nA fix for this would be very appreciated",
"I think it was related to this [issue](https://github.com/huggingface/safetensors/issues/142). All PyTorch container images of [NVIDIA NGC](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-12.html#rel-22-12) have alpha version tags for PyTorch. cc @Narsil ",
"Thanks @Codle .\r\n\r\nIt seems to be indeed the issue. Releasing a new version soon so everyone has access.",
"Should be fixed with new version (0.2.8), could you confirm ?",
"Hi @Narsil, sorry for my late response. After updating safetensors to 0.2.8, it works fine for me.",
"Closing this. Thank you for sharing !",
"Updating safetensors solved it for me too. Thanks!",
"despite downgrading my safetensors, I get the following error \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/suryahari/Vornoi/QA.py\", line 5, in <module>\r\n model = AutoModelForQuestionAnswering.from_pretrained(\"deepset/roberta-base-squad2\")\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py\", line 493, in from_pretrained\r\n return model_class.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py\", line 2629, in from_pretrained\r\n state_dict = load_state_dict(resolved_archive_file)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/suryahari/miniconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py\", line 447, in load_state_dict\r\n with safe_open(checkpoint_file, framework=\"pt\") as f:\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nOSError: No such device (os error 19)\r\n```\r\n\r\n"
] | 1,671
| 1,690
| 1,675
|
NONE
| null |
### System Info
Platform: Ubuntu 20.04.5, Jupyter Lab 3.5.2, dockerized
Python version: 3.8.13
`pip freeze` output:
absl-py==1.2.0
accelerate==0.15.0
aiohttp==3.8.3
aiosignal==1.3.1
alabaster==0.7.12
anyio==3.6.1
apex==0.1
appdirs==1.4.4
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1660605382950/work
async-timeout==4.0.2
attrs==22.1.0
audioread==3.0.0
Babel==2.10.3
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1618230623929/work
beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1649463573192/work
bitsandbytes==0.35.4
bleach==5.0.1
blis @ file:///home/conda/feedstock_root/build_artifacts/cython-blis_1656314523915/work
brotlipy @ file:///home/conda/feedstock_root/build_artifacts/brotlipy_1648854175163/work
cachetools==5.2.0
catalogue @ file:///home/conda/feedstock_root/build_artifacts/catalogue_1661366525041/work
certifi==2022.9.24
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1656782821535/work
chardet @ file:///home/conda/feedstock_root/build_artifacts/chardet_1656142044710/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1655906222726/work
click @ file:///home/conda/feedstock_root/build_artifacts/click_1651215152883/work
cloudpickle==2.2.0
codecov==2.1.12
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1655412516417/work
conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1663583601093/work
contourpy==1.0.5
coverage==6.5.0
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography_1665535545125/work
cuda-python @ file:///rapids/cuda_python-11.7.0%2B0.g95a2041.dirty-cp38-cp38-linux_x86_64.whl
cudf @ file:///rapids/cudf-22.8.0a0%2B304.g6ca81bbc78.dirty-cp38-cp38-linux_x86_64.whl
cugraph @ file:///rapids/cugraph-22.8.0a0%2B132.g2daa31b6.dirty-cp38-cp38-linux_x86_64.whl
cuml @ file:///rapids/cuml-22.8.0a0%2B52.g73b8d00d0.dirty-cp38-cp38-linux_x86_64.whl
cupy-cuda118 @ file:///rapids/cupy_cuda118-11.0.0-cp38-cp38-linux_x86_64.whl
cycler==0.11.0
cymem @ file:///home/conda/feedstock_root/build_artifacts/cymem_1636053152744/work
Cython==0.29.32
dask @ file:///rapids/dask-2022.7.1-py3-none-any.whl
dask-cuda @ file:///rapids/dask_cuda-22.8.0a0%2B36.g9860cad-py3-none-any.whl
dask-cudf @ file:///rapids/dask_cudf-22.8.0a0%2B304.g6ca81bbc78.dirty-py3-none-any.whl
dataclasses @ file:///home/conda/feedstock_root/build_artifacts/dataclasses_1628958434797/work
debugpy==1.6.3
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
defusedxml==0.7.1
diffusers==0.11.0
distributed @ file:///rapids/distributed-2022.7.1-py3-none-any.whl
docutils==0.17.1
entrypoints==0.3
et-xmlfile==1.1.0
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1665301981797/work
expecttest==0.1.3
fastjsonschema==2.16.2
fastrlock==0.8
filelock @ file:///home/conda/feedstock_root/build_artifacts/filelock_1660129891014/work
flake8==3.7.9
Flask==2.2.2
fonttools==4.37.4
frozenlist==1.3.3
fsspec==2022.8.2
functorch==0.3.0a0
future==0.18.2
glob2==0.7
google-auth==2.12.0
google-auth-oauthlib==0.4.6
graphsurgeon @ file:///workspace/TensorRT-8.5.0.12/graphsurgeon/graphsurgeon-0.4.6-py2.py3-none-any.whl
grpcio==1.49.1
HeapDict==1.0.1
huggingface-hub==0.11.0
hypothesis==4.50.8
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1642433548627/work
imagesize==1.4.1
importlib-metadata==5.0.0
importlib-resources==5.10.0
iniconfig==1.1.1
iopath==0.1.10
ipykernel==6.16.0
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1662481517711/work
ipython-genutils==0.2.0
ipywidgets==8.0.2
itsdangerous==2.1.2
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1659959867326/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1654302431367/work
joblib==1.2.0
json5==0.9.10
jsonschema==4.16.0
jupyter-core==4.11.1
jupyter-server==1.21.0
jupyter-tensorboard @ git+https://github.com/cliffwoolley/jupyter_tensorboard.git@ffa7e26138b82549453306e06b535a9ac36db17a
jupyter_client==7.4.2
jupyterlab==2.3.2
jupyterlab-pygments==0.2.2
jupyterlab-server==1.2.0
jupyterlab-widgets==3.0.3
jupytext==1.14.1
kiwisolver==1.4.4
langcodes @ file:///home/conda/feedstock_root/build_artifacts/langcodes_1636741340529/work
libarchive-c @ file:///home/conda/feedstock_root/build_artifacts/python-libarchive-c_1649436017468/work
librosa==0.9.2
lightning-utilities==0.4.2
llvmlite==0.39.1
lmdb==1.3.0
locket==1.0.0
Markdown==3.4.1
markdown-it-py==2.1.0
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1648737563195/work
matplotlib==3.6.2
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1660814786464/work
mccabe==0.6.1
mdit-py-plugins==0.3.1
mdurl==0.1.2
mistune==2.0.4
mock @ file:///home/conda/feedstock_root/build_artifacts/mock_1648992799371/work
msgpack==1.0.4
multidict==6.0.3
murmurhash @ file:///home/conda/feedstock_root/build_artifacts/murmurhash_1636019583024/work
nbclassic==0.4.5
nbclient==0.7.0
nbconvert==7.2.1
nbformat==5.7.0
nest-asyncio==1.5.6
networkx==2.6.3
nltk==3.7
notebook==6.4.10
notebook-shim==0.1.0
numba==0.56.2
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1643958805350/work
nvidia-dali-cuda110==1.18.0
nvidia-pyindex==1.0.9
nvtx==0.2.5
oauthlib==3.2.1
onnx @ file:///opt/pytorch/pytorch/third_party/onnx
openpyxl==3.0.10
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1637239678211/work
pandas==1.4.4
pandocfilters==1.5.0
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
partd==1.3.0
pathy @ file:///home/conda/feedstock_root/build_artifacts/pathy_1656568808184/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1602535608087/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow @ file:///tmp/pillow-simd
pkginfo @ file:///home/conda/feedstock_root/build_artifacts/pkginfo_1654782790443/work
pkgutil_resolve_name==1.3.10
pluggy==1.0.0
polygraphy==0.42.1
pooch==1.6.0
portalocker==2.5.1
preshed @ file:///home/conda/feedstock_root/build_artifacts/preshed_1636077712344/work
prettytable==3.4.1
prometheus-client==0.15.0
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1662384672173/work
protobuf==3.20.1
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1662356143277/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
py==1.11.0
pyarrow @ file:///rapids/pyarrow-8.0.0-cp38-cp38-linux_x86_64.whl
pyasn1==0.4.8
pyasn1-modules==0.2.8
pybind11==2.10.0
pycocotools @ git+https://github.com/nvidia/cocoapi.git@142b17a358fdb5a31f9d5153d7a9f3f1cd385178#subdirectory=PythonAPI
pycodestyle==2.5.0
pycosat @ file:///home/conda/feedstock_root/build_artifacts/pycosat_1649384811940/work
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
pydantic @ file:///home/conda/feedstock_root/build_artifacts/pydantic_1636021149719/work
pydot==1.4.2
pyflakes==2.1.1
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1660666458521/work
pylibcugraph @ file:///rapids/pylibcugraph-22.8.0a0%2B132.g2daa31b6.dirty-cp38-cp38-linux_x86_64.whl
pynvml==11.4.1
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1643496850550/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1652235407899/work
pyrsistent==0.18.1
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work
pytest==6.2.5
pytest-cov==4.0.0
pytest-pythonpath==0.7.4
python-dateutil==2.8.2
python-hostlist==1.22
python-nvd3==0.15.0
python-slugify==6.1.2
pytorch-lightning==1.8.5.post0
pytorch-quantization==2.1.2
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1664798238822/work
PyYAML @ file:///home/conda/feedstock_root/build_artifacts/pyyaml_1648757091578/work
pyzmq==24.0.1
raft @ file:///rapids/raft-22.8.0a0%2B70.g9070c30.dirty-cp38-cp38-linux_x86_64.whl
regex==2022.9.13
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1656534056640/work
requests-oauthlib==1.3.1
resampy==0.4.2
revtok @ git+git://github.com/jekbradbury/revtok.git@f1998b72a941d1e5f9578a66dc1c20b01913caab
rmm @ file:///rapids/rmm-22.8.0a0%2B62.gf6bf047.dirty-cp38-cp38-linux_x86_64.whl
rsa==4.9
ruamel-yaml-conda @ file:///home/conda/feedstock_root/build_artifacts/ruamel_yaml_1653464386701/work
sacremoses==0.0.53
safetensors==0.2.6
scikit-learn @ file:///rapids/scikit_learn-0.24.2-cp38-cp38-manylinux2010_x86_64.whl
scipy @ file:///home/conda/feedstock_root/build_artifacts/scipy_1619561901336/work
seaborn==0.12.1
Send2Trash==1.8.0
shellingham @ file:///home/conda/feedstock_root/build_artifacts/shellingham_1659638615822/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
smart-open @ file:///home/conda/feedstock_root/build_artifacts/smart_open_1630238320325/work
sniffio==1.3.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
soundfile==0.11.0
soupsieve @ file:///home/conda/feedstock_root/build_artifacts/soupsieve_1658207591808/work
spacy @ file:///home/conda/feedstock_root/build_artifacts/spacy_1644657943105/work
spacy-legacy @ file:///home/conda/feedstock_root/build_artifacts/spacy-legacy_1660748275723/work
spacy-loggers @ file:///home/conda/feedstock_root/build_artifacts/spacy-loggers_1661365735520/work
Sphinx==5.2.3
sphinx-glpi-theme==0.3
sphinx-rtd-theme==1.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
srsly @ file:///home/conda/feedstock_root/build_artifacts/srsly_1638879568141/work
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1664126450622/work
tabulate==0.9.0
tblib==1.7.0
tensorboard==2.10.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorboardX==2.5.1
tensorrt @ file:///workspace/TensorRT-8.5.0.12/python/tensorrt-8.5.0.12-cp38-none-linux_x86_64.whl
terminado==0.16.0
text-unidecode==1.3
thinc @ file:///home/conda/feedstock_root/build_artifacts/thinc_1638980259098/work
threadpoolctl==3.1.0
tinycss2==1.1.1
tokenizers==0.13.2
toml @ file:///home/conda/feedstock_root/build_artifacts/toml_1604308577558/work
tomli==2.0.1
toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work
torch==1.13.0a0+d0d6b1f
torch-tensorrt @ file:///opt/pytorch/torch_tensorrt/py/dist/torch_tensorrt-1.3.0a0-cp38-cp38-linux_x86_64.whl
torchinfo==1.7.1
torchmetrics==0.11.0
torchtext==0.11.0a0
torchvision @ file:///opt/pytorch/vision
tornado==6.2
tqdm==4.64.1
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1663005918942/work
transformer-engine @ file:///tmp/te_wheel/transformer_engine-0.1.0-cp38-cp38-linux_x86_64.whl
transformers==4.25.1
treelite @ file:///rapids/treelite-2.4.0-py3-none-manylinux2014_x86_64.whl
treelite-runtime @ file:///rapids/treelite_runtime-2.4.0-py3-none-manylinux2014_x86_64.whl
typer @ file:///home/conda/feedstock_root/build_artifacts/typer_1657029164904/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1665144421445/work
ucx-py @ file:///rapids/ucx_py-0.27.0a0%2B29.ge9e81f8-cp38-cp38-linux_x86_64.whl
uff @ file:///workspace/TensorRT-8.5.0.12/uff/uff-0.6.9-py2.py3-none-any.whl
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1658789158161/work
wasabi @ file:///home/conda/feedstock_root/build_artifacts/wasabi_1658931821849/work
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1600965781394/work
webencodings==0.5.1
websocket-client==1.4.1
Werkzeug==2.2.2
widgetsnbextension==4.0.3
xgboost @ file:///rapids/xgboost-1.6.1-cp38-cp38-linux_x86_64.whl
yarl==1.8.2
zict==2.2.0
zipp==3.9.0
### Who can help?
@ArthurZucker @younesbelkada @gante @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
generator = pipeline('text-generation', model='gpt2')
```
Output:
```
Downloading: 0%| | 0.00/665 [00:00<?, ?B/s]
Downloading: 0%| | 0.00/548M [00:00<?, ?B/s]
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
Cell In [2], line 1
----> 1 generator = pipeline('text-generation', model='gpt2')
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/pipelines/__init__.py:724, in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
720 # Infer the framework from the model
721 # Forced if framework already defined, inferred if it's None
722 # Will load the correct model if possible
723 model_classes = {"tf": targeted_task["tf"], "pt": targeted_task["pt"]}
--> 724 framework, model = infer_framework_load_model(
725 model,
726 model_classes=model_classes,
727 config=config,
728 framework=framework,
729 task=task,
730 **hub_kwargs,
731 **model_kwargs,
732 )
734 model_config = model.config
735 hub_kwargs["_commit_hash"] = model.config._commit_hash
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/pipelines/base.py:257, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
251 logger.warning(
252 "Model might be a PyTorch model (ending with `.bin`) but PyTorch is not available. "
253 "Trying to load the model with Tensorflow."
254 )
256 try:
--> 257 model = model_class.from_pretrained(model, **kwargs)
258 if hasattr(model, "eval"):
259 model = model.eval()
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py:463, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
461 elif type(config) in cls._model_mapping.keys():
462 model_class = _get_model_class(config, cls._model_mapping)
--> 463 return model_class.from_pretrained(
464 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
465 )
466 raise ValueError(
467 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
468 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
469 )
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/modeling_utils.py:2230, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2227 if from_pt:
2228 if not is_sharded and state_dict is None:
2229 # Time to load the checkpoint
-> 2230 state_dict = load_state_dict(resolved_archive_file)
2232 # set dtype to instantiate the model under:
2233 # 1. If torch_dtype is not None, we use that dtype
2234 # 2. If torch_dtype is "auto", we auto-detect dtype from the loaded state_dict, by checking its first
2235 # weights entry that is of a floating type - we assume all floating dtype weights are of the same dtype
2236 # we also may have config.torch_dtype available, but we won't rely on it till v5
2237 dtype_orig = None
File /storage/morrisalper/notebooks/envs/notebook_env/lib/python3.8/site-packages/transformers/modeling_utils.py:386, in load_state_dict(checkpoint_file)
381 """
382 Reads a PyTorch checkpoint file, returning properly formatted errors if they arise.
383 """
384 if checkpoint_file.endswith(".safetensors") and is_safetensors_available():
385 # Check format of the archive
--> 386 with safe_open(checkpoint_file, framework="pt") as f:
387 metadata = f.metadata()
388 if metadata.get("format") not in ["pt", "tf", "flax"]:
Exception: Python patch version not an integer
```
### Expected behavior
Should not output an exception. E.g. this code runs as-is (after `pip install transformers`) in Google Colab.
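For what it's worth, a minimal sketch of the suspected failure mode (an assumption based on the comments above: safetensors appears to choke on the alpha-tagged PyTorch builds shipped in NGC images):
```python
# Illustrative only: a naive "major.minor.patch" split on an alpha-tagged version.
version = "1.13.0a0+d0d6b1f"  # torch.__version__ in this environment (see pip freeze)
major, minor, patch = version.split(".")[:3]
int(major), int(minor)  # fine
int(patch)  # ValueError -> surfaces as "Python patch version not an integer"
```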
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20871/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20870
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20870/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20870/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20870/events
|
https://github.com/huggingface/transformers/pull/20870
| 1,507,595,506
|
PR_kwDOCUB6oc5GBpu5
| 20,870
|
Add japanese translation of template
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You might be able to modify the README with simple regex/pattern finding ",
"_The documentation is not available anymore as the PR was closed or merged._",
"yes, I'll do it now",
"Thanks for giving me the tip about https://regex101.com/ which made the conversion process much faster! \r\nShould be good now"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds the Japanese template README-jp.md, following the same convention as the Chinese one.
cc @sgugger @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20870/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20870",
"html_url": "https://github.com/huggingface/transformers/pull/20870",
"diff_url": "https://github.com/huggingface/transformers/pull/20870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20870.patch",
"merged_at": 1671802783000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20869
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20869/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20869/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20869/events
|
https://github.com/huggingface/transformers/pull/20869
| 1,507,488,405
|
PR_kwDOCUB6oc5GBSHy
| 20,869
|
having new model entries in Hindi for Hindi README
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
1. Adds new model entries in Hindi to the Hindi README.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20869/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20869",
"html_url": "https://github.com/huggingface/transformers/pull/20869",
"diff_url": "https://github.com/huggingface/transformers/pull/20869.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20869.patch",
"merged_at": 1671777048000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20868
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20868/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20868/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20868/events
|
https://github.com/huggingface/transformers/pull/20868
| 1,507,283,514
|
PR_kwDOCUB6oc5GAl65
| 20,868
|
Add Onnx Config for PoolFormer
|
{
"login": "BakingBrains",
"id": 51019420,
"node_id": "MDQ6VXNlcjUxMDE5NDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/51019420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakingBrains",
"html_url": "https://github.com/BakingBrains",
"followers_url": "https://api.github.com/users/BakingBrains/followers",
"following_url": "https://api.github.com/users/BakingBrains/following{/other_user}",
"gists_url": "https://api.github.com/users/BakingBrains/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakingBrains/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakingBrains/subscriptions",
"organizations_url": "https://api.github.com/users/BakingBrains/orgs",
"repos_url": "https://api.github.com/users/BakingBrains/repos",
"events_url": "https://api.github.com/users/BakingBrains/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakingBrains/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ChainYo @michaelbenayoun I have mistakenly closed that previous pull request. Created this with resolved conflicts. ",
"Yeah I will do that",
"Thank you"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #16308 (https://github.com/huggingface/transformers/issues/16308).
Adds changes to make PoolFormer models available for ONNX conversion.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ChainYo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20868/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20868",
"html_url": "https://github.com/huggingface/transformers/pull/20868",
"diff_url": "https://github.com/huggingface/transformers/pull/20868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20868.patch",
"merged_at": 1671777058000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20867
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20867/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20867/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20867/events
|
https://github.com/huggingface/transformers/issues/20867
| 1,507,128,305
|
I_kwDOCUB6oc5Z1PPx
| 20,867
|
Default rescale for ImageProcessing
|
{
"login": "yazdanbakhsh",
"id": 7105134,
"node_id": "MDQ6VXNlcjcxMDUxMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7105134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yazdanbakhsh",
"html_url": "https://github.com/yazdanbakhsh",
"followers_url": "https://api.github.com/users/yazdanbakhsh/followers",
"following_url": "https://api.github.com/users/yazdanbakhsh/following{/other_user}",
"gists_url": "https://api.github.com/users/yazdanbakhsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yazdanbakhsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yazdanbakhsh/subscriptions",
"organizations_url": "https://api.github.com/users/yazdanbakhsh/orgs",
"repos_url": "https://api.github.com/users/yazdanbakhsh/repos",
"events_url": "https://api.github.com/users/yazdanbakhsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/yazdanbakhsh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@yazdanbakhsh Thanks for creating the issue and putting in so much detail. \r\n\r\nIf I've understood correctly, your question is about differences in the configuration seen on the hub e.g. [here](https://huggingface.co/google/vit-base-patch16-224/blob/main/preprocessor_config.json) and the object representation when it's loaded and whether this affects training your model. I'll answer for this, but let me know if there's anything I've missed. \r\n\r\nTLDR; This will not affect training your model unless your input images are numpy arrays and `do_resize=False`. \r\n\r\nThe feature extractors have recently been deprecated in place of image processors. The feature extractors now act as an alias for the image processor, and using `AutoFeatureExtractor` will load an image processor under the hood. \r\n\r\nAt the moment, the previous feature extractor configurations are loaded and converted to the equivalent image processor configuration. For example, you'll notice that [`size` in `preprocessor_config.json`](https://huggingface.co/google/vit-base-patch16-224/blob/main/preprocessor_config.json) is an int `224`, whereas it's a dictionary in the image processor. In time, these configurations will be updated on the hub. \r\n\r\nWith respect to `do_rescale`, this flag has been added in order to separate the concerns of certain processing logic. [Rescaling still happened in the old feature extractors](https://github.com/huggingface/transformers/blob/4fd89e49788f60b021b4e2c578a1fb12f1e900e4/src/transformers/image_utils.py#LL380C7-L380C7), this just makes the step more explicit and controllable by the user. Previously, images would have their pixel values divided by 255 if `do_normalize=True` and the input image was a `PIL.Image.Image` or `do_resize=True`, however they wouldn't be rescaled if the input was a numpy array and `do_resize=False`. This ensures consistent rescaling behaviour, regardless of the input type. As a result, there may be differences in the resulting images between the old feature extractors and new image processors if your input images are numpy arrays and `do_resize=False`, however, the resulting image should be consistent across input types for all flag combinations with the new image processors. ",
"@amyeroberts Thank you so much for the detailed explanation. That all make sense to me. I just wanted to share this to ensure that this is an expected behavior from the implementation. Based on your explanation, I think we should be able to close this issue."
] | 1,671
| 1,672
| 1,672
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.4.0-1096-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed
### Who can help?
@amyeroberts Is there any reason that the default value for `do_rescale` is set to `True`? Does it interfere with existing ViT pre-trained models such as Google's ViT? I am trying to pre-train my model and noticed this discrepancy, which could arguably impact accuracy. I am just trying to figure out the rationale behind it.
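For reference, a minimal sketch of opting out of the rescaling step per call (assuming inputs are already scaled to `[0, 1]`; the checkpoint is the one from the reproduction below):
```python
import numpy as np
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")

image = np.random.rand(224, 224, 3).astype(np.float32)  # pixel values already in [0, 1]
# Skip the extra division by 255 for pre-scaled inputs
inputs = feature_extractor(image, do_rescale=False, return_tensors="pt")
```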
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoFeatureExtractor

Autofeature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
Autofeature_extractor
```
```
ViTImageProcessor {
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.5,
0.5,
0.5
],
"image_processor_type": "ViTImageProcessor",
"image_std": [
0.5,
0.5,
0.5
],
"resample": 2,
"rescale_factor": 0.00392156862745098,
"size": {
"height": 224,
"width": 224
}
}
```
### Expected behavior
```
ViTImageProcessor {
"do_normalize": true,
"do_resize": true,
"image_mean": [
0.5,
0.5,
0.5
],
"image_processor_type": "ViTImageProcessor",
"image_std": [
0.5,
0.5,
0.5
],
"resample": 2,
"size": {
"height": 224,
"width": 224
}
}
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20867/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20866
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20866/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20866/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20866/events
|
https://github.com/huggingface/transformers/pull/20866
| 1,506,954,569
|
PR_kwDOCUB6oc5F_fz1
| 20,866
|
Update image processor parameters if creating with kwargs
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Ensures backwards compatibility with previous feature extractor creation when using `from_pretrained` and `from_dict`.
**Updates attribute before instantiation:**
Previously, `size` attributes were stored as an int/tuple. This has been updated to a dictionary to remove the ambiguity about whether the int represents the shortest edge, height or width. If the feature extractor / image processor is created with `size` as an int, it is converted to the appropriate dictionary with a `logging.info` message. However, if the image processor is created using `from_pretrained` or `from_dict` with `size` as a kwarg, the class is first instantiated and then the `size` kwarg overwrites the class parameter. In this case, `image_processor.size` is an int and is not converted to the correct dictionary format. This PR makes sure the dict creating the instance has the updated value, which is then converted to a dict if necessary.
**Renames attribute before instantiation:**
Some feature extractor instance attributes were renamed or removed when updating to image processors. For example, `reduce_labels` became `do_reduce_labels` to ensure naming consistency, and `max_size` is now part of the `size` dictionary as `size["longest_edge"]`. In `from_dict`, if a removed attribute is passed in as a kwarg under its old name, it won't be set on the instance. This PR updates the name of the attribute in `from_dict` to the new name if necessary (see the sketch after the examples below).
Previously:
```python
>>> image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", size=600, max_size=800)
>>> image_processor
DetrImageProcessor {
"do_normalize": true,
"do_pad": true,
"do_rescale": true,
"do_resize": true,
"feature_extractor_type": "DetrFeatureExtractor",
"format": "coco_detection",
"image_mean": [
0.485,
0.456,
0.406
],
"image_processor_type": "DetrImageProcessor",
"image_std": [
0.229,
0.224,
0.225
],
"resample": 2,
"rescale_factor": 0.00392156862745098,
"size": 600
}
```
Now:
```python
>>> image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", size=600, max_size=800)
>>> image_processor
DetrImageProcessor {
"do_normalize": true,
"do_pad": true,
"do_rescale": true,
"do_resize": true,
"feature_extractor_type": "DetrFeatureExtractor",
"format": "coco_detection",
"image_mean": [
0.485,
0.456,
0.406
],
"image_processor_type": "DetrImageProcessor",
"image_std": [
0.229,
0.224,
0.225
],
"resample": 2,
"rescale_factor": 0.00392156862745098,
"size": {
"longest_edge": 800,
"shortest_edge": 600
}
}
```
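And a hedged sketch of the rename path (the checkpoint and values are illustrative):
```python
>>> image_processor = BeitImageProcessor.from_pretrained(
...     "microsoft/beit-base-patch16-224-pt22k", reduce_labels=True
... )
>>> image_processor.do_reduce_labels  # old `reduce_labels` kwarg mapped to the new name
True
```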
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20866/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20866",
"html_url": "https://github.com/huggingface/transformers/pull/20866",
"diff_url": "https://github.com/huggingface/transformers/pull/20866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20866.patch",
"merged_at": 1672842588000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20865
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20865/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20865/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20865/events
|
https://github.com/huggingface/transformers/pull/20865
| 1,506,818,637
|
PR_kwDOCUB6oc5F_CPR
| 20,865
|
change strings to f-strings in image_processing_utils.py
|
{
"login": "dhansmair",
"id": 21751746,
"node_id": "MDQ6VXNlcjIxNzUxNzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/21751746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhansmair",
"html_url": "https://github.com/dhansmair",
"followers_url": "https://api.github.com/users/dhansmair/followers",
"following_url": "https://api.github.com/users/dhansmair/following{/other_user}",
"gists_url": "https://api.github.com/users/dhansmair/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhansmair/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhansmair/subscriptions",
"organizations_url": "https://api.github.com/users/dhansmair/orgs",
"repos_url": "https://api.github.com/users/dhansmair/repos",
"events_url": "https://api.github.com/users/dhansmair/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhansmair/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Not top priority; it's just that there's a Python string which should be an f-string :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20865/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20865",
"html_url": "https://github.com/huggingface/transformers/pull/20865",
"diff_url": "https://github.com/huggingface/transformers/pull/20865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20865.patch",
"merged_at": 1671692810000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20864
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20864/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20864/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20864/events
|
https://github.com/huggingface/transformers/pull/20864
| 1,506,704,229
|
PR_kwDOCUB6oc5F-pWw
| 20,864
|
Adding support for `fp16` for asr pipeline.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20862
Many things were considered before settling on this design.
- `feature_extractor(return_tensors="pt", torch_dtype=torch_dtype)`. This would have the advantage of being consistent, but not all feature extractors define this, so it would affect all of them. Then why would we use `torch_dtype` instead of the more commonplace `dtype`, which could be applied to TF and Flax as well? Also, it feels a bit redundant to specify both `return_tensors` and `torch_dtype`; they would be good candidates to fuse into a single parameter (but that is outside the scope of this PR).
- `AutoFeatureExtractor.from_pretrained(..., torch_dtype=torch_dtype)`. This would have the advantage of being global, so users wouldn't need to respecify it on each call. However, we can't specify `return_tensors="pt"` there either, so for consistency I didn't try to put it there.
- `ffmpeg_read(..., dtype=dtype)`. This would be nice, as it would load the waveform directly in fp16 and just let fp16 flow through the feature extractor. However, Whisper in particular uses a mel spectrogram, so using fp16 audio might actually damage performance.
In the end, this solution is the simplest I could come up with: let `torch_dtype` flow to the pipeline, use it as a regular parameter, and convert the output of the feature extractor afterwards.
This does incur a potential extra copy, but there's no risk of damaging the quality of the input.
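A minimal sketch of that conversion step, assuming a hypothetical helper (`_cast_processed` is illustrative, not the exact code in this PR):
```python
import torch

def _cast_processed(processed: dict, torch_dtype: torch.dtype) -> dict:
    # Cast only floating-point tensors (e.g. input_features); integer tensors
    # such as attention masks are left untouched.
    return {
        k: v.to(torch_dtype) if isinstance(v, torch.Tensor) and v.is_floating_point() else v
        for k, v in processed.items()
    }
```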
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20864/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20864",
"html_url": "https://github.com/huggingface/transformers/pull/20864",
"diff_url": "https://github.com/huggingface/transformers/pull/20864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20864.patch",
"merged_at": 1671787126000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20863
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20863/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20863/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20863/events
|
https://github.com/huggingface/transformers/pull/20863
| 1,506,668,133
|
PR_kwDOCUB6oc5F-hZF
| 20,863
|
Update `HubertModelIntegrationTest.test_inference_keyword_spotting`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Our CI was updated to use torch 1.13 (which also uses CUDA 11.6) instead of torch 1.12 (CUDA 11.3), and this test now fails. The tolerance `2e-2` is no longer enough, but the test passes with `3e-2`.
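For context, a sketch of the kind of tolerance check being relaxed (values are illustrative, not the actual test code):
```python
import torch

logits = torch.randn(1, 12)      # stand-in for the model output
expected = logits + 0.025        # drift on the order seen with torch 1.13 / CUDA 11.6
assert torch.allclose(logits, expected, atol=3e-2)       # new tolerance passes
assert not torch.allclose(logits, expected, atol=2e-2)   # old tolerance fails
```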
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20863/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20863",
"html_url": "https://github.com/huggingface/transformers/pull/20863",
"diff_url": "https://github.com/huggingface/transformers/pull/20863.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20863.patch",
"merged_at": 1671644414000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20862
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20862/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20862/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20862/events
|
https://github.com/huggingface/transformers/issues/20862
| 1,506,461,259
|
I_kwDOCUB6oc5ZysZL
| 20,862
|
Run `AutomaticSpeechRecognitionPipeline` with FP16
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Good catch ! Thanks for the tip.\r\n\r\nI started considering a few options into how this could be done, and I ended up doing the PR, please review if you can (sorry for doing it, I usually like when contributors can do it, but as I wasn't sure of what exactly should be done and I was exploring, I ended up doing it :D )\r\n\r\nhttps://github.com/huggingface/transformers/pull/20864",
"No problem! Thanks for the explication in the PR! Just leave some comments"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
### Feature request
Hi @Narsil,
I would like to run inference with `AutomaticSpeechRecognitionPipeline` in FP16 using some large models (e.g. Whisper). But I don't believe it's supported in the current version (please correct me if I'm wrong here).
### Reproduction
Below is a code snippet to reproduce the behavior.
```python
import torch
from transformers import pipeline
pipe = pipeline(model="openai/whisper-base", device=0, torch_dtype=torch.float16)
pipe("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
```
When running this we see the following stack trace:
```
RuntimeError Traceback (most recent call last)
Cell In[1], line 5
2 from transformers import pipeline
4 pipe = pipeline(model="openai/whisper-base", device=0, torch_dtype=torch.float16)
----> 5 pipe("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac")
File ~/transformers/src/transformers/pipelines/automatic_speech_recognition.py:232, in AutomaticSpeechRecognitionPipeline.__call__(self, inputs, **kwargs)
191 def __call__(
192 self,
193 inputs: Union[np.ndarray, bytes, str],
194 **kwargs,
195 ):
196 """
197 Transcribe the audio sequence(s) given as inputs to text. See the [`AutomaticSpeechRecognitionPipeline`]
198 documentation for more information.
(...)
230 `"".join(chunk["text"] for chunk in output["chunks"])`.
231 """
--> 232 return super().__call__(inputs, **kwargs)
File ~/transformers/src/transformers/pipelines/base.py:1074, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1072 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1073 else:
-> 1074 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File ~/transformers/src/transformers/pipelines/base.py:1096, in ChunkPipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1094 all_outputs = []
1095 for model_inputs in self.preprocess(inputs, **preprocess_params):
-> 1096 model_outputs = self.forward(model_inputs, **forward_params)
1097 all_outputs.append(model_outputs)
1098 outputs = self.postprocess(all_outputs, **postprocess_params)
File ~/transformers/src/transformers/pipelines/base.py:990, in Pipeline.forward(self, model_inputs, **forward_params)
988 with inference_context():
989 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
--> 990 model_outputs = self._forward(model_inputs, **forward_params)
991 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
992 else:
File ~/transformers/src/transformers/pipelines/automatic_speech_recognition.py:370, in AutomaticSpeechRecognitionPipeline._forward(self, model_inputs)
364 # we need to pass `processed.get("attention_mask")` here since audio encoder
365 # attention mask length is different from expected text decoder `encoder_attention_mask` length
366 # `generate` magic to create the mask automatically won't work, we basically need to help
367 # it here.
368 attention_mask = model_inputs.pop("attention_mask", None)
369 tokens = self.model.generate(
--> 370 encoder_outputs=encoder(inputs, attention_mask=attention_mask),
371 attention_mask=attention_mask,
372 )
374 out = {"tokens": tokens}
376 else:
File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:654, in WhisperEncoder.forward(self, input_features, attention_mask, head_mask, output_attentions, output_hidden_states, return_dict)
650 output_hidden_states = (
651 output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
652 )
653 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
--> 654 inputs_embeds = nn.functional.gelu(self.conv1(input_features))
655 inputs_embeds = nn.functional.gelu(self.conv2(inputs_embeds))
657 inputs_embeds = inputs_embeds.permute(0, 2, 1)
File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs)
1186 # If we don't have any hooks, we want to skip the rest of the logic in
1187 # this function, and just call forward.
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1189 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1190 return forward_call(*input, **kwargs)
1191 # Do not call functions when jit is used
1192 full_backward_hooks, non_full_backward_hooks = [], []
File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/nn/modules/conv.py:313, in Conv1d.forward(self, input)
312 def forward(self, input: Tensor) -> Tensor:
--> 313 return self._conv_forward(input, self.weight, self.bias)
File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/nn/modules/conv.py:309, in Conv1d._conv_forward(self, input, weight, bias)
305 if self.padding_mode != 'zeros':
306 return F.conv1d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
307 weight, bias, self.stride,
308 _single(0), self.dilation, self.groups)
--> 309 return F.conv1d(input, weight, bias, self.stride,
310 self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
```
### System Info
```
- `transformers` version: 4.26.0.dev0
```
### Motivation
To accelerate inference and use less memory when running the pipeline with large models.
### Your contribution
Right now I just force the cast to FP16 after calling the `feature_extractor`, to make the pipeline run for Whisper models (I'm not yet using chunked inference).
https://github.com/huggingface/transformers/blob/3090e708577e2d0145ab81d0e2362e3235aebbd9/src/transformers/pipelines/automatic_speech_recognition.py#L338-L340
```python
processed["input_features"] = processed["input_features"].to(self.model.config.torch_dtype)
```
But I understand it would be better to wrap this in another function that accounts for the input names of different models (a sketch of what I have in mind is below). Willing to make a PR if you can guide me here :)
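A minimal sketch of what such a function could look like (the helper name `cast_floating_inputs` and its placement are just an illustration, not an existing API):
```python
import torch

def cast_floating_inputs(model_inputs: dict, dtype: torch.dtype) -> dict:
    # Cast only floating-point tensors (e.g. "input_features" for Whisper,
    # "input_values" for Wav2Vec2) to the target dtype; integer tensors such
    # as "attention_mask" are left untouched.
    return {
        name: tensor.to(dtype)
        if isinstance(tensor, torch.Tensor) and tensor.is_floating_point()
        else tensor
        for name, tensor in model_inputs.items()
    }
```
This would avoid hard-coding `input_features` and should work with the outputs of any feature extractor.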
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20862/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20861
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20861/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20861/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20861/events
|
https://github.com/huggingface/transformers/pull/20861
| 1,506,417,682
|
PR_kwDOCUB6oc5F9raM
| 20,861
|
[Past CI] π₯ Leave Past CI failures in the past π₯
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Make Past CI (with `torch 1.8`) cleaner
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20861/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20861",
"html_url": "https://github.com/huggingface/transformers/pull/20861",
"diff_url": "https://github.com/huggingface/transformers/pull/20861.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20861.patch",
"merged_at": 1672162646000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20860
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20860/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20860/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20860/events
|
https://github.com/huggingface/transformers/pull/20860
| 1,506,040,902
|
PR_kwDOCUB6oc5F8Yxq
| 20,860
|
[`MobileNet-v2`] Fix ONNX typo
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I can confirm the script below works:\r\n```\r\nfrom transformers import AutoModelForImageClassification\r\nfrom transformers.onnx import FeaturesManager\r\n\r\nmodel = AutoModelForImageClassification.from_pretrained(\"google/mobilenet_v2_1.0_224\")\r\nmodel_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model)\r\n```\r\nwhich yields to successfully retrieving the MobileNet ONNX config, however running `optimum-cli export onnx --model google/mobilenet_v2_1.0_224 onnx/` gives an error \r\n```\r\nKeyError: \"mobilenet-v2 is not supported yet. Only {'xlm', 'deberta', 'distilbert', 'electra', 'mobilebert', 'm2m-100', 'segformer', 'convbert', 'longt5', 'data2vec-text', 'flaubert', 'gptj', 'detr', 'layoutlmv3', 'mobilevit', 'groupvit', 'levit', 'mbart', 'big-bird', 'albert', 'bloom', 't5', 'swin', 'roberta', 'blenderbot', 'bert', 'yolos', 'marian', 'deit', 'layoutlm', 'perceiver', 'xlm-roberta', 'vit', 'gpt-neo', 'mt5', 'bigbird-pegasus', 'codegen', 'clip', 'whisper', 'data2vec-vision', 'squeezebert', 'convnext', 'deberta-v2', 'ibert', 'roformer', 'blenderbot-small', 'bart', 'beit', 'resnet', 'camembert', 'gpt2'} are supported. If you want to support mobilenet-v2 please propose a PR or open up an issue.\"\r\n```\r\nMobilenet is not listed in `optimum.exporters.tasks`, probably by mistake or for some other reason. Opened a PR to support `Mobilenet` in optimum, https://github.com/huggingface/optimum/pull/633 I can confirm the model can be safely exported after checking out this PR! ",
"Now the PR https://github.com/huggingface/optimum/pull/633 being merged the export works as expected, merging"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/20856
In order to determine which models are exportable with ONNX, we first need to pre-process the model type by replacing `_` with `-` before checking it with `FeaturesManager`.
This had been forgotten for the `mobilenet` family of models, which leads to errors such as the one described in #20856.
It seems that the proper way to call the method `get_supported_features_for_model_type` is to first apply this pre-processing, as is done [here](https://github.com/huggingface/transformers/blob/d87e381f9303c7d6a8aa7333dc09ed767de7395f/src/transformers/onnx/features.py#L723). This patch has been added to `tests/onnx/test_onnx_v2.py` too.
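A minimal sketch of that pre-processing, assuming the registry fix from this PR is applied (the checkpoint is the one from the linked issue):
```python
from transformers import AutoConfig
from transformers.onnx import FeaturesManager

config = AutoConfig.from_pretrained("google/mobilenet_v2_1.0_224")
# config.model_type uses underscores ("mobilenet_v2"), while the ONNX feature
# table is keyed with dashes, so the type is normalized before the lookup.
model_type = config.model_type.replace("_", "-")  # -> "mobilenet-v2"
features = FeaturesManager.get_supported_features_for_model_type(model_type)
```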
This PR fixes these issues.
Also, I would like to hear from @lewtun, as it's my first ONNX-related PR.
cc @sgugger @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20860/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20860",
"html_url": "https://github.com/huggingface/transformers/pull/20860",
"diff_url": "https://github.com/huggingface/transformers/pull/20860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20860.patch",
"merged_at": 1671731574000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20859
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20859/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20859/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20859/events
|
https://github.com/huggingface/transformers/pull/20859
| 1,506,029,867
|
PR_kwDOCUB6oc5F8WbG
| 20,859
|
Fix past CI by Skipping `LevitModelTest.test_problem_types`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Fix Past CI by skipping `LevitModelTest.test_problem_types` for `PyTorch 1.9`.
This test fails with torch 1.9 with a CUDA error, but it passes with `torch 1.8` and `torch >= 1.10`.
The error is
```bash
input = <[RuntimeError('CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously repor...incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.') raised in repr()] Tensor object at 0x7fcb37dbd900>
weight = <[RuntimeError('CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously repor...orrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.') raised in repr()] Parameter object at 0x7fcb36c935c0>, bias = None
FAILED tests/models/levit/test_modeling_levit.py::LevitModelTest::test_problem_types - RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
```
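A minimal sketch (not the exact patch) of how such a version-specific skip can be expressed:
```python
import unittest

import torch
from packaging import version

# True only on torch 1.9.x, the release where the CUDA error occurs.
IS_TORCH_1_9 = version.parse(torch.__version__).base_version.startswith("1.9")

class LevitModelTest(unittest.TestCase):
    @unittest.skipIf(IS_TORCH_1_9, "test_problem_types hits a CUDA error on torch 1.9")
    def test_problem_types(self):
        ...  # the real test body lives in tests/models/levit/test_modeling_levit.py
```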
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20859/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20859",
"html_url": "https://github.com/huggingface/transformers/pull/20859",
"diff_url": "https://github.com/huggingface/transformers/pull/20859.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20859.patch",
"merged_at": 1671629353000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20858
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20858/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20858/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20858/events
|
https://github.com/huggingface/transformers/pull/20858
| 1,505,998,807
|
PR_kwDOCUB6oc5F8Pqx
| 20,858
|
Remove more unused attributes in config classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,672
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Remove more unused attributes in config classes.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20858/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20858",
"html_url": "https://github.com/huggingface/transformers/pull/20858",
"diff_url": "https://github.com/huggingface/transformers/pull/20858.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20858.patch",
"merged_at": 1672753060000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20857
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20857/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20857/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20857/events
|
https://github.com/huggingface/transformers/pull/20857
| 1,505,882,041
|
PR_kwDOCUB6oc5F72gE
| 20,857
|
Use `config.num_channels` in CLIP-like modeling files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
`config.num_channels` is not used in some CLIP-like modeling files. Unlike previous PRs like #20596 or #20844, this PR makes the modeling files actually use this attribute.
The only breaking case is when a user previously set `config.num_channels=X` with `X != 3`, which is super unlikely IMO. (Even if they did so, the actual Conv2d layer still uses `3`, as it is hard-coded on the current `main` branch.)
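A minimal sketch of the change in the patch-embedding layer (the class name is an abbreviated stand-in for the CLIP-like vision embeddings):
```python
from torch import nn

class VisionEmbeddings(nn.Module):  # stand-in for e.g. CLIPVisionEmbeddings
    def __init__(self, config):
        super().__init__()
        self.patch_embedding = nn.Conv2d(
            in_channels=config.num_channels,  # previously hard-coded to 3
            out_channels=config.hidden_size,
            kernel_size=config.patch_size,
            stride=config.patch_size,
            bias=False,
        )
```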
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20857/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20857",
"html_url": "https://github.com/huggingface/transformers/pull/20857",
"diff_url": "https://github.com/huggingface/transformers/pull/20857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20857.patch",
"merged_at": 1671619884000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20856
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20856/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20856/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20856/events
|
https://github.com/huggingface/transformers/issues/20856
| 1,505,773,188
|
I_kwDOCUB6oc5ZwEaE
| 20,856
|
transformers.onnx mobilenet_v2 not supported but exists in supported list
|
{
"login": "ashim-mahara",
"id": 48154590,
"node_id": "MDQ6VXNlcjQ4MTU0NTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/48154590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashim-mahara",
"html_url": "https://github.com/ashim-mahara",
"followers_url": "https://api.github.com/users/ashim-mahara/followers",
"following_url": "https://api.github.com/users/ashim-mahara/following{/other_user}",
"gists_url": "https://api.github.com/users/ashim-mahara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashim-mahara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashim-mahara/subscriptions",
"organizations_url": "https://api.github.com/users/ashim-mahara/orgs",
"repos_url": "https://api.github.com/users/ashim-mahara/repos",
"events_url": "https://api.github.com/users/ashim-mahara/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashim-mahara/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"It's just a typo in the list it looks like. cc @younesbelkada or @ArthurZucker if you want to make a quick fix.",
"on it!",
"Hi @ashim-mahara \r\nYou should be now be able to export mobilenet in ONNX using the main branch of optimum and transformers",
"Hi @younesbelkada thank you! that was fast, kudos to you folks!"
] | 1,671
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-6.0.6-76060006-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes (using notebooks for training)
- Using distributed or parallel set-up in script?: no
`KeyError: "mobilenet-v2 is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet_v1', 'mobilenet_v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support mobilenet-v2 please propose a PR or open up an issue."`
### Who can help?
@amyeroberts @NielsRogge
### Information
- [X] My own modified scripts
### Tasks
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForImageClassification
from transformers.onnx import FeaturesManager
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224")
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model)
```
### Expected behavior
Should be able to get the `model_onnx_config` for export.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20856/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20855
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20855/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20855/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20855/events
|
https://github.com/huggingface/transformers/issues/20855
| 1,505,695,114
|
I_kwDOCUB6oc5ZvxWK
| 20,855
|
Gradient accumulation trick and Activation Checkpointing feature
|
{
"login": "buttercutter",
"id": 3324659,
"node_id": "MDQ6VXNlcjMzMjQ2NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3324659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buttercutter",
"html_url": "https://github.com/buttercutter",
"followers_url": "https://api.github.com/users/buttercutter/followers",
"following_url": "https://api.github.com/users/buttercutter/following{/other_user}",
"gists_url": "https://api.github.com/users/buttercutter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buttercutter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buttercutter/subscriptions",
"organizations_url": "https://api.github.com/users/buttercutter/orgs",
"repos_url": "https://api.github.com/users/buttercutter/repos",
"events_url": "https://api.github.com/users/buttercutter/events{/privacy}",
"received_events_url": "https://api.github.com/users/buttercutter/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @sanchit-gandhi ",
"Hey @buttercutter! It looks like there are two different feature requests going on here! Let's focus on the JAX gradient accumulation one since this more relevant to the 'motivation' and code snippet you've provided. Feel free to open a separate issue for DeepSpeed activation checkpointing.\r\n\r\nUnfortunately, gradient accumulation in JAX isn't as straightforward as using `optax.apply_every`! If you dig through the source code, you'll actually find that using `apply_every` with a batch size of N/2 and 2 accumulation steps is not necessarily equivalent to not using `apply_every` with a batch size of N. See https://optax.readthedocs.io/en/latest/api.html#optax.apply_every\r\n\r\nThere is an alternative in `optax.MultiSteps`: https://optax.readthedocs.io/en/latest/api.html#optax.MultiSteps. This will give correct gradient equivalence between using gradient accumulation and not using gradient accumulation. However in my experiments, I found it to be not super memory efficient, and consequently quite an unreliable means of using gradient accumulation. For this reason, I took the decision not to add it to the examples scripts. \r\n\r\nFeel free to experiment with using `optax.MultiSteps` in your code! If you're able to get nice performance, we can explore adding it to the examples scripts! It'd be cool to benchmark the maximum permissible batch size you get without gradient accumulation, and then the maximum effective batch size you get with gradient accumulation!\r\n\r\nIn my experiments, the most memory efficient way of implementing gradient accumulation was to to write a custom loop: https://github.com/sanchit-gandhi/seq2seq-speech/blob/669e51452c396b3b8605c9ac7511da8abe31038f/run_flax_speech_recognition_seq2seq.py#L1352\r\nNow while this is the most memory efficient way, it's the most complicated in terms of code understanding! For this reason, it's also not a good fit for the Transformers examples scripts, which we try and keep as clean and lightweight as possible.",
"I am using your [custom loop](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/35f9ae01f85745143f56a5b049596ebe3c57a145#file-run_summarization_flax-py-L1174) for `train_step()`, but I have the following error:\r\n\r\nNote: In my code, ` training_args.per_device_gradient_accumulation_steps = 10` , and `training_args.per_device_train_batch_size = 8` and `batch` has shape of `(8, 3600)`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_summarization_flax.py\", line 1338, in <module>\r\n main()\r\n File \"run_summarization_flax.py\", line 1264, in main\r\n state, train_metric = p_train_step(state, batch)\r\n File \"/home/moe/.local/lib/python3.8/site-packages/chex/_src/fake.py\", line 175, in wrapped_fn\r\n output = vmapped_fn(*call_args)\r\n File \"run_summarization_flax.py\", line 1173, in train_step\r\n batch = jax.tree_map(\r\n File \"run_summarization_flax.py\", line 1174, in <lambda>\r\n lambda x: x.reshape(\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py\", line 793, in _reshape\r\n return lax.reshape(a, newshape, None)\r\njax.core.InconclusiveDimensionOperation: Cannot divide evenly the sizes of shapes (8, 8, 3600) and (8, 10, 8, 3600)\r\n```\r\n\r\n",
"@sanchit-gandhi When I run [your original python script without any modifications](https://github.com/sanchit-gandhi/seq2seq-speech/blob/669e51452c396b3b8605c9ac7511da8abe31038f/run_flax_speech_recognition_seq2seq.py), it gave `free(): invalid pointer` ?\r\n\r\nAnd when I use [run_librispeech.sh](https://github.com/sanchit-gandhi/seq2seq-speech/blob/2765278c6a37d642d99bda8e52dfc9d8a983b4ed/scripts/seq2seq/run_librispeech.sh) , it gave similar error on `free()`again.\r\n\r\n```\r\nsh run_librispeech.sh \r\nsrc/tcmalloc.cc:332] Attempt to free invalid pointer 0x7fc48dd90558 \r\nAborted (core dumped)\r\n```",
"@sanchit-gandhi I am not able to use your original python script, hence I proceed with [my own python script with the following slight modification ](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/revisions#diff-a8b873e9d2d0489c80ac16d9b2dbd0706efea6bc1947ae235eef864ee5c7b050L1175)to get it past the dimension runtime error.\r\n\r\nNote that the `-1` in the `reshape` operation means that the size of the last dimension will be inferred from the size of `x` and the other dimensions. Hence the following modification will reshape `batch` to have shape `(8, 10, 3600)`\r\n\r\n```\r\n# add a first dimension over gradient_accumulation_steps for minibatch slices\r\nbatch = jax.tree_map(\r\n lambda x: x.reshape(\r\n training_args.per_device_train_batch_size, training_args.per_device_gradient_accumulation_steps, -1 #*x.shape[1::]\r\n ),\r\n batch,\r\n)\r\n```",
"Hey @buttercutter! Sorry for the late reply here! \r\n\r\nThe shape mismatch error you are experiencing is likely due to a difference in the number of accelerator devices. I purposed my script for a TPU v3-8 (8 devices), whereas it looks like you're testing on a single GPU (1 device).\r\n\r\nWith multiple devices, we shard the data across devices by prepending an extra dimension to the start of the data: `(num_devices, per_device_train_batch_size, input_shape)`. \r\n\r\nWe don't get this extra dimension with one device: since we run everything on a single GPU, there is no need for any data sharding. This is probably the reason for the shape mis-match we are seeing here (your data is of shape `(per_device_train_batch_size, input_shape)`). The workaround with setting `-1` in the reshape operation looks valid in this case!\r\n\r\nGlad to see the script is working now! Let me know if you encounter any further issues - more than happy to help here!",
"Hey @sanchit-gandhi \r\n\r\nHow to properly [modify line 1208 till line 1230 for enabling gradient accumulation trick](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/4d4b958675c6c8e2f8b988227e2bc5330d8c5312#file-run_summarization_flax-py-L1208-L1230) ?\r\n\r\n\r\n",
"I have turned off `training_args.gradient_checkpointing` option for now because of the following runtime error. Could you also help to advise on this as well ?\r\n\r\n```\r\nAll the weights of FlaxLongT5ForConditionalGeneration were initialized from the model checkpoint at google/long-t5-tglobal-base.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use FlaxLongT5ForConditionalGeneration for predictions without further training.\r\nTraceback (most recent call last):\r\n File \"run_summarization_flax.py\", line 1340, in <module>\r\n main()\r\n File \"run_summarization_flax.py\", line 605, in main\r\n model.enable_gradient_checkpointing()\r\nAttributeError: 'FlaxLongT5ForConditionalGeneration' object has no attribute 'enable_gradient_checkpointing'\r\n```",
"It seems that `AttributeError: 'FlaxLongT5ForConditionalGeneration' object has no attribute 'enable_gradient_checkpointing'` is gone after forced reinstall of transformers library.\r\n\r\nThe only issue left is the [gradient accumulation](https://github.com/huggingface/transformers/issues/20855#issuecomment-1373082511)",
"@sanchit-gandhi [these code changes](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/revisions#diff-a8b873e9d2d0489c80ac16d9b2dbd0706efea6bc1947ae235eef864ee5c7b050) at least **bypass** the gradient accumulation runtime error for now.\r\n\r\n\r\n",
"Hey @buttercutter,\r\n\r\nFor such specific questions, it really helps to provide a reproducible code-snippet, such that the maintainer looking into the issue can replicate the error being faced and dig into the code on their end locally.\r\n\r\nIn this case, I created one that uses a ['tiny random' version](https://huggingface.co/sshleifer/bart-tiny-random) of the BART model so that the forward/backward passes are fast, and a ['mini' version](https://huggingface.co/datasets/iohadrubin/mini_xsum) of the XSUM dataset such that the dataset download and preparation time is small:\r\n```\r\npython run_summarization_flax.py \\\r\n\t--output_dir=\"./\" \\\r\n\t--model_name_or_path=\"sshleifer/bart-tiny-random\" \\\r\n\t--tokenizer_name=\"sshleifer/bart-tiny-random\" \\\r\n\t--dataset_name=\"iohadrubin/mini_xsum\" \\\r\n\t--do_train \\\r\n \t--do_eval \\\r\n\t--predict_with_generate \\\r\n\t--per_device_train_batch_size 8 \\\r\n\t--per_device_eval_batch_size 8 \\\r\n\t--overwrite_output_dir \\\r\n\t--max_source_length=\"64\" \\\r\n \t--max_target_length 32 \\ \r\n```\r\n\r\nI would highly recommend this approach of using tiny/mini versions of the model/dataset when debugging to give a fast feedback loop! Having tiny/mini versions is also good practice when sharing your code, as it allows others to try the code out locally without enormous download and wait times.\r\n\r\nThe easiest thing to do would be to remove all the layer/grad norm logs if you don't need them (L1208-1225). Otherwise, you can follow this fix.\r\n\r\nUpon inspection, the keys for the `layer_grad_norm` and `layer_param_norm` need to be changed for the BART model to include an extra key. The layer grad norm values then need to be made into a `jnp.array`:\r\n\r\n```diff\r\n logs = {\r\n \"layer_grad_norm\": layer_grad_norm,\r\n- \"encoder_grad_norm\": jnp.linalg.norm(jax.tree_util.tree_leaves(layer_grad_norm[\"encoder\"])),\r\n+ \"encoder_grad_norm\": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm[\"model\"][\"encoder\"]))),\r\n- \"decoder_grad_norm\": jnp.linalg.norm(jax.tree_util.tree_leaves(layer_grad_norm[\"decoder\"])),\r\n+ \"decoder_grad_norm\": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm[\"model\"][\"decoder\"]))),\r\n }\r\n```\r\n\r\nHere's the full corrected code snippet:\r\n```python\r\n # compute gradient norms over all layers, total encoder, total decoder and global for detailed monitoring\r\n layer_grad_norm = jax.tree_map(jnp.linalg.norm, grad)\r\n logs = {\r\n \"layer_grad_norm\": layer_grad_norm,\r\n \"encoder_grad_norm\": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm[\"model\"][\"encoder\"]))),\r\n \"decoder_grad_norm\": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm[\"model\"][\"decoder\"]))),\r\n }\r\n logs[\"grad_norm\"] = jnp.linalg.norm([logs[\"encoder_grad_norm\"], logs[\"decoder_grad_norm\"]])\r\n\r\n # compute parameter norms over all layers, total encoder, total decoder and global for detailed monitoring\r\n layer_param_norm = jax.tree_map(jnp.linalg.norm, new_state.params)\r\n logs[\"layer_param_norm\"] = layer_param_norm\r\n logs[\"encoder_param_norm\"] = jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_param_norm[\"model\"][\"encoder\"])))\r\n logs[\"decoder_param_norm\"] = jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_param_norm[\"model\"][\"decoder\"])))\r\n logs[\"param_norm\"] = jnp.linalg.norm([logs[\"encoder_param_norm\"], logs[\"decoder_param_norm\"]])\r\n```\r\n\r\nHope that helps! ",
"@sanchit-gandhi \r\n\r\n`model` key seems not found ?\r\n\r\nLet me also do some debugging at the same time.\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"run_summarization_flax.py\", line 1341, in <module>\r\n main()\r\n File \"run_summarization_flax.py\", line 1270, in main\r\n state, train_metric = p_train_step(state, batch)\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/_src/traceback_util.py\", line 162, in reraise_with_filtered_traceback\r\n return fun(*args, **kwargs)\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/_src/api.py\", line 2253, in cache_miss\r\n execute = pxla.xla_pmap_impl_lazy(fun_, *tracers, **params)\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/pxla.py\", line 974, in xla_pmap_impl_lazy\r\n compiled_fun, fingerprint = parallel_callable(\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/linear_util.py\", line 303, in memoized_fun\r\n ans = call(fun, *args)\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/pxla.py\", line 1245, in parallel_callable\r\n pmap_computation = lower_parallel_callable(\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/_src/profiler.py\", line 314, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/pxla.py\", line 1414, in lower_parallel_callable\r\n jaxpr, consts, replicas, parts, shards = stage_parallel_callable(\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/pxla.py\", line 1321, in stage_parallel_callable\r\n jaxpr, out_sharded_avals, consts = pe.trace_to_jaxpr_final(\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/_src/profiler.py\", line 314, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/partial_eval.py\", line 2065, in trace_to_jaxpr_final\r\n jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/partial_eval.py\", line 1998, in trace_to_subjaxpr_dynamic\r\n ans = fun.call_wrapped(*in_tracers_)\r\n File \"/home/moe/.local/lib/python3.8/site-packages/jax/linear_util.py\", line 167, in call_wrapped\r\n ans = self.f(*args, **dict(self.params, **kwargs))\r\n File \"run_summarization_flax.py\", line 1214, in train_step\r\n \"encoder_grad_norm\": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm[\"model\"][\"encoder\"]))),\r\njax._src.traceback_util.UnfilteredStackTrace: KeyError: 'model'\r\n\r\nThe stack trace below excludes JAX-internal frames.\r\nThe preceding is the original exception that occurred, unmodified.\r\n\r\n--------------------\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"run_summarization_flax.py\", line 1341, in <module>\r\n main()\r\n File \"run_summarization_flax.py\", line 1270, in main\r\n state, train_metric = p_train_step(state, batch)\r\n File \"run_summarization_flax.py\", line 1214, in train_step\r\n \"encoder_grad_norm\": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm[\"model\"][\"encoder\"]))),\r\nKeyError: 'model'\r\n```",
"@sanchit-gandhi I did a print on `layer_grad_norm`, and it seems that `model` is not one of the key.\r\n\r\nCould you advise ?\r\n\r\n```python\r\nlayer_grad_norm = {'decoder': {'block': {'0': {'layer': {'0': {'SelfAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'relative_attention_bias': {'embedding': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '1': {'layer': {'0': {'SelfAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '10': {'layer': {'0': {'SelfAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': 
{'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '11': {'layer': {'0': {'SelfAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '2': {'layer': {'0': {'SelfAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': 
\r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '3': {'layer': {'0': {'SelfAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '4': {'layer': {'0': {'SelfAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '5': {'layer': {'0': {'SelfAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': \r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': 
\r\nTraced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, ...}\r\n[pytree dump truncated for readability: the remaining decoder blocks and all encoder blocks repeat the same structure, and every leaf prints as Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>. The top-level keys are 'decoder', 'encoder', 'lm_head' and 'shared'; note the absence of an enclosing 'model' key.]\r\n```\r\n",
"Hey @buttercutter,\r\n\r\nUnless you're really keen for grad/param norms **and** have your logger set-up for this, the cleanest thing to do would be to strip the grad/param norm code out of the train step. Otherwise it adds unnecessary computations for results that you won't be analysing!\r\n\r\nI can't reproduce your code snippet, but it looks like the model you're using has one less `model` key in its params than the dummy one from my code snippet. If you're set on keeping the logging code in, we need to update the dict references accordingly:\r\n\r\n```python\r\n # compute gradient norms over all layers, total encoder, total decoder and global for detailed monitoring\r\n layer_grad_norm = jax.tree_util.tree_map(jnp.linalg.norm, grad)\r\n logs = {\r\n \"layer_grad_norm\": layer_grad_norm,\r\n \"encoder_grad_norm\": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm[\"encoder\"]))),\r\n \"decoder_grad_norm\": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm[\"decoder\"]))),\r\n }\r\n logs[\"grad_norm\"] = jnp.linalg.norm(jnp.array([logs[\"encoder_grad_norm\"], logs[\"decoder_grad_norm\"]]))\r\n\r\n # compute parameter norms over all layers, total encoder, total decoder and global for detailed monitoring\r\n layer_param_norm = jax.tree_util.tree_map(jnp.linalg.norm, new_state.params)\r\n logs[\"layer_param_norm\"] = layer_param_norm\r\n logs[\"encoder_param_norm\"] = jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_param_norm[\"encoder\"])))\r\n logs[\"decoder_param_norm\"] = jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_param_norm[\"decoder\"])))\r\n logs[\"param_norm\"] = jnp.linalg.norm(jnp.array([logs[\"encoder_param_norm\"], logs[\"decoder_param_norm\"]]))\r\n```\r\n",
"@sanchit-gandhi \r\n\r\nI just confirmed that the suggested code changes to properly include `logs[\"grad_norm\"]` and `logs[\"param_norm\"]` actually caused OOM error on TPU.\r\n\r\n```\r\nEpoch ... (1/16): 0%| | 0/16 [07:05<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"run_summarization_flax.py\", line 1339, in <module>\r\n main()\r\n File \"run_summarization_flax.py\", line 1268, in main\r\n state, train_metric = p_train_step(state, batch)\r\nValueError: RESOURCE_EXHAUSTED: Attempting to allocate 382.18M. That was not possible. There are 375.16M free.; (0x0x0_HBM0): while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well).\r\n```",
"That's probably because training is working now and we're managing to run the script past the previous error no? As mentioned, feel free to remove all the logger code if you're not interested in tracking param/grad norms (this will save you a bit of memory). \r\n\r\nThen you can try reducing your `per_device_train_batch_size` by factors of 2 and increasing `gradient_accumulation_steps` to compensate (i.e. try halving `per_device_train_batch_size` and doubling `gradient_accumulation_steps` until you can run the script without OOMs). We're now into the classic phase of finding a suitable training batch size for our model and accelerator device",
"@sanchit-gandhi \r\n\r\nI had reduced to even the smallest possible value for `per_device_gradient_accumulation_steps=2` with `per_device_train_batch_size=1`, but it still give memory resource exhaustion OOM error.\r\n\r\nNote: Removing all the logger code you provided earlier cleared this OOM error though.",
"Hey @buttercutter! Awesome, if gradient accumulation is working without the logging code it sounds like we're in a good position π I'll close this issue unless there's anything else regarding grad accumulation you wanted to ask!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,677
| 1,677
|
NONE
| null |
### Feature request
1. Adds the gradient accumulation trick to https://github.com/huggingface/transformers/blob/main/examples/flax/summarization/run_summarization_flax.py
2. Adds the [Activation Checkpointing feature](https://github.com/microsoft/DeepSpeed/issues/2302#issuecomment-1320728107)
### Motivation
To address GPU memory issues as well as speed up the training process.
In the `Your contribution` section below, might I ask whether the extra `if-else` block makes sense, or whether we even need `optax.apply_every()` for gradient accumulation?
### Your contribution
The following `jax` code is [modified](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/49cf1b815fce39ea9d192d5d916a51243e71a2c3#file-run_summarization_flax-py-L913-L919) from the [original huggingface version](https://github.com/huggingface/transformers/blob/main/examples/flax/summarization/run_summarization_flax.py)
```python
batch_size_per_update = train_batch_size * training_args.gradient_accumulation_steps
# add gradient accumulation
if training_args.gradient_accumulation_steps > 1:
    optimizer = optax.chain(
        optax.apply_every(batch_size_per_update), optimizer
    )
# Setup train state
state = TrainState.create(apply_fn=model.__call__, params=model.params, tx=optimizer, dropout_rng=dropout_rng)
```
```python
if len(accumulated_gradients) < training_args.gradient_accumulation_steps:
    accumulated_gradients.append(grad)
    new_state = state
else:
    grad = jax.tree_multimap(lambda *x: jnp.sum(jnp.stack(x), axis=0), *accumulated_gradients)
    new_state = state.apply_gradients(grads=grad, dropout_rng=new_dropout_rng)
    accumulated_gradients = []
```
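A hedged alternative sketch, not from the issue itself: `optax.MultiSteps` implements gradient accumulation directly and is generally preferable to `optax.apply_every`, which accumulates *updates* and only matches true gradient accumulation for SGD-like optimizers (note also that `jax.tree_multimap` has been deprecated in favor of `jax.tree_util.tree_map`). The names below (`training_args`, `optimizer`, `model`, `TrainState`, `dropout_rng`) mirror the snippet above and are assumptions about the surrounding script:

```python
import optax

# MultiSteps keeps an internal gradient accumulator and hands an update to the
# wrapped optimizer only once every `every_k_schedule` calls, so the train step
# needs no manual if-else or gradient list (at the cost of one extra
# gradient-sized buffer in the optimizer state).
if training_args.gradient_accumulation_steps > 1:
    optimizer = optax.MultiSteps(optimizer, every_k_schedule=training_args.gradient_accumulation_steps)

# Setup train state exactly as before
state = TrainState.create(apply_fn=model.__call__, params=model.params, tx=optimizer, dropout_rng=dropout_rng)
```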
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20855/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20854
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20854/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20854/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20854/events
|
https://github.com/huggingface/transformers/issues/20854
| 1,505,634,361
|
I_kwDOCUB6oc5Zvig5
| 20,854
|
XLM-R has extremely low accuracy after fine-tuning on MNLI
|
{
"login": "Lollipop",
"id": 37130081,
"node_id": "MDQ6VXNlcjM3MTMwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/37130081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lollipop",
"html_url": "https://github.com/Lollipop",
"followers_url": "https://api.github.com/users/Lollipop/followers",
"following_url": "https://api.github.com/users/Lollipop/following{/other_user}",
"gists_url": "https://api.github.com/users/Lollipop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lollipop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lollipop/subscriptions",
"organizations_url": "https://api.github.com/users/Lollipop/orgs",
"repos_url": "https://api.github.com/users/Lollipop/repos",
"events_url": "https://api.github.com/users/Lollipop/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lollipop/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I think this kind of question is more suited to the `forum`, it is a discussion rather than a bug. "
] | 1,671
| 1,675
| 1,675
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker @younesbelkada
about `xlm-roberta-large` performance on GLUE/MNLI
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run below command from official example (run_glue.py for text classification):
```bash
seed=42
epochs=3
lr=2e-5
max_length=128
batch_size=32
# Parameters for AltCLIP
MODEL_NAME_OR_PATH=xlm-roberta-large
device=0
for task_name in mnli
do
if [ $task_name = "mrpc" ]; then
epochs=5
fi
if [ $task_name = "stsb" ]; then
metric=spearmanr
elif [ $task_name = "qqp" ] || [ $task_name = "mrpc" ]; then
metric=f1
else
metric=accuracy
fi
model_name=${MODEL_NAME_OR_PATH##*/}
output_dir=evaluation/$model_name/glue/$task_name/$seed
if [ ! -d "$output_dir" ]; then
mkdir -p $output_dir
else
echo "$output_dir does exist"
fi
CUDA_VISIBLE_DEVICES=$device python glue.py \
--model_name_or_path $MODEL_NAME_OR_PATH \
--task_name $task_name \
--cache_dir cache/$model_name \
--overwrite_cache \
--do_train \
--overwrite_output_dir \
--do_eval \
--do_predict \
--max_seq_length $max_length \
--per_device_train_batch_size $batch_size \
--per_device_eval_batch_size $batch_size \
--evaluation_strategy steps \
--learning_rate $lr \
--num_train_epochs $epochs \
--save_total_limit 2 \
--load_best_model_at_end \
--metric_for_best_model eval_$metric \
--greater_is_better true \
--seed $seed \
--output_dir $output_dir > $output_dir/log.txt 2>&1
done
```
2. The accuracy is quite low:
eval_mnli/acc: 35.44%
eval_mnli-mm/acc: 35.22%
### Expected behavior
Higher performance on MNLI. Running the script with `bert-base` or `roberta-base` instead yields around 85+ points.
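A hedged aside, not part of the original report: MNLI is a balanced 3-way task, so ~35% accuracy usually means the classifier collapsed to a single label, and `xlm-roberta-large` fine-tuning is known to be sensitive to the learning rate and seed. A smaller learning rate with explicit warmup often avoids the collapse; the exact values below are assumptions to tune, not verified settings:

```bash
# Illustrative only: a smaller LR plus warmup often stabilizes xlm-roberta-large
# on GLUE tasks; pass the extra flag through to the run_glue.py invocation.
lr=5e-6
extra_args="--warmup_ratio 0.06"
```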
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20854/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20853
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20853/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20853/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20853/events
|
https://github.com/huggingface/transformers/pull/20853
| 1,505,068,288
|
PR_kwDOCUB6oc5F5Kzf
| 20,853
|
Fix TF generation (especially for `TFMarian`)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey @ydshieh π \r\n\r\nThank you for opening this PR, it made me realize a detail that is wrong in *both* frameworks π \r\n\r\nWe know that logprobs is a negative value, and we want to maximize it in beam search (i.e. make it as close to 0 as possible). Since logprobs is always negative, and the final score is the sum of the logprobs, we can anticipate the best possible score and use it to end beam search with no drawback. Well, it turns out that the method to compute the best possible score depends on `length_penalty`, and we are not accounting for that!\r\n\r\n- Scenario 1, length_penalty > 0.0: In this case, as the sentence grows, the denominator grows as well. This means the score can get closer to 0 (i.e. higher) as the sentence grows, and longer sentences are promoted. In this case, the best possible score can be determined from the maximum sequence length (TF implementation).\r\n- Scenario 2, length_penalty < 0.0: In this case, as the sentence grows, the denominator gets smaller. This means the score will get farther away to 0 (i.e. lower) as the sentence grows, and shorter sentences are promoted. In this case, the best possible score can be determined from the current sequence length (PT implementation).\r\n\r\nOn top of this incomplete best score computation on both ends, your PR made me realize that the stopping condition for TF also had a problem (after factoring in the correct length penalty computation, a few tests failed).\r\n\r\nI'm opening a PR to compare against this one with what I think is the correct solution to this bug π \r\n\r\n",
"Close in favor of #20901"
] | 1,671
| 1,675
| 1,672
|
COLLABORATOR
| null |
# What does this PR do?
Fix TF generation (especially for the `TFMarian` generation issue in #18149)
Fix #18149
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20853/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20853",
"html_url": "https://github.com/huggingface/transformers/pull/20853",
"diff_url": "https://github.com/huggingface/transformers/pull/20853.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20853.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20852
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20852/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20852/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20852/events
|
https://github.com/huggingface/transformers/issues/20852
| 1,504,865,697
|
I_kwDOCUB6oc5Zsm2h
| 20,852
|
Using TensorFlow XLA with MBart50 will result in a `OperatorNotAllowedInGraphError` error
|
{
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Hi @xhluca π \r\n\r\nI was able to successfully run your example on my end. Can you try to install an updated version of `transformers` to see if it solves the problem? (`pip install -U transformers`)",
"Thanks it works now!"
] | 1,671
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
### System Info
I used `pip install transformers>=4.21.0` to upgrade to the latest version.
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import MBart50Tokenizer, TFMBartForConditionalGeneration
model_name = "facebook/mbart-large-50-many-to-many-mmt"
model = TFMBartForConditionalGeneration.from_pretrained(model_name, from_pt=True)
tokenizer = MBart50Tokenizer.from_pretrained(model_name)
# XLA
import tensorflow as tf
xla_generate = tf.function(model.generate, jit_compile=True)
# Translation
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।जीवन एक चॉकलेट बॉक्स की तरह है।जीवन एक चॉकलेट बॉक्स की तरह है।जीवन एक चॉकलेट बॉक्स की तरह है।जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。"
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer([hi_text]*32, padding=True, return_tensors="tf")
tokenizer.src_lang = "zh_CN"
encoded_zh = tokenizer([chinese_text]*32, padding=True, return_tensors="tf")
# translate Hindi to French
generated_tokens = xla_generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
x = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# translate Chinese to English
generated_tokens = xla_generate(**encoded_zh, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
y = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
It will result in the following error message:
```
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError Traceback (most recent call last)
<timed exec> in <module>
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
883
884 with OptionalXlaContext(self._jit_compile):
--> 885 result = self._call(*args, **kwds)
886
887 new_tracing_count = self.experimental_get_tracing_count()
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
931 # This is the first call of __call__, so we have to initialize.
932 initializers = []
--> 933 self._initialize(args, kwds, add_initializers_to=initializers)
934 finally:
935 # At this point we know that the initialization is complete (or less
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
758 self._concrete_stateful_fn = (
759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 760 *args, **kwds))
761
762 def invalid_creator_scope(*unused_args, **unused_kwds):
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
3064 args, kwargs = None, None
3065 with self._lock:
-> 3066 graph_function, _ = self._maybe_define_function(args, kwargs)
3067 return graph_function
3068
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3461
3462 self._function_cache.missed.add(call_context_key)
-> 3463 graph_function = self._create_graph_function(args, kwargs)
3464 self._function_cache.primary[cache_key] = graph_function
3465
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3306 arg_names=arg_names,
3307 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3308 capture_by_value=self._capture_by_value),
3309 self._function_attributes,
3310 function_spec=self.function_spec,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1005 _, original_func = tf_decorator.unwrap(python_func)
1006
-> 1007 func_outputs = python_func(*func_args, **func_kwargs)
1008
1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
666 # the function a weak reference to itself to avoid a reference cycle.
667 with OptionalXlaContext(compile_with_xla):
--> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
669 return out
670
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
992 except Exception as e: # pylint:disable=broad-except
993 if hasattr(e, "ag_error_metadata"):
--> 994 raise e.ag_error_metadata.to_exception(e)
995 else:
996 raise
OperatorNotAllowedInGraphError: in user code:
/opt/conda/lib/python3.7/site-packages/transformers/generation_tf_utils.py:590 generate *
seed=model_kwargs.pop("seed", None),
/opt/conda/lib/python3.7/site-packages/transformers/generation_tf_utils.py:1641 _generate *
input_ids,
/opt/conda/lib/python3.7/site-packages/transformers/generation_tf_utils.py:2709 beam_search_body_fn *
model_outputs = self(
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:703 run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
/opt/conda/lib/python3.7/site-packages/transformers/models/mbart/modeling_tf_mbart.py:1328 call *
outputs = self.model(
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:703 run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
/opt/conda/lib/python3.7/site-packages/transformers/models/mbart/modeling_tf_mbart.py:1129 call *
decoder_outputs = self.decoder(
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:703 run_call_with_unpacked_inputs *
return func(self, **unpacked_inputs)
/opt/conda/lib/python3.7/site-packages/transformers/models/mbart/modeling_tf_mbart.py:948 call *
positions = self.embed_positions(input_shape, past_key_values_length)
/opt/conda/lib/python3.7/site-packages/transformers/models/mbart/modeling_tf_mbart.py:129 call *
bsz, seq_len = input_shape[:2]
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:520 __iter__
self._disallow_iteration()
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:513 _disallow_iteration
self._disallow_when_autograph_enabled("iterating over `tf.Tensor`")
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:491 _disallow_when_autograph_enabled
" indicate you are trying to use an unsupported feature.".format(task))
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```
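The failing frame is `bsz, seq_len = input_shape[:2]`, which iterates over a symbolic tensor. As a hedged illustration of the graph-safe pattern (my sketch, not necessarily the actual fix that shipped):

```python
import tensorflow as tf

@tf.function
def get_dims(input_ids):
    input_shape = tf.shape(input_ids)  # symbolic 1-D tensor inside tf.function
    # Tuple-unpacking a slice ("bsz, seq_len = input_shape[:2]") calls __iter__
    # on a symbolic tensor and raises OperatorNotAllowedInGraphError; indexing
    # returns scalar tensors and stays graph-compatible.
    bsz, seq_len = input_shape[0], input_shape[1]
    return bsz, seq_len
```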
### Expected behavior
This blog post shows exactly the same way to use it: https://huggingface.co/blog/tf-xla-generate
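As a hedged side note (an assumption, not part of this issue's resolution, which was simply upgrading `transformers`): the blog above also recommends padding to fixed shapes when tracing `generate` with XLA, since every new input length otherwise triggers a recompilation, e.g.:

```python
# Fixed-shape padding keeps the traced function's input signature stable,
# so XLA compiles once instead of once per sequence length.
encoded_hi = tokenizer([hi_text] * 32, padding="max_length", max_length=256, return_tensors="tf")
```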
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20852/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20851
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20851/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20851/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20851/events
|
https://github.com/huggingface/transformers/pull/20851
| 1,504,865,229
|
PR_kwDOCUB6oc5F4gmW
| 20,851
|
Supporting `ImageProcessor` in place of `FeatureExtractor` for pipelines
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I don't seem to have direct write access (git is asking for credentials). I opened a PR here: https://github.com/praeclarumjj3/transformers/pull/1.",
"Hi @Narsil, please let me know if you need my assistance with the pipeline. I would like to emphasize that we should have a `task_inputs` argument if possible and provide an option to change the task on the model page under the `Hosted Inference API` to demonstrate the task-dynamic nature of a **single** OneFormer model. Thanks!\r\n\r\n<img width=\"1152\" alt=\"Screenshot 2022-12-24 at 4 09 46 PM\" src=\"https://user-images.githubusercontent.com/54928629/209432305-793e150b-2dbd-4566-958c-a53ba69c3d75.png\">\r\n\r\n",
"> Hi @Narsil, please let me know if you need my assistance with the pipeline. I would like to emphasize that we should have a `task_inputs` argument if possible and provide an option to change the task on the model page under the `Hosted Inference API` to demonstrate the task-dynamic nature of a **single** OneFormer model. \r\n\r\nIn my modifications (I'll create a PR when this one is merged, the branch already exists) it will be possible to use `subtask` which already exists as a parameter today and will work with oneformer out of the box.\r\n\r\nOn the UI front, I'm not really convinced we should add the complexity of this. This requires either changing segmentation for ALL segmentation models (which most models will only support one form, so it's a source of confusion) or add a new task (and that's a big modification, which imo is not worth it)\r\n`panoptic` is just more general than `instance` and `semantic` so it is a sound default IMO. Since the widget is here to be simple, it really seems like a good way to showcase the model's performance. For advanced use cases, using the API with subtask will work, and specific spaces and colab can showcase them further. Just like `text-generation` 's widget doesn't display all the various generation params (which are indeed useful in a lot of cases when tinkering with such a model) I don't think we should display a choice of subtask for this one specific model on the UI.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"cc @Narsil now that OneFormer has been merged, we can update the image segmentation pipeline :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging since I removed the problematic part.",
"HI @amyeroberts @Narsil @sgugger , I notice some dev tests in Optimum that start to fail due to this PR. Notably, this PR break code snippets (that are working on 4.26) as:\r\n\r\n```python\r\nfrom transformers import pipeline, AutoModelForImageClassification, AutoFeatureExtractor\r\nmodel_id = \"microsoft/resnet-18\"\r\n\r\nmodel = AutoModelForImageClassification.from_pretrained(model_id)\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained(model_id)\r\n\r\npipe = pipeline(task=\"image-classification\", model=model, feature_extractor=feature_extractor)\r\n```\r\n\r\nwith error `Exception: Impossible to guess which image processor to use. Please provide a PreTrainedImageProcessor class or a path/identifier to a pretrained image processor.`\r\n\r\nIt is only natural to want to pass `feature_extractor` and not `image_processor` from a user perspective, given that a lot of code snippets in README use them: https://huggingface.co/models?pipeline_tag=image-classification&sort=downloads\r\n\r\nIs this breaking change intended? Given https://github.com/huggingface/transformers/pull/21401 @ydshieh I guess it is?",
"`ImageProcessor` are replacing `FeatureExtractor` for images (so `FeatureExtractor` will stay but just for the audio.\r\n\r\nNow the breaking change you've seen is not intended. We should automatically set the image_processor when you send a `FeatureExtractor`. \r\n\r\nGoing forward it's getting extinct, but we should be able to maintain long term backward compatibility.",
"Quick question, any reason you're using this code instead of just `pipeline(model=\"microsoft/resnet-18\")` ? ",
"This line is supposed to fix the backward compability: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L795\r\n\r\nI created a PR to fix this snippet though.",
"Thank you @fxmarty @Narsil for rescuing the backward compatibility, and sorry, the CI didn't detect this edge case while I worked on #21401"
] | 1,671
| 1,675
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
~~As a bonus point, it enables `OneFormer` for `image-segmentation`.~~ Moved to a separate PR.
Requires https://github.com/huggingface/transformers/pull/21278
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20851/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20851",
"html_url": "https://github.com/huggingface/transformers/pull/20851",
"diff_url": "https://github.com/huggingface/transformers/pull/20851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20851.patch",
"merged_at": 1674638192000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20850
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20850/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20850/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20850/events
|
https://github.com/huggingface/transformers/pull/20850
| 1,504,737,602
|
PR_kwDOCUB6oc5F4Gn4
| 20,850
|
Adding `evaluate` to the list of libraries required in generated notebooks
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger No problem! I don't have permissions to merge though :)"
] | 1,671
| 1,674
| 1,671
|
CONTRIBUTOR
| null |
This PR is based on the discussion in [doc-builder/Add custom first cell #50](https://github.com/huggingface/doc-builder/pull/50#issuecomment-1359312952).
It modifies the config file that defines the contents of the first cell for the Colab notebooks generated from the doc pages. This change adds `evaluate` to the list of libraries that are installed in the first cell of every generated notebook.
Currently, only the `transformers` and `datasets` libraries are installed by default. However, many notebooks also require `evaluate`. See examples:
https://huggingface.co/docs/transformers/tasks/sequence_classification
https://huggingface.co/docs/transformers/tasks/semantic_segmentation
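For illustration, the first cell of a generated notebook would then install all three libraries (the exact cell text is an assumption based on the current default, not a quote from the config):

```python
# Transformers installation
! pip install transformers datasets evaluate
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
```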
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20850/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20850",
"html_url": "https://github.com/huggingface/transformers/pull/20850",
"diff_url": "https://github.com/huggingface/transformers/pull/20850.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20850.patch",
"merged_at": 1671627848000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20849
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20849/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20849/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20849/events
|
https://github.com/huggingface/transformers/pull/20849
| 1,504,557,625
|
PR_kwDOCUB6oc5F3gZm
| 20,849
|
[Time series] Temporal Fusion Transformer model
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding the Temporal Fusion Transformer time series model: https://arxiv.org/pdf/1912.09363.pdf
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20849/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20849/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20849",
"html_url": "https://github.com/huggingface/transformers/pull/20849",
"diff_url": "https://github.com/huggingface/transformers/pull/20849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20849.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20848
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20848/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20848/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20848/events
|
https://github.com/huggingface/transformers/pull/20848
| 1,504,537,918
|
PR_kwDOCUB6oc5F3cD1
| 20,848
|
TF AdamWeightDecay fix for 2.11
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
MEMBER
| null |
The TF changelog said that the optimizer had been moved to `tf.keras.optimizer.legacy`, but the true path is `tf.keras.optimizers.legacy`. Because of the conditional in the PR, we didn't notice the error, but it's resolved now!
Fixes #20847
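For context, a minimal sketch of the kind of version guard involved (illustrative only; the real code lives in `optimization_tf.py` and may differ):

```python
import tensorflow as tf
from packaging import version

# From TF 2.11 the old-style optimizer lives under the (correctly spelled)
# `tf.keras.optimizers.legacy` namespace; earlier releases expose it directly
# under `tf.keras.optimizers`.
if version.parse(tf.__version__) >= version.parse("2.11"):
    Adam = tf.keras.optimizers.legacy.Adam
else:
    Adam = tf.keras.optimizers.Adam
```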
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20848/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20848",
"html_url": "https://github.com/huggingface/transformers/pull/20848",
"diff_url": "https://github.com/huggingface/transformers/pull/20848.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20848.patch",
"merged_at": 1671543645000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20847
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20847/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20847/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20847/events
|
https://github.com/huggingface/transformers/issues/20847
| 1,504,489,636
|
I_kwDOCUB6oc5ZrLCk
| 20,847
|
Unimplemented error when using AdamWeightDecay in TF
|
{
"login": "ZJaume",
"id": 11339330,
"node_id": "MDQ6VXNlcjExMzM5MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/11339330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZJaume",
"html_url": "https://github.com/ZJaume",
"followers_url": "https://api.github.com/users/ZJaume/followers",
"following_url": "https://api.github.com/users/ZJaume/following{/other_user}",
"gists_url": "https://api.github.com/users/ZJaume/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZJaume/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZJaume/subscriptions",
"organizations_url": "https://api.github.com/users/ZJaume/orgs",
"repos_url": "https://api.github.com/users/ZJaume/repos",
"events_url": "https://api.github.com/users/ZJaume/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZJaume/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @ZJaume, we saw this issue earlier but thought we had fixed it with #20735. I'll investigate now and see if I can reproduce it",
"Reproduced. The cause was a typo that's also present in the TF Changelog for 2.11, will push a PR now!",
"PR is up at #20848",
"@ZJaume Should be fixed now, thanks for the bug report! Let me know if installing the latest version from main doesn't fix your problem.",
"Working. Thank you!"
] | 1,671
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0.dev0
- Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.10.1+cu102 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@Rocketknight1
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Coming from here: #20750. Using the example code but with AdamWeightDecay triggers the error.
The code:
```python
from transformers import TFAutoModelForSequenceClassification
from transformers.optimization_tf import create_optimizer
from transformers import AutoTokenizer
from tensorflow.keras.optimizers import Adam
from datasets import load_dataset
import tensorflow as tf
import numpy as np
dataset = load_dataset("glue", "cola")
dataset = dataset["train"] # Just take the training split for now
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = dict(tokenizer(dataset["sentence"], return_tensors="np", padding=True))
labels = np.array(dataset["label"]) # Label is already an array of 0 and 1
# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
optimizer, _ = create_optimizer(3e-5, 600, 100, weight_decay_rate=0.3)
model.compile(optimizer=optimizer, loss='binary_crossentropy')
model.fit(tokenized_data, labels)
```
```python
Traceback (most recent call last):
File "../test_mirrored.py", line 24, in <module>
model.fit(tokenized_data, labels)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.UnimplementedError: Graph execution error:
Detected at node 'Cast_1' defined at (most recent call last):
File "../test_mirrored.py", line 24, in <module>
model.fit(tokenized_data, labels)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1650, in fit
tmp_logs = self.train_function(iterator)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1249, in train_function
return step_function(self, iterator)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1233, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1222, in run_step
outputs = model.train_step(data)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1559, in train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize
self.apply_gradients(grads_and_vars)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/optimization_tf.py", line 252, in apply_gradients
return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients
return super().apply_gradients(grads_and_vars, name=name)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 632, in apply_gradients
self._apply_weight_decay(trainable_variables)
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1159, in _apply_weight_decay
tf.__internal__.distribute.interim.maybe_merge_call(
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1155, in distributed_apply_weight_decay
distribution.extended.update(
File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1151, in weight_decay_fn
wd = tf.cast(self.weight_decay, variable.dtype)
Node: 'Cast_1'
2 root error(s) found.
(0) UNIMPLEMENTED: Cast string to float is not supported
[[{{node Cast_1}}]]
(1) CANCELLED: Function was cancelled before it was started
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_37329]
```
Setting weight decay to 0.0 does not trigger the error, so I imagine it's something with [AdamWeightDecay](https://github.com/huggingface/transformers/blob/d1d3ac94033b6ea1702b203dcd74beab68d42d83/src/transformers/optimization_tf.py#L147). The TensorFlow [changelog](https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0) says:
> The tf.keras.optimizers.Optimizer base class now points to the new Keras optimizer, while the old optimizers have been moved to the tf.keras.optimizers.legacy namespace.
and
> Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizer.legacy.XXX (e.g. tf.keras.optimizer.legacy.Adam).
> Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
Could it be related to this?
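For anyone hitting this before a fix lands, here is a minimal workaround sketch (my assumption, not an official fix): fall back to the pre-2.11 optimizer implementation via the legacy namespace, assuming plain Adam without decoupled weight decay is acceptable for the run:
```python
import tensorflow as tf

# Assumption: the legacy namespace keeps the pre-2.11 Keras optimizer
# implementation, which does not go through the new `_apply_weight_decay`
# code path that triggers the string-to-float cast above.
optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=3e-5)
model.compile(optimizer=optimizer, loss="binary_crossentropy")
model.fit(tokenized_data, labels)
```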
### Expected behavior
Train successfully.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20847/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20846
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20846/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20846/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20846/events
|
https://github.com/huggingface/transformers/pull/20846
| 1,504,402,711
|
PR_kwDOCUB6oc5F2-mt
| 20,846
|
Deprecate `clean_up_tokenization_spaces` for BLOOM
|
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20846). All of your documentation changes will be reflected on that endpoint.",
"Thanks for your PR! After thinking a little more about it and in terms of user experience, I'm happy to have the warning if you think the use-case is frequent and the default behavior is misleading. \r\n\r\nHowever, I'm not too sure about deprecating/updating the value in v5. I think the current behavior isn't necessarily a bug, as the argument to toggle is clearly displayed in the docs (and I have no problem with making it more prominent, such as with the warning). Switching to `False` means that we'll start diverging between BLOOM and other tokenizers (like GPT-2) which work very similarly as of now.\r\n\r\nI'd be in favor of adding the warning mentioning to toggle it in this PR, and to wait until @sgugger is back so that we have a second opinion on the matter before mentioning that we will move it to `False` by default. Would that be ok for you @thomasw21?",
"@LysandreJik Sure! This isn't blocking anything really, the real issue is here: https://github.com/huggingface/text-generation-inference/issues/12 \r\n\r\nIMO as the tokenizer was build to be lossless, it's weird that by default it isn't. Would it make more sense to move `clean_up_tokenization_spaces` to be in `tokenizer` instead? Something like a special decoder? https://huggingface.co/docs/tokenizers/components#decoders . I understand that this is breaking, but we should be able to slightly migrate to newer setups using deprecation cycles?",
"Interesting proposal, WDYT @Narsil?",
"I think it's ok to move slowly, but touching `cleanup_tokenization_spaces` and its default are BIG changes.\r\n\r\nPersonally, I think borderline too big to migrate in V5 (it's just a really big change, that's unfortunately probably not worth the effort). \r\n\r\nThat being said, making it modifiable on a tokenizer per tokenizer basis (so updating Bloom alone) is still Ok, and is definitely a good way forward.\r\n\r\nPersonally I would focus on this user's need first, which would be solved by implementing `return_full_text=False`, it seems the lowest hanging fruit to solve the user's need. We can move forward on the \"decoder\" (or any other type of config change) later. \r\n\r\n",
"Okay so in terms of actions:\r\n - [x] Assume it's not a `transformers` bug but a `text-generation-inference` bug right now. https://github.com/huggingface/text-generation-inference/issues/12\r\n - [ ] Start thinking of a way to support `clean_up_tokenization_spaces` and `skip_special_tokens` in the tokenizer directly? Typically you want in order of priority: `user defined argument`, `tokenizer specific config`, `methods default`\r\n \r\n Would that make sense?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,675
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently in `transformers`:
```python
>>> tok.decode(tok.encode("Hello , there"))
'Hello, there' # notice the missing space between "Hello" and ","
>>> tok.decode(tok.encode("Hello , there"), clean_up_tokenization_spaces=False)
'Hello , there'
```
In order to prevent issues such as this one: https://huggingface.co/bigscience/bloom/discussions/153#6397907b71eb2455d898e0a4 we suggest adding a warning that points users to `clean_up_tokenization_spaces=False` instead.
As the BLOOM tokenizer was developed to be a lossless encoding mechanism, it would IMO make sense to always disable that cleanup, so I'm suggesting we deprecate the option for the BLOOM tokenizer. The other option would be to change the default to `False`.
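For reference, a minimal round-trip check, as a sketch assuming a public BLOOM checkpoint such as `bigscience/bloom-560m` (the checkpoint name is an assumption; any BLOOM tokenizer should behave the same):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")  # assumed checkpoint

text = "Hello , there"
ids = tok.encode(text)

# The round trip is only lossless when the cleanup is explicitly disabled:
assert tok.decode(ids, clean_up_tokenization_spaces=False) == text
# With the current default, the space before "," is silently removed:
assert tok.decode(ids) == "Hello, there"
```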
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20846/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20846",
"html_url": "https://github.com/huggingface/transformers/pull/20846",
"diff_url": "https://github.com/huggingface/transformers/pull/20846.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20846.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20845
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20845/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20845/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20845/events
|
https://github.com/huggingface/transformers/pull/20845
| 1,504,342,920
|
PR_kwDOCUB6oc5F2xuq
| 20,845
|
[Examples] Update big table
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Slightly related, wondering if we shouldn't link from this Big Table to the corresponding task in hf.co/tasks? (cc @merveenoyan)"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR updates the "big table of tasks" by
- adding a hyperlink to each of the example datasets
- add "image pretraining", and the Colab link for semantic segmentation
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20845/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20845",
"html_url": "https://github.com/huggingface/transformers/pull/20845",
"diff_url": "https://github.com/huggingface/transformers/pull/20845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20845.patch",
"merged_at": 1671618871000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20844
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20844/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20844/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20844/events
|
https://github.com/huggingface/transformers/pull/20844
| 1,504,341,376
|
PR_kwDOCUB6oc5F2xZ2
| 20,844
|
remove unused `use_cache` in config classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
`Lilt`, `Longformer` and `Canine` only implement encoder-only task heads (QA, sequence/token classification, etc.), and `use_cache` is not used in their modeling files.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20844/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20844",
"html_url": "https://github.com/huggingface/transformers/pull/20844",
"diff_url": "https://github.com/huggingface/transformers/pull/20844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20844.patch",
"merged_at": 1671551204000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20843
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20843/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20843/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20843/events
|
https://github.com/huggingface/transformers/pull/20843
| 1,504,070,886
|
PR_kwDOCUB6oc5F14e-
| 20,843
|
Fix doctest
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Fixes a bunch of failing doctests
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20843/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20843/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20843",
"html_url": "https://github.com/huggingface/transformers/pull/20843",
"diff_url": "https://github.com/huggingface/transformers/pull/20843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20843.patch",
"merged_at": 1671636871000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20842
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20842/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20842/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20842/events
|
https://github.com/huggingface/transformers/issues/20842
| 1,503,867,593
|
I_kwDOCUB6oc5ZozLJ
| 20,842
|
Changes to BART shift_token_right and using the proper shifting index EOS or BOS.
|
{
"login": "ankitvad",
"id": 3066071,
"node_id": "MDQ6VXNlcjMwNjYwNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3066071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankitvad",
"html_url": "https://github.com/ankitvad",
"followers_url": "https://api.github.com/users/ankitvad/followers",
"following_url": "https://api.github.com/users/ankitvad/following{/other_user}",
"gists_url": "https://api.github.com/users/ankitvad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankitvad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankitvad/subscriptions",
"organizations_url": "https://api.github.com/users/ankitvad/orgs",
"repos_url": "https://api.github.com/users/ankitvad/repos",
"events_url": "https://api.github.com/users/ankitvad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankitvad/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think I have a decent understanding of what is happening and just compiling my findings (Incase anyone else is confused?) and closing this issue.\r\n\r\n- The decoder_start_token_id is not the BOS token ID for BART. It means the token to start the decoding for BART. If the config.json is checked, it is actually forced to be = index(2) which is the EOS token. `</s>` I think.\r\n- The previous version of the code checks for the PAD token in an input and then considers the index before it as the one that contains the EOS token. This ways, it builds this with the overhead of a search and then shifts the EOS token to the beginning of the shifted decoder_input_ids\r\n- The new version of the code asks for this EOS id which is 2 in the BARTConfig and uses that.\r\nDiscussion and comments by @sshleifer and other people who worked and submitted the BART model can be found here:\r\nhttps://discuss.huggingface.co/t/what-i-know-and-dont-know-about-sequence-to-sequence-batching/1046\r\nhttps://github.com/huggingface/transformers/issues/7961\r\nhttps://github.com/huggingface/transformers/issues/5212\r\nhttps://huggingface.co/facebook/bart-base/blob/main/config.json#L19 (Here it shows the prefixed index 2 for decoder_start_token_id) "
] | 1,671
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `adapter-transformers` version: 3.0.1
- Platform: Linux-4.15.0-72-generic-ppc64le-with-debian-buster-sid
- Python version: 3.6.11
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Although this is fairly version-independent and general purpose.
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have been going through the GitHub issues and the history of [modeling_bart.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py), and I found that from PRs #9134 and #9135 to #9343 the shift_tokens_right function was modified to do a bunch of things. For most of those things the PR docs and comments are super helpful! (For instance, the issue with -100 in a label and changing it to the PAD token, and other issues that explain that even if two consecutive `PAD PAD` tokens are changed to `PAD` it is fine, since the loss does not account for them.)
However, one thing that was changed and never described in the issues/docs/comments is that, after shifting the input right, the EOS token used to be prepended to the shifted input, but after #9343 the BOS token is prepended instead!
I realize that with BartForConditionalGeneration it is quite rare to pass labels and generate the decoder_input_ids from them, and that this is mainly for MLM. This is where the problem arises: example/reproducible projects and models are divided on which shift_tokens_right function to use when preparing the input for BART fine-tuning and inference. Some projects do a simple:
`from transformers.models.bart.modeling_bart import shift_tokens_right`, and usually that is the newest and current version of shifting the input:
```
def shift_tokens_right_NEW(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_token_id
if pad_token_id is None:
raise ValueError("self.model.config.pad_token_id has to be defined.")
# replace possible -100 values in labels by `pad_token_id`
shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
return shifted_input_ids
```
But a lot of projects using BART tend to define and utilize the original shifting code-function:
```
def shift_tokens_right_OLD(input_ids, pad_token_id):
"""Shift input ids one token to the right, and wrap the last non pad token (usually <eos>)."""
prev_output_tokens = input_ids.clone()
index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
prev_output_tokens[:, 1:] = input_ids[:, :-1]
return prev_output_tokens
```
The difference between them is that the old version shifts right and prepends the EOS token, while the new version prepends the BOS token.
This has been causing some issues while preparing inputs for my project! I just wanted to know: is there a reason why the new code no longer uses EOS? And is one of them correct and the other incorrect for fine-tuning, or does it make no difference?
I am asking from the point of view of a seq2seq task, where the target decoder_input_ids are prepared for training and fine-tuning directly, without going through labels.
A sample difference between the old and new functions can be seen in this example:
```
bos = 0
pad = 100
eos = 1
>>> input_id
tensor([
[ 0, 13, 8, 11, 9, 2, 2, 17, 9, 1],
[ 0, 14, 10, 7, 6, 10, 3, 1, 100, 100],
[ 0, 4, 16, 14, 2, 14, 3, 1, 100, 100],
[ 0, 16, 12, 7, 5, 14, 6, 10, 1, 100],
[ 0, 3, 12, 7, 9, 1, 100, 100, 100, 100]])
#Here it can be assumed that this is the original random target from the tokenizer after padding and EOS/BOS.
#0 = BOS, 1 = EOS, 100 = PAD. Now the prepared target after the old and new shift function will be:
>>> shift_tokens_right_OLD(input_id,pad)
tensor([
[ 1, 0, 13, 8, 11, 9, 2, 2, 17, 9],
[ 1, 0, 14, 10, 7, 6, 10, 3, 1, 100],
[ 1, 0, 4, 16, 14, 2, 14, 3, 1, 100],
[ 1, 0, 16, 12, 7, 5, 14, 6, 10, 1],
[ 1, 0, 3, 12, 7, 9, 1, 100, 100, 100]])
>>> shift_tokens_right_NEW(input_id,pad,bos)
tensor([
[ 0, 0, 13, 8, 11, 9, 2, 2, 17, 9],
[ 0, 0, 14, 10, 7, 6, 10, 3, 1, 100],
[ 0, 0, 4, 16, 14, 2, 14, 3, 1, 100],
[ 0, 0, 16, 12, 7, 5, 14, 6, 10, 1],
[ 0, 0, 3, 12, 7, 9, 1, 100, 100, 100]])
```
So the main difference in preparation is the EOS/BOS change in `decoder_input_ids[:,0]` position.
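As a sanity check, here is a small sketch (reusing the toy tensors and the two functions defined above) showing that the new function reproduces the old behavior when the EOS id is passed as `decoder_start_token_id` and every row's last non-pad token is EOS:
```python
import torch

shifted_old = shift_tokens_right_OLD(input_id, pad)
shifted_new = shift_tokens_right_NEW(input_id, pad, eos)  # start token = EOS, not BOS

# Identical: the old code wraps the last non-pad token (EOS) to position 0,
# which is exactly what the new code does when decoder_start_token_id == eos.
assert torch.equal(shifted_old, shifted_new)
```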
### Expected behavior
I was hoping to get some guidance on two questions:
1) From the point of view of the BART model, is there a difference between fine-tuning/inference when using the EOS vs. the BOS index after shifting?
2) As an extension, is one better than the other and should it be preferred?
My main reason for asking is to bring existing projects/models and new fine-tuning setups onto the same shift and target-preparation approach!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20842/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20841
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20841/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20841/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20841/events
|
https://github.com/huggingface/transformers/pull/20841
| 1,503,820,406
|
PR_kwDOCUB6oc5F1E1a
| 20,841
|
Fix tiny typo
|
{
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20841/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20841",
"html_url": "https://github.com/huggingface/transformers/pull/20841",
"diff_url": "https://github.com/huggingface/transformers/pull/20841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20841.patch",
"merged_at": 1671524280000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20840
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20840/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20840/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20840/events
|
https://github.com/huggingface/transformers/pull/20840
| 1,503,613,499
|
PR_kwDOCUB6oc5F0XrO
| 20,840
|
Clarify `use_fast` parameter in docstring
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> what happens if the architecture supports it but the model doesn't?\r\n\r\nHmm that's a good question! Do you happen to know if any of the other architectures have this issue or if it is just a bug with OPT? I'll remove the suggestion to check the supported framework list so we don't end up confusing anyone.",
"It's the first time I run into this inconsistency across models of the same arch, but I have never needed to ensure it was `fast` before so who knows it may have happened a lot and I wasn't the wiser.\r\n\r\nSo +1 to remove the suggestion unless we somehow can stand behind it.",
"This was the bug for OPT, so it shouldn't happen for other models."
] | 1,671
| 1,671
| 1,671
|
MEMBER
| null |
This PR addresses the ambiguity of the `use_fast` parameter raised in #20817.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20840/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20840",
"html_url": "https://github.com/huggingface/transformers/pull/20840",
"diff_url": "https://github.com/huggingface/transformers/pull/20840.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20840.patch",
"merged_at": 1671554546000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20839
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20839/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20839/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20839/events
|
https://github.com/huggingface/transformers/pull/20839
| 1,503,514,786
|
PR_kwDOCUB6oc5F0Bxn
| 20,839
|
fix typo output not ouput in bitsandbytes trainer test
|
{
"login": "Thomas-MMJ",
"id": 112830596,
"node_id": "U_kgDOBrmohA",
"avatar_url": "https://avatars.githubusercontent.com/u/112830596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Thomas-MMJ",
"html_url": "https://github.com/Thomas-MMJ",
"followers_url": "https://api.github.com/users/Thomas-MMJ/followers",
"following_url": "https://api.github.com/users/Thomas-MMJ/following{/other_user}",
"gists_url": "https://api.github.com/users/Thomas-MMJ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Thomas-MMJ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Thomas-MMJ/subscriptions",
"organizations_url": "https://api.github.com/users/Thomas-MMJ/orgs",
"repos_url": "https://api.github.com/users/Thomas-MMJ/repos",
"events_url": "https://api.github.com/users/Thomas-MMJ/events{/privacy}",
"received_events_url": "https://api.github.com/users/Thomas-MMJ/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes a typo (`ouput` instead of `output`) in the bitsandbytes trainer test that was causing an error during pytest collection.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20839/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20839",
"html_url": "https://github.com/huggingface/transformers/pull/20839",
"diff_url": "https://github.com/huggingface/transformers/pull/20839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20839.patch",
"merged_at": 1671524187000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20838
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20838/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20838/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20838/events
|
https://github.com/huggingface/transformers/issues/20838
| 1,503,464,406
|
I_kwDOCUB6oc5ZnQvW
| 20,838
|
TypeError: TextInputSequence must be str
|
{
"login": "suppathak",
"id": 30439457,
"node_id": "MDQ6VXNlcjMwNDM5NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/30439457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suppathak",
"html_url": "https://github.com/suppathak",
"followers_url": "https://api.github.com/users/suppathak/followers",
"following_url": "https://api.github.com/users/suppathak/following{/other_user}",
"gists_url": "https://api.github.com/users/suppathak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suppathak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suppathak/subscriptions",
"organizations_url": "https://api.github.com/users/suppathak/orgs",
"repos_url": "https://api.github.com/users/suppathak/repos",
"events_url": "https://api.github.com/users/suppathak/events{/privacy}",
"received_events_url": "https://api.github.com/users/suppathak/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to help debug your code or provide us with a short reproducer we can run. The notebook relies on credentials we do not have.",
"Closing the Issue. Resolved the error by replacing the dataset. There was some manually added extra values in the old dataset."
] | 1,671
| 1,672
| 1,672
|
NONE
| null |
### System Info
- `transformers` version: 4.23.1
- Platform: Linux-4.18.0-305.62.1.el8_4.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.8
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to Reproduce:
1. Try to re-run this notebook: https://github.com/suppathak/aicoe-osc-demo/blob/teach-stu/notebooks/demo2/teacher_student_exp.ipynb (Reference: https://github.com/neuralmagic/sparseml/blob/main/integrations/huggingface-transformers/tutorials/sparsifying_bert_using_recipes.md)
### Expected behavior
It should run without any error. The notebook was running perfectly until last Friday (3 days ago). When I try to re-run it, it fails and gives me this error. Has there been any kind of update in the system?
Any helpful feedback is appreciated. Thanks!

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20838/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20837
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20837/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20837/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20837/events
|
https://github.com/huggingface/transformers/pull/20837
| 1,503,444,281
|
PR_kwDOCUB6oc5FzyR-
| 20,837
|
Avoid collisions in writing metrics via 2 APIs - azureml + mlflow
|
{
"login": "akshaya-a",
"id": 16749003,
"node_id": "MDQ6VXNlcjE2NzQ5MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/16749003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshaya-a",
"html_url": "https://github.com/akshaya-a",
"followers_url": "https://api.github.com/users/akshaya-a/followers",
"following_url": "https://api.github.com/users/akshaya-a/following{/other_user}",
"gists_url": "https://api.github.com/users/akshaya-a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshaya-a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshaya-a/subscriptions",
"organizations_url": "https://api.github.com/users/akshaya-a/orgs",
"repos_url": "https://api.github.com/users/akshaya-a/repos",
"events_url": "https://api.github.com/users/akshaya-a/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshaya-a/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger thanks, I suppose I need to get a circleci account first? I can take a look tomorrow or feel free to merge this small change with an account someone already has",
"@sgugger I have signed up and connected circleci, and the failed pipeline link doesn't seem to allow me to rerun, should I just bump it with a useless commit to retrigger the checks? ",
"You can try an empty commit indeed:\r\n```\r\ngit commit -m \"Trigger CI\" --allow-empty\r\n```",
"Thanks again for yourcontribution!"
] | 1,671
| 1,672
| 1,672
|
CONTRIBUTOR
| null |
The MLflow tracking API is enabled by default in AzureML, and the HF MLflow integration is more fully featured. I'd remove the AzureML integration, but I'm leaving the current behavior in place for backwards compatibility (though it should really be removed).
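As a hedged aside, a user-side mitigation sketch: the double logging can already be avoided today by restricting the `Trainer` to a single reporting backend (assuming the standard `TrainingArguments` API, where `report_to` selects the integrations):
```python
from transformers import TrainingArguments

# Only report through MLflow so the AzureML callback never writes
# the same metrics a second time:
args = TrainingArguments(output_dir="out", report_to=["mlflow"])
```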
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20837/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20837",
"html_url": "https://github.com/huggingface/transformers/pull/20837",
"diff_url": "https://github.com/huggingface/transformers/pull/20837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20837.patch",
"merged_at": 1672212294000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20836
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20836/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20836/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20836/events
|
https://github.com/huggingface/transformers/pull/20836
| 1,503,344,167
|
PR_kwDOCUB6oc5FzcE3
| 20,836
|
Remove unused `max_position_embeddings` in config classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Similar to #20596 and #20554, but here we remove the unused `max_position_embeddings`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20836/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20836",
"html_url": "https://github.com/huggingface/transformers/pull/20836",
"diff_url": "https://github.com/huggingface/transformers/pull/20836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20836.patch",
"merged_at": 1671527375000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20835
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20835/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20835/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20835/events
|
https://github.com/huggingface/transformers/pull/20835
| 1,503,304,955
|
PR_kwDOCUB6oc5FzT5g
| 20,835
|
[mBART] fix erroneous italics in docstring
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Corrects tensor dims from italics to code-blocks in the mBART docstring, as discussed in https://github.com/huggingface/transformers/pull/20787#discussion_r1050620673.
The changes as applied to mBART are contained in https://github.com/huggingface/transformers/pull/20835/commits/284e13af61871056f64cbfb7883ea36a4bc70a39. The changes as applied to all other models that inherit using `# Copied from MBart...` are in https://github.com/huggingface/transformers/pull/20835/commits/0a5cd7c66f3be8d8f21dbdb8065cc1d87bd6f405
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20835/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20835",
"html_url": "https://github.com/huggingface/transformers/pull/20835",
"diff_url": "https://github.com/huggingface/transformers/pull/20835.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20835.patch",
"merged_at": 1671531816000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20834
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20834/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20834/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20834/events
|
https://github.com/huggingface/transformers/issues/20834
| 1,503,287,853
|
I_kwDOCUB6oc5Zmlot
| 20,834
|
My Colab does not load new notebooks (bug)
|
{
"login": "shahrzad-setayesh",
"id": 114135311,
"node_id": "U_kgDOBs2RDw",
"avatar_url": "https://avatars.githubusercontent.com/u/114135311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shahrzad-setayesh",
"html_url": "https://github.com/shahrzad-setayesh",
"followers_url": "https://api.github.com/users/shahrzad-setayesh/followers",
"following_url": "https://api.github.com/users/shahrzad-setayesh/following{/other_user}",
"gists_url": "https://api.github.com/users/shahrzad-setayesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shahrzad-setayesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shahrzad-setayesh/subscriptions",
"organizations_url": "https://api.github.com/users/shahrzad-setayesh/orgs",
"repos_url": "https://api.github.com/users/shahrzad-setayesh/repos",
"events_url": "https://api.github.com/users/shahrzad-setayesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/shahrzad-setayesh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,671
| 1,671
| 1,671
|
NONE
| null |
### System Info
#hi
I am a university student and need Colab. Please help me: I cannot open a new notebook in Colab.
New notebooks do not load.
thanks
thanks
shahrzad.setayesh88@gmail.com
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
hi
I am a university student and need Colab. Please help me: I cannot open a new notebook in Colab.
### Expected behavior
New notebooks do not load.
thanks
shahrzad.setayesh88@gmail.com
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20834/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20833
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20833/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20833/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20833/events
|
https://github.com/huggingface/transformers/pull/20833
| 1,503,151,591
|
PR_kwDOCUB6oc5Fyywq
| 20,833
|
[DETR and friends] Use AutoBackbone as alternative to timm
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@sgugger feel free to approve :)",
"Can you make all tests pass before asking for a final review?",
"The remaining tests which are failing are due to `make fix-copies`, however I'll only start updating the other models once this design is approved.",
"@sgugger thanks for the review, addressed all comments!"
] | 1,671
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR makes it possible to leverage our own backbone classes, like ResNet or Swin Transformer, instead of relying on timm for the following models:
- DETR
- Conditional DETR
- Deformable DETR
- Table Transformer
This allows people to use these models without having to rely on the timm dependency.
I've added a `use_timm_backbone` attribute to the config, which is set to `True` by default but can be set to `False`.
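For illustration, usage could look roughly like this (a sketch; `use_timm_backbone` is described above, but the `backbone_config` argument and the `ResNetConfig(out_features=...)` wiring are assumptions until the design is finalized):
```python
from transformers import DetrConfig, DetrForObjectDetection, ResNetConfig

# Swap the timm backbone for a native transformers ResNet backbone.
backbone_config = ResNetConfig(out_features=["stage4"])  # assumed argument name
config = DetrConfig(use_timm_backbone=False, backbone_config=backbone_config)
model = DetrForObjectDetection(config)
```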
To do:
- [x] fix copies, once design gets approved
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20833/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20833",
"html_url": "https://github.com/huggingface/transformers/pull/20833",
"diff_url": "https://github.com/huggingface/transformers/pull/20833.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20833.patch",
"merged_at": 1674472547000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20832
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20832/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20832/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20832/events
|
https://github.com/huggingface/transformers/pull/20832
| 1,502,949,058
|
PR_kwDOCUB6oc5FyGtS
| 20,832
|
Fix typing about next_beam_tokens and next_beam_indices
|
{
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20832). All of your documentation changes will be reflected on that endpoint.",
"Please make sure to run `make style` on your branch so that the quality tests pass.\r\ncc @gante for review.",
"Sure, I will do it soon together with another PR in https://github.com/huggingface/transformers/issues/20820",
"Please have each of your PR focused on one thing. We don't want to group changes that are not linked to each other in the same PR :-)",
"Sure, I mean file two PRs but do the *work* in one single time slot of mine ;)\r\n\r\nNo worries, I am experienced to make PRs (e.g. to Google Flutter https://github.com/flutter/flutter/pulls?q=is%3Apr+author%3Afzyzcjy)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"bump, will do it when having time",
"Hi, I wonder what version of `black` is required? I have tried:\r\n\r\n```\r\n(dev-transformers) β transformers git:(patch-1) python3 -m black -- examples tests src utils \r\nSkipping .ipynb files as Jupyter dependencies are not installed.\r\nYou can fix this by running ``pip install black[jupyter]``\r\nreformatted src/transformers/utils/model_parallel_utils.py\r\nreformatted tests/models/xlm_prophetnet/test_modeling_xlm_prophetnet.py\r\nreformatted src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py\r\nreformatted src/transformers/models/markuplm/tokenization_markuplm.py\r\nreformatted src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py\r\nreformatted examples/research_projects/lxmert/modeling_frcnn.py\r\nreformatted examples/research_projects/visual_bert/modeling_frcnn.py\r\nreformatted src/transformers/models/prophetnet/modeling_prophetnet.py\r\nreformatted src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py\r\nreformatted src/transformers/models/reformer/modeling_reformer.py\r\nreformatted src/transformers/tokenization_utils_base.py\r\n\r\nAll done! β¨ π° β¨\r\n11 files reformatted, 2138 files left unchanged.\r\n```\r\n\r\nand so on. It is changing formats for a dozen of files that I did not touch, such as:\r\n\r\n\r\n\r\n\r\n\r\n\r\n(P.S. I did not run `make style` but instead invoke that command directly because my `black` on PATH has a little conflict. I have followed https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md and create a brand new conda environment and install via pip)",
"@fzyzcjy We use black 22.3 (see [here](https://github.com/huggingface/transformers/blob/e1cd78634ae4ba2b0a3d548bd6663c08765a8b4d/setup.py#L101))",
"Hmm it is the correct version\r\n\r\n```\r\npython -m black --version\r\npython -m black, 22.3.0 (compiled: yes)\r\n```",
"@fzyzcjy Suggestion: revert to the first commit (which only touches 2 lines), run `make fixup` (which only touches modified files), then force commit the result :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Looking at the source code, they seem to be `int`, not `float`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20832/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20832",
"html_url": "https://github.com/huggingface/transformers/pull/20832",
"diff_url": "https://github.com/huggingface/transformers/pull/20832.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20832.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20831
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20831/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20831/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20831/events
|
https://github.com/huggingface/transformers/issues/20831
| 1,502,804,892
|
I_kwDOCUB6oc5Zkvuc
| 20,831
|
Fluent API for training arguments
|
{
"login": "apohllo",
"id": 40543,
"node_id": "MDQ6VXNlcjQwNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apohllo",
"html_url": "https://github.com/apohllo",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"repos_url": "https://api.github.com/users/apohllo/repos",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"While I understand the idea of grouping related arguments together, the proposed approach is very functional, which is not something we use anywhere in the Transformers library. So this API would be at odds with the rest of Transformers.\r\n\r\nHappy to explore other ways to group related arguments together however, if you have other ideas.",
"That would definitely help in terms of documentation and not having to scroll each time to find a description of the argument. And besides, it would help to shorten the names of the arguments.\r\n\r\nHowever, I would add some type of `common` arguments, either in the main constructor or in the dedicated method, because some of the arguments can be shared between different stages, and redefining them in each method could be misleading and a little ambiguous for the library itself, especially if they have different values.\r\n\r\nIt could also help in integrating different argument passing packages, like `hydra`, in which we can group the arguments in the YAML file, which seems to be a more maintainable solution in comparison to the common `argparse` built-in package. As far as I know, it is widely used in corpos, such as NVIDIA.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I don't understand the argument against that approach, TBH. This is a builder pattern, which is very common in OOP. E.g. `StringBuilder` in Java uses almost the same idea (you modify the object and as far as I remember the builder is not returned). It's not functional, since you do not provide a function as an argument (i.e. this is what I understand as \"functional\"). \r\n\r\nAnyway, even though that approach is not present in Transformers, maybe it's a good moment to introduce it?\r\nCurrently, the API and the documentation of argument objects is harder and harder to use. The arguments are not sorted and there are 96!!! arguments in the TrainingArguments object. I am using the API and I am teaching NLP and OOP. When I introduce the object to my students, they have very hard time to go through and understand all the options. Many of them are completely unrelated to the training process (from the ML perspective). But the user cannot differentiation between the important and unimportant arguments.\r\n\r\nSo I think the argument classes should be extended with the suggested mechanism or a different mechanism which supports modularisation.",
"Hi @apohllo Sorry for the delay on this. Would something like in the PR linked above work for you?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### Feature request
Provide a fluent API for defining the training arguments for the Trainer class.
Instead of writing:
```
arguments = TrainingArguments(output_dir="output",
do_train=True,
do_eval=True,
evaluation_strategy='epoch',
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
learning_rate=5e-05,
num_train_epochs=1,
logging_first_step=True,
logging_strategy='steps',
logging_steps=50,
save_strategy='epoch',
fp16=True,
)
```
one would be able to write:
```
arguments = (
    TrainingArguments("output")
    .evaluate(strategy='epoch', batch_size=16)
    .logging(first_step=True, strategy='steps', steps=50)
    # ... further grouped calls
)
```
### Motivation
Currently, the arguments submitted to Trainers are defined in a separate class, which is fine.
Yet the constructor of that class has a ton of arguments. Some of these arguments naturally stick together.
Providing a fluent API, with related arguments provided in one call, would have the following benefits:
* One giant call to the constructor would be divided into smaller calls, making the documentation of the method much
easier to read;
* It would be possible to chain argument construction - i.e. it would be easy to define a default set of options
(different than the defaults provided by the library) and then modify them according to some additional
requirements, e.g. changing LR would be just a single call on the predefined arguments object. Currently, it is
obtained by changing the value of one of the many arguments, which is harder to spot, if that number is large.
* Related arguments would be grouped together via a call, rather than a prefix (e.g. logging);
* Related arguments could be checked together: e.g. currently `do_eval` is ignored if `evaluation_strategy` is
not `None`. With a fluent API, an exception could be raised if someone sets `evaluate=False` and sets
`evaluation_strategy` to some meaningful value (this is actually possible to do currently, but would be much
easier to implement if there is a separate call for that).
### Your contribution
Implementing the basic fluent API is easy (e.g. each argument gets its own corresponding method - no change to the API of the constructor), but it does not fulfill all the objectives. Still, I could provide such a PR as a starter for the more user-friendly API.
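For illustration, a minimal sketch of what such a builder could look like (the class name, grouped methods, and `build()` step below are hypothetical, not an existing `transformers` API; only the keyword arguments passed to `TrainingArguments` are real):
```python
from dataclasses import dataclass, field

from transformers import TrainingArguments


@dataclass
class FluentTrainingArguments:
    """Hypothetical builder that collects grouped options and materializes
    a regular ``TrainingArguments`` at the end."""

    output_dir: str
    _kwargs: dict = field(default_factory=dict)

    def evaluate(self, strategy="epoch", batch_size=8):
        self._kwargs.update(
            do_eval=True,
            evaluation_strategy=strategy,
            per_device_eval_batch_size=batch_size,
        )
        return self  # returning self is what enables chaining

    def logging(self, first_step=False, strategy="steps", steps=500):
        self._kwargs.update(
            logging_first_step=first_step,
            logging_strategy=strategy,
            logging_steps=steps,
        )
        return self

    def build(self) -> TrainingArguments:
        return TrainingArguments(self.output_dir, **self._kwargs)


args = (
    FluentTrainingArguments("output")
    .evaluate(strategy="epoch", batch_size=16)
    .logging(first_step=True, strategy="steps", steps=50)
    .build()
)
```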
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20831/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/huggingface/transformers/issues/20831/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20830
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20830/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20830/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20830/events
|
https://github.com/huggingface/transformers/issues/20830
| 1,502,787,647
|
I_kwDOCUB6oc5Zkrg_
| 20,830
|
Pipeline support for image similarity
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Ccing @NielsRogge @osanseviero @nateraw ",
"We don't currently have a `text-similarity`/`sentence-similarity` pipeline either, right? I think for that task, folks use the `feature-extraction` pipeline to get embeddings, then just compute the similarity. [Here's an example.](https://huggingface.co/optimum/sbert-all-MiniLM-L6-with-pooler)\r\n\r\nSo, with that in mind, maybe the pipeline could be an equivalent `image-feature-extraction` for vision?\r\n\r\nUnfortunately, the name '<modality>-feature extractor' is quite confusing since that's what the image processing utils are called still (I think?).",
"> So, with that in mind, maybe the pipeline could be an equivalent `image-feature-extraction` for vision?\r\n\r\nYes, let's do that!\r\n\r\n> Unfortunately, the name '-feature extractor' is quite confusing since that's what the image processing utils are called still (I think?).\r\n\r\n@amyeroberts worked on porting the image feature extractors to `***ImageProcessor` ([example](https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTImageProcessor)). We also throw a warning when users cal `XXXFeatureExtractor` from the library. \r\n\r\nWith that in mind, `image-feature-extraction` does seem alright to me. \r\n\r\n",
"Even with it being legacy, I'm slightly concerned this may become confusing to some users. I'll let some others weight in here!\r\n\r\nI can live with `image-feature-extraction` if nobody else vetos ",
"Afaik `feature-extraction` already works for image feature extraction as well (see https://huggingface.co/google/vit-base-patch16-224-in21k for example)",
"> Afaik `feature-extraction` already works for image feature extraction as well (see https://huggingface.co/google/vit-base-patch16-224-in21k for example)\r\n\r\nI need to try it out to verify if it works with the feature extraction pipeline. Will confirm soon. ",
"Yes I'm not sure there's a need for a new `image-feature-extraction` pipeline, one can leverage the `feature-extraction` pipeline already",
"I verified the feature-extraction pipeline, and it seems like it always assumes the preprocessing will use a tokenizer as opposed to an image processor:\r\n\r\n```py\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-10-55a624159a32> in <module>\r\n 1 image_one = \"https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png\"\r\n 2 \r\n----> 3 image_feature_extractor(image_one)\r\n\r\n3 frames\r\n/usr/local/lib/python3.8/dist-packages/transformers/pipelines/feature_extraction.py in __call__(self, *args, **kwargs)\r\n 103 A nested list of `float`: The features computed by the model.\r\n 104 \"\"\"\r\n--> 105 return super().__call__(*args, **kwargs)\r\n\r\n/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)\r\n 1072 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)\r\n 1073 else:\r\n-> 1074 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n 1075 \r\n 1076 def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params):\r\n\r\n/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py in run_single(self, inputs, preprocess_params, forward_params, postprocess_params)\r\n 1078 \r\n 1079 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):\r\n-> 1080 model_inputs = self.preprocess(inputs, **preprocess_params)\r\n 1081 model_outputs = self.forward(model_inputs, **forward_params)\r\n 1082 outputs = self.postprocess(model_outputs, **postprocess_params)\r\n\r\n/usr/local/lib/python3.8/dist-packages/transformers/pipelines/feature_extraction.py in preprocess(self, inputs, **tokenize_kwargs)\r\n 77 def preprocess(self, inputs, **tokenize_kwargs) -> Dict[str, GenericTensor]:\r\n 78 return_tensors = self.framework\r\n---> 79 model_inputs = self.tokenizer(inputs, return_tensors=return_tensors, **tokenize_kwargs)\r\n 80 return model_inputs\r\n 81 \r\n\r\nTypeError: 'NoneType' object is not callable\r\n```\r\n\r\nThe design choice seems reasonable in case image feature extraction was not considered. \r\n\r\n[Here's](https://colab.research.google.com/gist/sayakpaul/4782955c210397dfcc5306f028cbad3d/feature_extraction_image_similarity.ipynb) my Colab Notebook. \r\n\r\n@nateraw @NielsRogge @osanseviero ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing this as we internally deprioritized this pipeline. "
] | 1,671
| 1,676
| 1,676
|
MEMBER
| null |
### Feature request
Given that we have a [tutorial notebook on image similarity](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) and an [upcoming blog post](https://github.com/huggingface/blog/pull/663), and given the usefulness of the use case, it's time we added a pipeline for this task.
### Motivation
Image similarity is an important use case in the industry.
### Your contribution
Happy to contribute the pipeline.
Following describes some of the design decisions I had in mind for this pipeline.
By default, we provide the [most downloaded image classification model](https://huggingface.co/models?pipeline_tag=image-classification&sort=downloads) (trained on ImageNet-1k).
Image inputs to the `__call__()` of the pipeline would be similar to an [`ImageClassificationPipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.ImageClassificationPipeline), except that the input needs to be a list of two images / URLs, etc.
We return a matrix quantifying the similarity scores (cosine similarity) between all the input images.
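Until a dedicated pipeline exists, the core computation can be sketched with existing building blocks (the model choice and mean-pooling below are illustrative, not a settled design for the pipeline):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

checkpoint = "google/vit-base-patch16-224-in21k"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

images = [Image.open("cat.png"), Image.open("dog.png")]  # placeholder paths
inputs = processor(images=images, return_tensors="pt")

with torch.no_grad():
    # Mean-pool the last hidden state to get one embedding per image.
    embeddings = model(**inputs).last_hidden_state.mean(dim=1)

# Pairwise cosine similarity matrix between all input images.
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = embeddings @ embeddings.T
print(similarity)
```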
We might want to also provide recommendations to the users when using this pipeline. For example, the input images would need to be provided in accordance with the provided model. If you're using a model that was pre-trained / fine-tuned on medical images then there's no point in passing images of cats and dogs to compute similarity over.
Related: https://github.com/huggingface/huggingface.js/issues/338
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20830/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20829
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20829/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20829/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20829/events
|
https://github.com/huggingface/transformers/pull/20829
| 1,502,754,108
|
PR_kwDOCUB6oc5Fxbzx
| 20,829
|
[Swin2SR] Add doc tests
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yeah I remember a discussion with @patrickvonplaten and @sanchit-gandhi, I used `Swin2SRImageProcessor` here instead of the Auto class to make it more explicit. \r\n\r\nBut happy to change. For me, the Auto API is handy when a model doesn't have its own preprocessing class, and for usage in the pipelines.",
"Yes I'm with @LysandreJik here! The conclusion was to move towards `AutoProcessor`/`AutoTokenizer` in the docs (_c.f._ the compelling argument from @LysandreJik https://huggingface.slack.com/archives/C01N44FJDHT/p1667824904971239?thread_ts=1667816128.702279&cid=C01N44FJDHT)."
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds Swin2SR to the doc tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20829/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20829",
"html_url": "https://github.com/huggingface/transformers/pull/20829",
"diff_url": "https://github.com/huggingface/transformers/pull/20829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20829.patch",
"merged_at": 1671613790000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20828
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20828/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20828/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20828/events
|
https://github.com/huggingface/transformers/issues/20828
| 1,502,706,367
|
I_kwDOCUB6oc5ZkXq_
| 20,828
|
GPT Neo - no attention weights scaling in pytorch implementation of GPT Neo
|
{
"login": "FuTSy13",
"id": 38400083,
"node_id": "MDQ6VXNlcjM4NDAwMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38400083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FuTSy13",
"html_url": "https://github.com/FuTSy13",
"followers_url": "https://api.github.com/users/FuTSy13/followers",
"following_url": "https://api.github.com/users/FuTSy13/following{/other_user}",
"gists_url": "https://api.github.com/users/FuTSy13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FuTSy13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FuTSy13/subscriptions",
"organizations_url": "https://api.github.com/users/FuTSy13/orgs",
"repos_url": "https://api.github.com/users/FuTSy13/repos",
"events_url": "https://api.github.com/users/FuTSy13/events{/privacy}",
"received_events_url": "https://api.github.com/users/FuTSy13/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Do you have a source available that says they do use scaling here? Though it is common practice, I don't think it is necessarily required.",
"Yes, it seems you are right. Originally they use mesh_tensorflow realization of multi-head self-attention in which i don't see such scaling.\r\n\r\nBut at the same time in flax realization (https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py#L230) flax.linen.attention.dot_product_attention_weights is used. And in its source code there is scaling. But i don't think that this incosistency is crucial.\r\n\r\n"
] | 1,671
| 1,671
| 1,671
|
NONE
| null |
It seems that there is no scaling of attention weights in the GPT-Neo implementation of self-attention. Probably https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L188 should be modified as follows:
```
# Raw attention scores from the query/key dot product
attn_weights = torch.matmul(query, key.transpose(-1, -2))
# Standard scaled dot-product attention divides by sqrt(head_dim)
scale = self.head_dim ** -0.5
attn_weights = attn_weights * scale
```
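A quick toy check of why the scale matters (not taken from the model code, just an illustration):
```python
import torch

head_dim = 64
query = torch.randn(2, 8, 10, head_dim)
key = torch.randn(2, 8, 10, head_dim)

unscaled = torch.matmul(query, key.transpose(-1, -2))
scaled = unscaled * head_dim ** -0.5

# Unscaled logits have std ~ sqrt(head_dim) = 8, which saturates the softmax;
# scaling brings them back to roughly unit variance.
print(unscaled.std().item(), scaled.std().item())
```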
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20828/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20827
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20827/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20827/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20827/events
|
https://github.com/huggingface/transformers/pull/20827
| 1,502,658,866
|
PR_kwDOCUB6oc5FxHIG
| 20,827
|
add: task guide on video classification model fine-tuning.
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sayakpaul",
"id": 22957388,
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayakpaul",
"html_url": "https://github.com/sayakpaul",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@nateraw I leveraged the video classification pipeline with the custom fine-tuned model and a URL from the [UCF-101 subset](https://huggingface.co/datasets/sayakpaul/ucf101-subset) (`avi` format). It worked like a charm! π₯",
"_The documentation is not available anymore as the PR was closed or merged._",
"@MKhalusova pushed a few changes. Will add another commit adding you as a co-author when you're back online. ",
"@amyeroberts thank you! Addressed all your comments. ",
"@sgugger one small pending part is https://github.com/huggingface/transformers/pull/20827#discussion_r1061285987. Once it's resolved, we should be good to merge. ",
"Updated the links. Will wait for the tests to pass and will merge afterward. "
] | 1,671
| 1,672
| 1,672
|
MEMBER
| null |
This PR adds a task guide on fine-tuning video classification models.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20827/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20827",
"html_url": "https://github.com/huggingface/transformers/pull/20827",
"diff_url": "https://github.com/huggingface/transformers/pull/20827.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20827.patch",
"merged_at": 1672859621000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20826
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20826/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20826/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20826/events
|
https://github.com/huggingface/transformers/pull/20826
| 1,502,656,750
|
PR_kwDOCUB6oc5FxGqn
| 20,826
|
Add-warning-tokenizer
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Should I just change the argument? \r\nI'm in favor of raising an issue rather than a warning here, WDYT? ",
"No I was talking about the docstring, but this is actually addressed by another PR. We can't suddenly raise an error for this behavior as it would be breaking."
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Adds a warning when the user requests a fast tokenizer but none exists. Should help with #20817. A sketch of the intended behavior is below.
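Roughly (the helper name and the exact message here are a sketch, not the actual diff):
```python
import logging

logger = logging.getLogger(__name__)


def resolve_tokenizer_class(tokenizer_class_fast, tokenizer_class_slow, use_fast):
    # Hypothetical shape of the fallback: warn instead of silently
    # returning a slow tokenizer when `use_fast=True` cannot be honored.
    if use_fast and tokenizer_class_fast is None:
        logger.warning(
            "`use_fast` is set to `True` but the tokenizer class does not have a "
            "fast version. Falling back to the slow version."
        )
        return tokenizer_class_slow
    return tokenizer_class_fast if use_fast else tokenizer_class_slow
```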
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20826/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20826",
"html_url": "https://github.com/huggingface/transformers/pull/20826",
"diff_url": "https://github.com/huggingface/transformers/pull/20826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20826.patch",
"merged_at": 1671643114000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20825
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20825/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20825/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20825/events
|
https://github.com/huggingface/transformers/pull/20825
| 1,502,641,062
|
PR_kwDOCUB6oc5FxDOf
| 20,825
|
[`FSMT`] Make it compatible with `xxxForConditionalGeneration` models
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think that `inputs_embed` cannot be added as T5, since [positional embeddings needs to be computed too](https://github.com/huggingface/transformers/blob/ecd7de3dff7ea5713004b2f05e3869c24b8eb6e2/src/transformers/models/fsmt/modeling_fsmt.py#L721), and this requires having `input_ids`, except if the user can pass `inputs_position_embeds` too ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @gsarti \r\nHere is an attempt to fix #20824 , could you double check if this fixes your root issue\r\nI ran: \r\n```\r\nimport inseq\r\n\r\nmodel = inseq.load_model(\"facebook/wmt19-en-de\", \"integrated_gradients\")\r\nout = model.attribute(\r\n \"The developer argued with the designer because her idea cannot be implemented.\",\r\n n_steps=100\r\n)\r\nout.show()\r\n```\r\nbut getting an error which is different from the one described in https://github.com/inseq-team/inseq/issues/153\r\n```\r\nValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. \r\n(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)\r\n```",
"The error you see is a problem of Inseq due to force-setting `max_new_tokens` for generation, you can bypass it by passing a custom `generated_text` argument to `model.attribute` so that no call to model.generate is performed, but only forwards for the attribution.\r\n\r\nTrying this code now:\r\n\r\n```python\r\nimport inseq\r\n\r\nmodel = inseq.load_model(\"facebook/wmt19-en-de\", \"integrated_gradients\")\r\nout = model.attribute(\r\n \"The developer argued with the designer because her idea cannot be implemented.\",\r\n \"Hallo Welt ich bin Gabriele\",\r\n n_steps=100\r\n)\r\nout.show()\r\n```\r\n\r\nI still encounter an error:\r\n\r\n```shell\r\n/usr/local/lib/python3.8/dist-packages/transformers/models/fsmt/modeling_fsmt.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, use_cache, output_attentions, output_hidden_states, inputs_embeds, inputs_position_embeds, decoder_inputs_embeds, decoder_inputs_position_embeds, return_dict)\r\n 1090 decoder_padding_mask, causal_mask = None, None\r\n 1091 \r\n-> 1092 assert decoder_input_ids is not None\r\n 1093 \r\n 1094 if encoder_outputs is None:\r\n\r\nAssertionError:\r\n```\r\n\r\n@younesbelkada Is this check needed now? I presume extra tests for the forward call using embeddings as inputs would fail, too, if they were present!",
"Thank you very much @gsarti for double checking, I adapted the assert condition based on your suggestion, getting now an error since `inputs_position_embeds` needs to be passed too. I think the next fix should go on `inseq` side to support sending `inputs_position_embeds` too",
"Is it standard to have `inputs_position_embeds` as forward inputs? Don't recall seeing them in other models. In principle if the positions are just created from sinusoidals they could be omitted and added in the forward itself, right?",
"Thanks for the heads up @gsarti , you are correct, I overlooked at the code and thought the positional embedding was using nn.Embedding, I can confirm the script you sent me above runs now!",
"You can refer to other models for the usual docstring that we write for `input_ids` etc. Let's keep consistency wherever we can",
"I will try to find time to review tomorrow, but I also wanted to point out @patil-suraj's sync https://github.com/huggingface/transformers/pull/11218 which never got merged, but might be a useful reference as I think the same work was done there.\r\n"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/20824
`FSMT` is an encoder-decoder model. Most `xxxForConditionalGeneration` models accept `inputs_embeds` and `decoder_inputs_embeds` as replacements for `input_ids` and `decoder_input_ids`, respectively.
`FSMT` does not implement this functionality, which breaks some assumptions made by external libraries / APIs; an issue has been flagged and reported in https://github.com/inseq-team/inseq/issues/153
This PR fixes this behavior by adding `inputs_embeds` and `decoder_inputs_embeds` support for `FSMT`, making it consistent with other `xxxForConditionalGeneration` models. The PR also adds `get_encoder` and `get_decoder` methods for `FSMT`, following the implementation of `T5`.
`self.embed_positions(input_ids)` and `self.embed_positions(inputs_embeds[:, :, 0])` are equivalent, as the positional embedding is computed with respect to the shape of the hidden states. But I added an extra check by assuming that all-zero hidden states correspond to padding tokens, as this information is needed later by the position embedding layer.
All FSMT slow tests pass
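A quick sanity check of the new code path could look like this (the embedding attribute path, any internal embedding scaling, and the decoder start token below are assumptions, not part of the diff):
```python
import torch
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-de")
tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")

enc = tokenizer("Machine learning is fun", return_tensors="pt")
# Embed the source ids manually instead of passing `input_ids`
# (attribute path is an assumption; `get_encoder` is added by this PR).
inputs_embeds = model.get_encoder().embed_tokens(enc.input_ids)

out = model(
    inputs_embeds=inputs_embeds,
    attention_mask=enc.attention_mask,
    # eos as decoder start token is an assumption for this sketch
    decoder_input_ids=torch.tensor([[tokenizer.eos_token_id]]),
)
print(out.logits.shape)
```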
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20825/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20825",
"html_url": "https://github.com/huggingface/transformers/pull/20825",
"diff_url": "https://github.com/huggingface/transformers/pull/20825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20825.patch",
"merged_at": 1671703880000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20824
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20824/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20824/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20824/events
|
https://github.com/huggingface/transformers/issues/20824
| 1,502,622,885
|
I_kwDOCUB6oc5ZkDSl
| 20,824
|
FSMT compatibility issues with other `ForConditionalGeneration` models
|
{
"login": "gsarti",
"id": 16674069,
"node_id": "MDQ6VXNlcjE2Njc0MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsarti",
"html_url": "https://github.com/gsarti",
"followers_url": "https://api.github.com/users/gsarti/followers",
"following_url": "https://api.github.com/users/gsarti/following{/other_user}",
"gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsarti/subscriptions",
"organizations_url": "https://api.github.com/users/gsarti/orgs",
"repos_url": "https://api.github.com/users/gsarti/repos",
"events_url": "https://api.github.com/users/gsarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsarti/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
### Description
The implementation of FSMT models like [`facebook/wmt19-en-de`](https://huggingface.co/facebook/wmt19-en-de) is atypical in several respects compared to what is normally supported in other `ForConditionalGeneration` models. In particular:
- Both `FSMTModel` and `FSMTForConditionalGeneration` lack utility methods like `get_encoder` and `get_decoder`
- `FSMTForConditionalGeneration` and all its subclasses do not accept `inputs_embeds` and `decoder_inputs_embeds` as possible alternatives to `input_ids` and `decoder_input_ids` for the forward pass.
In particular, this renders the model unusable when using feature attribution methods through the `inseq` library (see related issue inseq-team/inseq#153).
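Both gaps are easy to reproduce (illustrative snippet on transformers 4.24.0; each call below fails independently, per the points above):
```python
import torch
from transformers import FSMTForConditionalGeneration

model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-de")

# 1) Missing utility methods:
model.get_encoder()  # AttributeError on 4.24.0

# 2) No embedding inputs in the forward pass:
embeds = torch.zeros(1, 4, model.config.d_model)
model(inputs_embeds=embeds)  # TypeError: unexpected keyword argument
```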
### Expected behavior
We would expect all models belonging to the same family to expose a consistent API for external usage.
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help?
@ArthurZucker @younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20824/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20823
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20823/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20823/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20823/events
|
https://github.com/huggingface/transformers/pull/20823
| 1,502,595,616
|
PR_kwDOCUB6oc5Fw5o5
| 20,823
|
[OPT] Adds `GPT2TokenizerFast` to the list of tokenizer to use for OPT.
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The previous tests were passing but the tokenizer was `slow` were it should have been `fast` π
",
"a gentle ping here, as the m4 group needs to have all official opt models to support fast tokenizers.\r\n\r\nThank you, @ArthurZucker!",
"Merging ASAP.",
"cc @ydshieh the failing tests are related to the length of the dictionary of the tokenizer. Spaces are encoded to `222`, which is then passed to the model, while the vocab and embeddings are smaller. I am probably gonna skip it WDYT?",
"> cc @ydshieh the failing tests are related to the length of the dictionary of the tokenizer. Spaces are encoded to `222`, which is then passed to the model, while the vocab and embeddings are smaller. I am probably gonna skip it WDYT?\r\n\r\nHaven't looked this in very detail, but it looks like some bug exists in fast tokenizers. If this is really the case, I am not sure why we want to go ahead to enable the fast tokenizers (by skipping the failing tests).\r\n\r\nBut if I miss any context and saying any non-sense, please correct me.",
"The problem is not from the Fast tokenizer (it is a GPT2 tokenizer) but the tiny config test. \r\nGPT2TokenizerFast is pretty much full proof at this points. I was just wondering if you have a quick fix for this, ( as I said, the tokenizer's vocab lenght is `258` while the model tiny expects no more than `99`, which is causing this issue. This just means that the initialisation of the tiny tokenizer is not correct for this test ( it is using OPT-350m). \r\n",
"Sir, the PR is based on a quite old commit on `main` (2 weeks ago). Could you rebase (well, better to use merge as you have already done `merge` previously) on `main`, and see if things go better/well.\r\n\r\nFYI, the pipeline testing has some (non-trivial) change(s) in the merged PR #20426.\r\n\r\nHint: it's always nice to rebase (or any way you prefer) to have new commits on `main` in a PR.",
"we can also re-do the tiny tokenizers if they don't conform with the needs of the CI.",
"As @ydshieh said it might have been fixed, the problem is that the CI is still kind of stuck/not working ... not really sure if it is only this PR, but otherwise should be good to merge",
"- pipeline test is fine\r\n- torch test failed with `test_save_load_fast_init_to_base ` which is known to be flaky.\r\n\r\n@ArthurZucker Could you check other failing tests - probably they are just flaky ones ..?\r\n",
"I will run them locally π ",
"Good to go! ",
"cc @stas00 sorry for the long wait",
"Thank you very much for taking care of this, Arthur!"
] | 1,671
| 1,677
| 1,675
|
COLLABORATOR
| null |
# What does this PR do?
Adresses the issues with OPT where `use_fast = True` does not use the Fast GPT2 tokenizer.
A follow PR should add a warning when the Fast tokenizer is not available.
This should allow people to do :
```python
>>> from transformers import AutoTokenizers
>>> tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast = True)
>>> tok.is_fast
True
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20823/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20823/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20823",
"html_url": "https://github.com/huggingface/transformers/pull/20823",
"diff_url": "https://github.com/huggingface/transformers/pull/20823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20823.patch",
"merged_at": 1675787731000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20822
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20822/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20822/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20822/events
|
https://github.com/huggingface/transformers/issues/20822
| 1,502,574,587
|
I_kwDOCUB6oc5Zj3f7
| 20,822
|
Train mobileBERT from scratch for other languages
|
{
"login": "fabianbrandscheid",
"id": 38261693,
"node_id": "MDQ6VXNlcjM4MjYxNjkz",
"avatar_url": "https://avatars.githubusercontent.com/u/38261693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabianbrandscheid",
"html_url": "https://github.com/fabianbrandscheid",
"followers_url": "https://api.github.com/users/fabianbrandscheid/followers",
"following_url": "https://api.github.com/users/fabianbrandscheid/following{/other_user}",
"gists_url": "https://api.github.com/users/fabianbrandscheid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabianbrandscheid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabianbrandscheid/subscriptions",
"organizations_url": "https://api.github.com/users/fabianbrandscheid/orgs",
"repos_url": "https://api.github.com/users/fabianbrandscheid/repos",
"events_url": "https://api.github.com/users/fabianbrandscheid/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabianbrandscheid/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi. I am interested in working on this."
] | 1,671
| 1,684
| null |
NONE
| null |
### Model description
Hi,
I am thinking of training a mobileBERT model from scratch for the German language. Can I apply the [English mobileBERT model from HuggingFace](https://huggingface.co/google/mobilebert-uncased) to a dataset in another language? Presumably I would have to swap mobileBERT's teacher model for a BERT model of the corresponding language; unfortunately, I could not find a parameter to change the teacher model.
Are there any other ideas on how best to train a mobileBERT model for another language?
Best regards and many thanks!
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20822/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/20821
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20821/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20821/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20821/events
|
https://github.com/huggingface/transformers/issues/20821
| 1,502,556,371
|
I_kwDOCUB6oc5ZjzDT
| 20,821
|
Write inference evaluation
|
{
"login": "peregilk",
"id": 9079808,
"node_id": "MDQ6VXNlcjkwNzk4MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peregilk",
"html_url": "https://github.com/peregilk",
"followers_url": "https://api.github.com/users/peregilk/followers",
"following_url": "https://api.github.com/users/peregilk/following{/other_user}",
"gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peregilk/subscriptions",
"organizations_url": "https://api.github.com/users/peregilk/orgs",
"repos_url": "https://api.github.com/users/peregilk/repos",
"events_url": "https://api.github.com/users/peregilk/events{/privacy}",
"received_events_url": "https://api.github.com/users/peregilk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
### Feature request
This feature request is strongly inspired by T5X, which writes a log every time it runs an evaluation. The log is a JSON Lines file saved as `inference_eval/task-1000.jsonl`, where "task" is the current task and "1000" is the checkpoint. The file is generic and looks like this:
```json
{
"input": {
"inputs_pretokenized": "Hello World",
"inputs": [###,###,###],
"targets_pretokenized": "Hallo verden",
"targets": [###,###,###]
},
"target": "Hallo verden",
"output": "Hei verden",
"prediction": "Hei verden"
}
```
Long inputs, like audio and images, will typically be truncated.
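For illustration, here is a minimal sketch of what such a logger could look like (the function name, the batch layout, and the truncation rule are all assumptions for illustration, not an existing `transformers` API):
```python
import json

def append_inference_eval(path, batch, predictions, tokenizer, max_len=256):
    # Hypothetical helper: append one JSON-lines record per evaluated example.
    # `batch` is assumed to hold token id lists under "inputs" and "targets".
    with open(path, "a", encoding="utf-8") as f:
        for inputs, targets, output_ids in zip(batch["inputs"], batch["targets"], predictions):
            record = {
                "input": {
                    "inputs_pretokenized": tokenizer.decode(inputs)[:max_len],
                    "inputs": list(inputs),
                    "targets_pretokenized": tokenizer.decode(targets)[:max_len],
                    "targets": list(targets),
                },
                "target": tokenizer.decode(targets)[:max_len],
                "output": tokenizer.decode(output_ids)[:max_len],
                "prediction": tokenizer.decode(output_ids)[:max_len],
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
```
Calling this once per evaluation, with the checkpoint number in the file name (e.g. `inference_eval/task-1000.jsonl`), would mirror the T5X behavior described above.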
### Motivation
This simply makes debugging a lot easier. The jsonl format makes it really easy to open the log in another program for more thorough study. Since the same examples are predicted at every step, it is really easy to follow the development of a single target.
### Your contribution
I would be glad to give feedback on such an implementation. I am not exactly sure where this feature would best be implemented today.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20821/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20820
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20820/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20820/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20820/events
|
https://github.com/huggingface/transformers/issues/20820
| 1,502,405,956
|
I_kwDOCUB6oc5ZjOVE
| 20,820
|
(Will make a PR) `BeamScorer` is super slow, takes 2x the time of the model itself, and can be sped up by 1000%
|
{
"login": "fzyzcjy",
"id": 5236035,
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fzyzcjy",
"html_url": "https://github.com/fzyzcjy",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Update: I made a vectorized version of BeamScorer.\r\n\r\n## Performance results: 1000% faster\r\n\r\n<details>\r\n\r\n### MyBeamSearchScorer: 0.45s\r\n\r\n\r\n\r\n### BeamSearchScorer: 5.72-6.28s\r\n\r\n\r\n\r\n</details>\r\n\r\n## Code\r\n\r\nQuite messy currently, but the core is beam_search.py and the rest are glues and benchmarks\r\n\r\nhttps://gist.github.com/fzyzcjy/fab4bf82c62f23b3432123c84f14a2c6",
"If you are interested please tell me and I will make a PR :)",
"Hey @fzyzcjy π \r\n\r\nWe thrive on external contributions, so you're more than welcome to open a PR. In general, these would be the requirements:\r\n1. Has a significant speedup on either CPU or GPU\r\n2. All existing tests pass\r\n3. The code maintains its readability, such that beam search is easy to understand\r\n\r\nLooking forward to the PR :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"bump, will submit PR when having time (indeed all code are already completed - see link above, just no time to sit down and PR)",
"PR: https://github.com/huggingface/transformers/pull/21234",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.23
- Python version: 3.10.0
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run `.generate(num_beams=3)`, i.e. use beam search.
### Expected behavior
Should be fast, but it is super slow.
I am working on a fix and will soon make a PR (today?). I just want to open this issue first so that I can get some early feedback (e.g. do you welcome PRs?).
One cause I have identified is that `BeamScorer.process` etc. work on torch tensors on the GPU, *one by one*. I refactored it so that it works on numpy arrays on the CPU, and it is 3x faster. An illustrative sketch of the idea follows below.
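Here is an illustrative sketch (not the actual `BeamSearchScorer` code; the function and variable names are made up) of replacing a per-candidate Python loop with one batched numpy pass:
```python
import numpy as np

def select_beams_vectorized(next_scores, next_tokens, eos_token_id, num_beams):
    # next_scores / next_tokens: (batch_size, 2 * num_beams) arrays holding the
    # top candidate scores and token ids for each batch element.
    is_eos = next_tokens == eos_token_id
    # Push finished (EOS) candidates to the bottom, then pick the best
    # `num_beams` survivors per batch element in a single vectorized pass.
    masked = np.where(is_eos, -np.inf, next_scores)
    keep = np.argsort(-masked, axis=1)[:, :num_beams]
    rows = np.arange(next_scores.shape[0])[:, None]
    return next_scores[rows, keep], next_tokens[rows, keep]
```
The real scorer also has to track finished hypotheses, which is exactly where the current implementation falls back to per-beam Python loops.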
---
If you are interested, here are some *very early* results:
before (scorer takes 7.37s)

after (scorer 2.72s, still slow but better)

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20820/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20819
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20819/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20819/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20819/events
|
https://github.com/huggingface/transformers/pull/20819
| 1,502,207,877
|
PR_kwDOCUB6oc5FvlZH
| 20,819
|
Add `min_new_tokens` argument in generate() implementation
|
{
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"(Note: this PR depends on the resolution of https://github.com/huggingface/transformers/issues/20814, so I'm waiting for it before I review this one)",
"> (Note: this PR depends on the resolution of #20814, so I'm waiting for it before I review this one)\r\n\r\nGreat, let me know if there are anything I can help.",
"Sorry that I have accidently rebased my PR to include the change of #20892. It seems that my operation will link this PR to all pervious commits that have been merged to main.\r\n\r\nI think it would be more convenient to make a new PR from the main branch. So I am closing this one. @gante ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20819). All of your documentation changes will be reflected on that endpoint.",
"I have made a new PR #21044 "
] | 1,671
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #20756 #20814 #20614 (cc @gonced8 @kotikkonstantin)
As many have said, it is better to add an argument `min_new_tokens` to the `.generate()` method to limit the length of newly generated tokens. The current parameter `min_length` limits the length of `prompt + newly generated tokens`, not the length of the `newly generated tokens` alone.
All tests appear to pass.
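For illustration, the intended usage after this change would look roughly like the following (a sketch, assuming `min_new_tokens` is wired through `generate` alongside the existing `max_new_tokens`):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids
# min_new_tokens counts only the newly generated tokens, not the prompt
outputs = model.generate(input_ids, min_new_tokens=16, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```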
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20819/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20819",
"html_url": "https://github.com/huggingface/transformers/pull/20819",
"diff_url": "https://github.com/huggingface/transformers/pull/20819.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20819.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20818
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20818/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20818/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20818/events
|
https://github.com/huggingface/transformers/pull/20818
| 1,502,058,713
|
PR_kwDOCUB6oc5FvFBH
| 20,818
|
[clip] fix error message
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
This PR fixes the error message:
```
You have to specify either input_ids
```
The code doesn't actually offer any `either` options that I can see, so the word was probably there by mistake.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20818/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20818",
"html_url": "https://github.com/huggingface/transformers/pull/20818",
"diff_url": "https://github.com/huggingface/transformers/pull/20818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20818.patch",
"merged_at": 1671467116000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20817
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20817/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20817/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20817/events
|
https://github.com/huggingface/transformers/issues/20817
| 1,502,003,006
|
I_kwDOCUB6oc5Zhr8-
| 20,817
|
`AutoTokenizer` not enforcing `use_fast=True`
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"The name has been around for so long that we won't change it. It's not ideal but it is what is π€·ββοΈ We can definitely improve the documentation however!\r\n\r\nUnrelated: why does OPT not create the fast tokenizer on the fly from the slow one @ArthurZucker ? This seems like abug.",
"It is indeed a bug and people seem to be confused. IMO we should add a warning when `use_fast` is set to `True` but a fast tokenizer does not exists. Will have a look at why OPT does not create the fast tokenizer π ",
"If you have to use a warnings in this situation it's a sign that API needs to be improved. Warnings rarely work as there are dozens/hundreds of them emitted by most applications and a user is unlikely to notice it. That's just my experience-based opinion, of course.\r\n\r\nIf the old name can't be deprecated, I'd leave it alone and update the doc as a I suggested in the OP and add a new arg `require_fast=True` which would assert if the requirement can't be met. So the first one is preference, the second one is a requirement. That would make for an unambiguous yet flexible API.\r\n\r\n> Unrelated: why does OPT not create the fast tokenizer on the fly from the slow one @ArthurZucker ? This seems like abug.\r\n\r\nsome of the OPT models do and some don't, you can see in the OP both examples are OPT models.",
"Agreed, the problem is now the inconsistency between two models. If it is only `OPT` related we can leave it as is, otherwise will have a look",
"It is indeed a bug, the `facebook/opt-1.3b` tokenizer config is missing the `tokenizer_type` variable. And the use_fast argument is not passed down properly in that case. The fix is here #20823 ",
"so where are we with this Issue, @ArthurZucker? Thank you!\r\n\r\nAs it will get closed by the stale bot.",
"I think the doc has been updated and the OPT model where there was a problem has been fixed, so the issue is ready to be closed no?",
"Yes, I re-opened it because I thought we should probably raise and error if the tokenizer is not fast, but feel free to close. ",
"As was said before here, either raising an error or renaming the argument would be too much of a breaking change for something that has been around for three years."
] | 1,671
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This issue is about `AutoTokenizer` not enforcing `use_fast=True`.
This works:
```
$ python -c "from transformers import AutoTokenizer; t=AutoTokenizer.from_pretrained('facebook/opt-13b', use_fast=True); \
assert t.is_fast, 'tokenizer is not fast'; print('Success')"
Success
```
now the same code, but a different model, 'facebook/opt-1.3b', that doesn't have a fast tokenizer:
```
$ python -c "from transformers import AutoTokenizer; t=AutoTokenizer.from_pretrained('facebook/opt-1.3b', use_fast=True); \
assert t.is_fast, 'tokenizer is not fast'; print('Success')"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AssertionError: tokenizer is not fast
```
now the doc says:
```
use_fast (bool, optional, defaults to True) — Whether or not to try to load the fast version of the tokenizer.
```
so it sort of hints with "try to load" that it won't enforce it. But would you be open to a less ambiguous definition? something like:
```
use_fast (bool, optional, defaults to True) — Will try to load the fast version of the tokenizer if there is one, and
will quietly fall back to the normal (slower) tokenizer if the model doesn't provide a fast one.
```
I think the `use_fast` arg name is ambiguous - I'd have renamed it to `try_to_use_fast`, since currently, if one must use the fast tokenizer, one has to additionally check whether `AutoTokenizer.from_pretrained` returned the slow version, as in the snippet below.
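A minimal sketch of that extra check (plain usage of the existing API, nothing new):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=True)
if not tok.is_fast:  # use_fast=True is only a preference, not a guarantee
    raise ValueError("A fast tokenizer is required but none is available")
```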
not sure, open to suggestions.
context: in m4 the codebase currently requires a fast tokenizer.
Thank you!
cc: @ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20817/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20816
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20816/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20816/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20816/events
|
https://github.com/huggingface/transformers/pull/20816
| 1,501,954,483
|
PR_kwDOCUB6oc5FuwIW
| 20,816
|
Add visual prompt to processor of CLIPSeg model
|
{
"login": "idilsulo",
"id": 19615018,
"node_id": "MDQ6VXNlcjE5NjE1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19615018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idilsulo",
"html_url": "https://github.com/idilsulo",
"followers_url": "https://api.github.com/users/idilsulo/followers",
"following_url": "https://api.github.com/users/idilsulo/following{/other_user}",
"gists_url": "https://api.github.com/users/idilsulo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idilsulo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idilsulo/subscriptions",
"organizations_url": "https://api.github.com/users/idilsulo/orgs",
"repos_url": "https://api.github.com/users/idilsulo/repos",
"events_url": "https://api.github.com/users/idilsulo/events{/privacy}",
"received_events_url": "https://api.github.com/users/idilsulo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @alaradirik, thanks for the review! Added a test to [test_processor_clipseg.py](https://github.com/huggingface/transformers/blob/main/tests/models/clipseg/test_processor_clipseg.py) as well.",
"> \r\n\r\nHello @sgugger - I am aware that the argument can be passed at the end, but this also opens ways for faulty usage to users who do not know how CLIPSeg model processes their input.\r\n\r\nLet's see a working example:\r\n\r\n```\r\nimport torch\r\nfrom transformers import CLIPSegProcessor, CLIPSegForImageSegmentation\r\nprocessor = CLIPSegProcessor.from_pretrained(\"CIDAS/clipseg-rd64-refined\")\r\nmodel = CLIPSegForImageSegmentation.from_pretrained(\"CIDAS/clipseg-rd64-refined\")\r\n\r\nfrom PIL import Image\r\nimport requests\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ntext = [\"background\", \"cat\"]\r\nimages = [image]*2\r\n```\r\n\r\n```\r\ninputs = processor(text, images, return_tensors=\"pt\") # the processor also returns the text embedding (which should not be used)\r\nwith torch.no_grad():\r\n outputs = model(**inputs, conditional_pixel_values=inputs.pixel_values)\r\n```\r\n\r\nWhat did the model process in the above line? Is it visual prompt + image or text prompt + image? It seems like it is still processing the textual prompt + image pair. \r\n\r\nWhy? Let's try to fail it:\r\n\r\n```\r\ninputs = processor(text, images, return_tensors=\"pt\")\r\nvisual_prompt_input = processor(images=[image], return_tensors=\"pt\") # Additional prompt with length 1\r\nwith torch.no_grad():\r\n outputs = model(**inputs, conditional_pixel_values=visual_prompt_input.pixel_values)\r\n```\r\n\r\nHere first processor computes `text` and `images` arguments with length 2. Second one, however, only takes a single image. This does not fail the model as it still processes a text prompt + image pair rather than visual prompt (the one passed via `conditional_pixel_values`).\r\n\r\n**Side note:** The [processor of OWL-ViT](https://github.com/huggingface/transformers/blob/main/src/transformers/models/owlvit/processing_owlvit.py) also has an additional argument (i.e. `query_images`) in addition to `images` and `text`. An idea might be to add `visual_prompt` as the third argument (as done in OWL-ViT) so that it would not break anything as @NielsRogge suggested.\r\n\r\nThanks for taking your time!"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Currently, the integrated CLIPSeg model only supports textual prompts. However, a main advantage of CLIPSeg is that one can provide visual prompts instead of textual prompts for semantic segmentation. For further details, you can refer to the original _Image Segmentation Using Text and Image Prompts (CVPR 2022)_ paper [here](https://openaccess.thecvf.com/content/CVPR2022/html/Luddecke_Image_Segmentation_Using_Text_and_Image_Prompts_CVPR_2022_paper.html).
This change can easily be added to the current `CLIPSegProcessor` by providing an additional parameter which processes the visual prompt via the image processor and returns it under an additional key, i.e. `conditional_pixel_values`.
This PR complements the work done in [this](https://github.com/huggingface/transformers/pull/20066) previous pull request.
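To make the intended usage concrete, here is a rough sketch based on this PR's description (the `visual_prompt` argument and the `conditional_pixel_values` key are the additions proposed here; treat the snippet as illustrative rather than final API):
```python
import torch
import requests
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
visual_prompt = image.crop((0, 0, 200, 200))  # any exemplar image of the target class

# The processor runs the visual prompt through the image processor and returns
# it under the extra `conditional_pixel_values` key added by this PR.
inputs = processor(images=[image], visual_prompt=[visual_prompt], return_tensors="pt")
with torch.no_grad():
    outputs = model(
        pixel_values=inputs.pixel_values,
        conditional_pixel_values=inputs.conditional_pixel_values,
    )
```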
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -> Not discussed, but only requires a minor change to fully support CLIPSeg model.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests? -> Previous tokenizer and image processor tests apply.
## Who can review?
Anyone in the community is free to review the PR.
Feel free to tag members/contributors who may be interested in your PR. @NielsRogge @sgugger @alaradirik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20816/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20816",
"html_url": "https://github.com/huggingface/transformers/pull/20816",
"diff_url": "https://github.com/huggingface/transformers/pull/20816.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20816.patch",
"merged_at": 1671625425000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20815
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20815/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20815/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20815/events
|
https://github.com/huggingface/transformers/issues/20815
| 1,501,884,120
|
I_kwDOCUB6oc5ZhO7Y
| 20,815
|
Cannot export Deberta to TorchScript
|
{
"login": "SohamTamba",
"id": 11683616,
"node_id": "MDQ6VXNlcjExNjgzNjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/11683616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SohamTamba",
"html_url": "https://github.com/SohamTamba",
"followers_url": "https://api.github.com/users/SohamTamba/followers",
"following_url": "https://api.github.com/users/SohamTamba/following{/other_user}",
"gists_url": "https://api.github.com/users/SohamTamba/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SohamTamba/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SohamTamba/subscriptions",
"organizations_url": "https://api.github.com/users/SohamTamba/orgs",
"repos_url": "https://api.github.com/users/SohamTamba/repos",
"events_url": "https://api.github.com/users/SohamTamba/events{/privacy}",
"received_events_url": "https://api.github.com/users/SohamTamba/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] |
open
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"Yes, this model is not compatible with torchscript, cc @ArthurZucker ",
"Thanks, will take that into account when refactoring",
"Go away stalebot",
"Any update here?",
"Just started working on this! π ",
"Sorry! Seem like I had to postpone this! If anyone want to take over feel free to do it, otherwise will be my priority once #23909 is merge! ",
"More delays given the recent sprints! But I think it should calm down during this summer! π ",
"Any update on this?",
"I'll add a good first issue tag on this! Slightly more involved than a documentation change, but should be possible by someone willing to put a cycle into it.",
"Thanks a ton! ",
"i would like to work on this",
"Go ahead @riyasachdeva04, feel free to open a PR",
"Can I take a crack at it too? @LysandreJik \r\n",
"Hey, I have noticed this issue has been silent for a while. I would like to start contributing, can I have a look at it? @LysandreJik @ArthurZucker ",
"First come first served, please feel free to open a PR and link this issue and we'll be happy to review!",
"How is everyone doing with this? Is it worth it for me to give it a shot?"
] | 1,671
| 1,706
| null |
NONE
| null |
### System Info
`transformers-cli env`
```
- `transformers` version: 4.10.2
- Platform: Linux-3.10.0-1127.18.2.el7.x86_64-x86_64-with-glibc2.23
- Python version: 3.9.13
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to convert the Deberta Model to TorchScript using the instructions provided in the [HF tutorial](https://huggingface.co/docs/transformers/torchscript).
`Code:`
```
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModel.from_pretrained("microsoft/deberta-base", torchscript=True)
tokenized_dict = tokenizer(
["Is this working",], ["Not yet",],
return_tensors="pt"
)
input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'])
traced_model = torch.jit.trace(model, input_tuple)
torch.jit.save(traced_model, "compiled_deberta.pt")
```
`Error Message:`
From torch.jit.save:
```
Could not export Python function call 'XSoftmax'. Remove calls to Python functions before export. Did you forget to add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
```
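For context, `XSoftmax` is a custom `torch.autograd.Function`, and `torch.jit.trace` cannot serialize calls to such Python functions. A minimal sketch of a traceable replacement built from standard ops (assuming the usual DeBERTa convention that `mask` is 1 for valid positions) would be:
```python
import torch

def masked_softmax(scores: torch.Tensor, mask: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Mirrors what XSoftmax computes in its forward pass, using only standard
    # ops so TorchScript can export it (autograd is then handled automatically).
    rmask = ~mask.to(torch.bool)
    scores = scores.masked_fill(rmask, torch.finfo(scores.dtype).min)
    probs = torch.softmax(scores, dim)
    return probs.masked_fill(rmask, 0.0)
```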
### Expected behavior
The Traced model should be successfully saved. After loading, it should have the same functional behavior as the model it was traced from.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20815/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20815/timeline
|
reopened
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20814
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20814/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20814/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20814/events
|
https://github.com/huggingface/transformers/issues/20814
| 1,501,841,267
|
I_kwDOCUB6oc5ZhEdz
| 20,814
|
`min_new_tokens` argument in generate() implementation
|
{
"login": "kotikkonstantin",
"id": 22777646,
"node_id": "MDQ6VXNlcjIyNzc3NjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/22777646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kotikkonstantin",
"html_url": "https://github.com/kotikkonstantin",
"followers_url": "https://api.github.com/users/kotikkonstantin/followers",
"following_url": "https://api.github.com/users/kotikkonstantin/following{/other_user}",
"gists_url": "https://api.github.com/users/kotikkonstantin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kotikkonstantin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kotikkonstantin/subscriptions",
"organizations_url": "https://api.github.com/users/kotikkonstantin/orgs",
"repos_url": "https://api.github.com/users/kotikkonstantin/repos",
"events_url": "https://api.github.com/users/kotikkonstantin/events{/privacy}",
"received_events_url": "https://api.github.com/users/kotikkonstantin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have made a PR #20819 to add this augment to the `generate()` implementation. r",
"Hey @kotikkonstantin π Can you open a PR with your proposed `MinNewTokensLengthLogitsProcessor`? It looks good to me, except for one detail -- it shouldn't inherit from `MinLengthLogitsProcessor`, as our long run goal is to deprecate it :)\r\n\r\nAfter we merge `MinNewTokensLengthLogitsProcessor`, we can integrate it with `generate` with @silverriver's PR!",
"Hi @gante ! Sure) I'm making it for a couple of days. Thank you very much for the feedback!",
"Hey @gante π A PR is ready ",
"Actually, this is not done yet, as #20819 needs to be merged to be usable with `generate` :)",
"Hi, I have closed my original PR #20819 and made a new one #21044 to avoid messing with a bunch of other commits when I tried to rebase my commit.\r\n\r\n#21044 is implemented based on the new `MinNewTokensLengthLogitsProcessor`. @gante Please have a look.",
"(should be done now :) )"
] | 1,671
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
### Feature request
As many have said:
- [link1](https://github.com/huggingface/transformers/issues/20614) cc @silverriver
- [link2](https://github.com/huggingface/transformers/issues/20756) cc @gonced8
Add a new parameter `min_new_tokens` to the `.generate()` method to limit the length of newly generated tokens. The current parameter `min_length` limits the length of `prompt + newly generated tokens`, not the length of the newly generated tokens alone.
I've come up with a solution by creating a new logits processor:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation_utils import MinLengthLogitsProcessor
class MinNewTokensLengthLogitsProcessor(MinLengthLogitsProcessor):
r"""
    [`MinNewTokensLengthLogitsProcessor`] enforcing a min-length of new tokens by setting EOS probability to 0.
    Args:
        min_length (`int`):
            The minimum number of new tokens below which the score of `eos_token_id` is set to `-float("Inf")`.
eos_token_id (`int`):
The id of the *end-of-sequence* token.
"""
def __init__(self, min_length: int, eos_token_id: int):
super().__init__(min_length, eos_token_id)
self.prompt_length_to_skip = None
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
if self.prompt_length_to_skip is None:
self.prompt_length_to_skip = input_ids.shape[-1]
current_length = input_ids.shape[-1] - self.prompt_length_to_skip
if current_length < self.min_length:
scores[:, self.eos_token_id] = -float("inf")
return scores
if __name__ == '__main__':
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/rugpt_chitchat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'})
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()
input_text = """<s>- ΠΡΠΈΠ²Π΅Ρ! Π§ΡΠΎ Π΄Π΅Π»Π°Π΅ΡΡ?
- ΠΡΠΈΠ²Π΅Ρ :) Π ΡΠ°ΠΊΡΠΈ Π΅Π΄Ρ
-"""
encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device)
output_sequences = model.generate(input_ids=encoded_prompt,
logits_processor=[MinNewTokensLengthLogitsProcessor(16, tokenizer.eos_token_id)],
max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id
)
text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:]
text = text[: text.find('</s>')]
print(f"Length of generated text W/ MinNewTokensLengthLogitsProcessor: {len(text)}")
print(text)
output_sequences = model.generate(input_ids=encoded_prompt, min_length=16,
max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id
)
text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text) + 1:]
text = text[: text.find('</s>')]
print(f"Length of generated text W/O MinNewTokensLengthLogitsProcessor: {len(text)}")
print(text)
```
**Outcome of executing the script:**
<img width="1575" alt="image" src="https://user-images.githubusercontent.com/22777646/208291420-9874c4ad-e63e-4ef8-bf8a-ec249748c50f.png">
Used transformers package version: 4.24.0
**But I'd recommend doing it more simply, via the requested `min_new_tokens` argument in `.generate()`.**
### Motivation
**The motivation is to control the minimum length of the newly generated reply.**
### Your contribution
by submitting a PR
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20814/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20814/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20813
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20813/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20813/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20813/events
|
https://github.com/huggingface/transformers/issues/20813
| 1,501,819,040
|
I_kwDOCUB6oc5Zg_Cg
| 20,813
|
Relative path causes error when calling push_to_hub to upload a custom model
|
{
"login": "lazyhope",
"id": 78585060,
"node_id": "MDQ6VXNlcjc4NTg1MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/78585060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lazyhope",
"html_url": "https://github.com/lazyhope",
"followers_url": "https://api.github.com/users/lazyhope/followers",
"following_url": "https://api.github.com/users/lazyhope/following{/other_user}",
"gists_url": "https://api.github.com/users/lazyhope/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lazyhope/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lazyhope/subscriptions",
"organizations_url": "https://api.github.com/users/lazyhope/orgs",
"repos_url": "https://api.github.com/users/lazyhope/repos",
"events_url": "https://api.github.com/users/lazyhope/events{/privacy}",
"received_events_url": "https://api.github.com/users/lazyhope/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I'm not too sure I understand your use case. The code sample you provide is indeed not supported as you are just re-using the code of the library, so you should just remove the line registering your new model. When defining a custom model, the modeling file will be exported in the repo and shouldn't indeed contain any relative imports (you'll need to convert them to regular imports)",
"> I'm not too sure I understand your use case. The code sample you provide is indeed not supported as you are just re-using the code of the library, so you should just remove the line registering your new model. When defining a custom model, the modeling file will be exported in the repo and shouldn't indeed contain any relative imports (you'll need to convert them to regular imports)\r\n\r\nI have a custom RobertaRBERT model stored in my local directory `custom/robertarbert.py`, previously I use the following code to load the checkpoint:\r\n```\r\nconfig = AutoConfig.from_pretrained(PLM, use_auth_token=access_token)\r\nconfig.dropout_rate = 0.1\r\nmodel = RobertaRBERT.from_pretrained(PLM, config=config, use_auth_token=access_token)\r\n```\r\nNow I want to upload this local `RobertaRBERT` class definition to the hub, and hopefully use `AutoModel.from_pretained` to load it directly without loading config again, so I followed this document: https://huggingface.co/docs/transformers/custom_models, and running the following code:\r\n```\r\nfrom custom.robertarbert import RobertaRBERT\r\nfrom transformers import AutoConfig\r\n\r\nconfig = AutoConfig.from_pretrained(\"Lazyhope/python-clone-detection\")\r\nmodel = RobertaRBERT.from_pretrained(\"Lazyhope/python-clone-detection\", config=config)\r\nconfig.register_for_auto_class()\r\nmodel.register_for_auto_class(\"AutoModel\")\r\n\r\nmodel.push_to_hub(\"Lazyhope/new_model\", use_auth_token = access_token)\r\n``` \r\ncaused the error I mentioned above, is it because my code was incorrect?",
"Like I said, the custom modeling file shouldn't contain any relative imports.",
"> Like I said, the custom modeling file shouldn't contain any relative imports.\r\n\r\nHere is my custom model file:\r\n```\r\nimport torch.nn as nn\r\nfrom transformers import (\r\n RobertaPreTrainedModel,\r\n RobertaModel,\r\n)\r\nfrom transformers.modeling_outputs import SequenceClassifierOutput\r\n\r\n\r\nclass RobertaRBERT(RobertaPreTrainedModel):\r\n```\r\nI think it doesn't contain any relative imports",
"@sgugger You could also reproduce the error by calling \r\n`get_relative_import_files('src/transformers/models/roberta/configuration_roberta.py')`\r\nwhich is defined in https://github.com/huggingface/transformers/blob/f76518e56a5ef0836a780630de6f5b4456e9aa4a/src/transformers/dynamic_module_utils.py#L81\r\n\r\nI suppose there is a bug in the relative import extracting function?",
"It's a known limitation: the relative imports are only permitted at one level and no more for custom models.",
"> It's a known limitation: the relative imports are only permitted at one level and no more for custom models.\r\n\r\nIs there a workaround? As you can see above my custom model file doesn't contain any relative import but it still doesn't work.",
"Was a little bit confused by the doc, turns out I just need to use the following code:\r\n```\r\nCloneDetectionModel.register_for_auto_class(\"AutoModel\")\r\ncustom_model = CloneDetectionModel.from_pretrained(\"<PLM>\", config=config)\r\ncustom_model.push_to_hub(\"<DIR>\")\r\n```"
] | 1,671
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
By running the following code and filling in `<YOUR-REPO>` and `<YOUR-TOKEN>`:
```
from transformers import AutoConfig, AutoModel
config = AutoConfig.from_pretrained("Lazyhope/python-clone-detection")
model = AutoModel.from_pretrained("Lazyhope/python-clone-detection", config=config)
config.register_for_auto_class()
model.register_for_auto_class("AutoModel")
model.push_to_hub("<YOUR-REPO>", use_auth_token = "<YOUR-TOKEN>")
```
### Expected behavior
Hi, when I was trying to upload a custom model which inherits from `transformers.RobertaPreTrainedModel`, the following error occurred:
```
FileNotFoundError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
[Errno 2] No such file or directory: '/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/models/roberta/.bert.configuration_bert.py'
File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/dynamic_module_utils.py", line 70, in get_relative_imports
with open(module_file, "r", encoding="utf-8") as f:
File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/dynamic_module_utils.py", line 97, in get_relative_import_files
new_imports.extend(get_relative_imports(f))
File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/dynamic_module_utils.py", line 439, in custom_object_save
for needed_file in get_relative_import_files(object_file):
File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/configuration_utils.py", line 441, in save_pretrained
custom_object_save(self, save_directory, config=self)
File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1579, in save_pretrained
model_to_save.config.save_pretrained(save_directory)
File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/utils/hub.py", line 790, in push_to_hub
self.save_pretrained(work_dir, max_shard_size=max_shard_size)
File "/Users/rino/Desktop/RepoAnalysis/huggingface/register.py", line 9, in <module>
model.push_to_hub("Lazyhope/python-clone-detection", user_auth_token=<Hidden>)
```
It seems that relative imports like https://github.com/huggingface/transformers/blob/7032e0203262ebb2ebf55da8d2e01f873973e835/src/transformers/models/roberta/modeling_roberta.py#L27 were directly turned into paths like `/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/models/roberta/..modeling_outputs.py`, and opening this kind of path in https://github.com/huggingface/transformers/blob/7032e0203262ebb2ebf55da8d2e01f873973e835/src/transformers/dynamic_module_utils.py#L70 causes a `FileNotFoundError`.
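To illustrate, here is a rough, simplified sketch of the lookup (not the exact library code; paths are illustrative) showing how the leading dots of deeper relative imports survive into the joined path:
```
import os
import re

def get_relative_imports(module_file):
    with open(module_file, "r", encoding="utf-8") as f:
        content = f.read()
    # The capture keeps everything after the *first* dot, so:
    #   `from ..bert.configuration_bert import X` -> ".bert.configuration_bert"
    #   `from ...modeling_outputs import Y`       -> "..modeling_outputs"
    return re.findall(r"^\s*from\s+\.(\S+)\s+import", content, flags=re.MULTILINE)

module_path = "site-packages/transformers/models/roberta"
for mod in get_relative_imports(os.path.join(module_path, "modeling_roberta.py")):
    # e.g. "site-packages/transformers/models/roberta/..modeling_outputs.py",
    # joined verbatim, so it never resolves to a real file.
    print(os.path.join(module_path, mod) + ".py")
```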
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20813/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20812
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20812/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20812/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20812/events
|
https://github.com/huggingface/transformers/pull/20812
| 1,501,641,046
|
PR_kwDOCUB6oc5FtvQM
| 20,812
|
Add visual prompt to clipseg processor
|
{
"login": "idilsulo",
"id": 19615018,
"node_id": "MDQ6VXNlcjE5NjE1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19615018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idilsulo",
"html_url": "https://github.com/idilsulo",
"followers_url": "https://api.github.com/users/idilsulo/followers",
"following_url": "https://api.github.com/users/idilsulo/following{/other_user}",
"gists_url": "https://api.github.com/users/idilsulo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idilsulo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idilsulo/subscriptions",
"organizations_url": "https://api.github.com/users/idilsulo/orgs",
"repos_url": "https://api.github.com/users/idilsulo/repos",
"events_url": "https://api.github.com/users/idilsulo/events{/privacy}",
"received_events_url": "https://api.github.com/users/idilsulo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently, the integrated CLIPSeg model only supports textual prompts. However, a main advantage of CLIPSeg is that one can provide visual prompts instead of textual prompts in order to do semantic segmentation. For further details, you can refer to the original _Image Segmentation Using Text and Image Prompts (CVPR 2022)_ paper [here](https://openaccess.thecvf.com/content/CVPR2022/html/Luddecke_Image_Segmentation_Using_Text_and_Image_Prompts_CVPR_2022_paper.html).
This change can easily be adapted to the current `CLIPSegProcessor` by providing an additional parameter that processes the visual prompt via the image processor and returns the embedding under an additional key, i.e. `conditional_pixel_values`.
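A rough usage sketch of what this enables (the `visual_prompt` keyword follows this PR's proposal; the checkpoint name and image paths are illustrative):
```
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("scene.jpg")          # image to segment
prompt_image = Image.open("prompt.jpg")  # visual prompt instead of text

# With a visual prompt, the processor also returns `conditional_pixel_values`
# alongside the usual `pixel_values`.
inputs = processor(images=image, visual_prompt=prompt_image, return_tensors="pt")
outputs = model(
    pixel_values=inputs["pixel_values"],
    conditional_pixel_values=inputs["conditional_pixel_values"],
)
```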
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -> Not discussed, but only requires a minor change to fully support CLIPSeg model.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests? -> Previous tokenizer and image processor tests apply.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20812/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20812",
"html_url": "https://github.com/huggingface/transformers/pull/20812",
"diff_url": "https://github.com/huggingface/transformers/pull/20812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20812.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20811
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20811/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20811/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20811/events
|
https://github.com/huggingface/transformers/pull/20811
| 1,501,631,373
|
PR_kwDOCUB6oc5FttRk
| 20,811
|
Add visual prompt to processor
|
{
"login": "idilsulo",
"id": 19615018,
"node_id": "MDQ6VXNlcjE5NjE1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19615018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idilsulo",
"html_url": "https://github.com/idilsulo",
"followers_url": "https://api.github.com/users/idilsulo/followers",
"following_url": "https://api.github.com/users/idilsulo/following{/other_user}",
"gists_url": "https://api.github.com/users/idilsulo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idilsulo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idilsulo/subscriptions",
"organizations_url": "https://api.github.com/users/idilsulo/orgs",
"repos_url": "https://api.github.com/users/idilsulo/repos",
"events_url": "https://api.github.com/users/idilsulo/events{/privacy}",
"received_events_url": "https://api.github.com/users/idilsulo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently, the integrated CLIPSeg model only supports textual prompts. However, a main advantage of CLIPSeg is that one can provide visual prompts instead of textual prompts in order to do semantic segmentation. For further details, you can refer to the original _Image Segmentation Using Text and Image Prompts (CVPR 2022)_ paper [here](https://openaccess.thecvf.com/content/CVPR2022/html/Luddecke_Image_Segmentation_Using_Text_and_Image_Prompts_CVPR_2022_paper.html).
This change can easily be adapted to the current `CLIPSegProcessor` by providing an additional parameter that processes the visual prompt via the image processor and returns the embedding under an additional key, i.e. `conditional_pixel_values`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -> Not discussed, but only requires a minor change to fully support CLIPSeg model.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests? -> Previous tokenizer and image processor tests apply.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20811/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20811",
"html_url": "https://github.com/huggingface/transformers/pull/20811",
"diff_url": "https://github.com/huggingface/transformers/pull/20811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20811.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20810
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20810/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20810/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20810/events
|
https://github.com/huggingface/transformers/issues/20810
| 1,501,604,736
|
I_kwDOCUB6oc5ZgKuA
| 20,810
|
group_by_length in Seq2SeqTrainer
|
{
"login": "maximzubkov",
"id": 47659865,
"node_id": "MDQ6VXNlcjQ3NjU5ODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/47659865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximzubkov",
"html_url": "https://github.com/maximzubkov",
"followers_url": "https://api.github.com/users/maximzubkov/followers",
"following_url": "https://api.github.com/users/maximzubkov/following{/other_user}",
"gists_url": "https://api.github.com/users/maximzubkov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximzubkov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximzubkov/subscriptions",
"organizations_url": "https://api.github.com/users/maximzubkov/orgs",
"repos_url": "https://api.github.com/users/maximzubkov/repos",
"events_url": "https://api.github.com/users/maximzubkov/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximzubkov/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"\n\nSpiking loss example ",
"I am not too sure where the bugs lies here. The training loss with samples of varying length will always be noisy. If we had a shuffle you will see random spikes instead of regular ones, but it won't make this noisy behavior disappear.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I met the same issue. I think we should reopen this issue"
] | 1,671
| 1,683
| 1,674
|
NONE
| null |
### System Info
Hey, huggingface team!
- `transformers` version: 4.24.0
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, but the error does not depend on it
- Using distributed or parallel set-up in script?: Yes, but the error does not depend on it
### Who can help?
@ArthurZucker @younesbelkada @sg
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was working on a project and tried `group_by_length` in [Seq2SeqTrainer](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainer) and observed spikes in the loss function. I attribute it to the implementation of the `get_length_grouped_indices` function. This function groups samples into `megabatches` of `mega_batch_mult * batch_size` samples and sorts samples by length inside each `megabatch`. Here is an example of why this may not be the right way to do it:
```
import numpy as np
import torch
from transformers.trainer_pt_utils import get_length_grouped_indices
lengths = np.random.permutation(list(range(20))).tolist()
batch_size = 2
ids = get_length_grouped_indices(lengths=lengths, mega_batch_mult=3, batch_size=batch_size)
[lengths[i] for i in ids]
```
The output looks like:
```
[19, 14, 11, 10, 4, 2, 17, 13, 12, 8, 7, 5, 18, 16, 9, 6, 1, 0, 15, 3]
```
And after sequential batching:
```
batches = [
[19, 14],
[11, 10],
[4, 2],
[17, 13],
[12, 8],
[7, 5],
[18, 16],
[9, 6],
[1, 0],
[15, 3]
]
```
So after tokenization, the `max_length` of each batch is very spiky. In the example: `[19, 11, 4, 17, 12, 7, 18, 9, 1, 15]`. On a bigger scale, this results in spikes in the loss function (`mega_batch_mult` has a default value of 50, so the batch `max_length` gradually decreases over 50 steps, and the spike happens every 50 steps; example in the comments)
### Expected behavior
Maybe an additional shuffle of the batches inside each `megabatch` is required.
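A minimal sketch of that suggestion (illustrative, not the library implementation): keep each batch length-homogeneous, but randomize the order in which the batches are consumed so the per-step `max_length` no longer decays monotonically inside a megabatch.
```
import random

def shuffle_batches(sorted_indices, batch_size, seed=0):
    # sorted_indices: output of get_length_grouped_indices, i.e. indices
    # length-sorted within each megabatch. Keep the batches intact, but
    # shuffle the order in which they are visited.
    batches = [
        sorted_indices[i : i + batch_size]
        for i in range(0, len(sorted_indices), batch_size)
    ]
    random.Random(seed).shuffle(batches)
    return [idx for batch in batches for idx in batch]
```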
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20810/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20809
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20809/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20809/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20809/events
|
https://github.com/huggingface/transformers/pull/20809
| 1,501,594,062
|
PR_kwDOCUB6oc5FtlvQ
| 20,809
|
[WIP] RWKV4Neo the RNN and GPT Hybrid Model
|
{
"login": "ArEnSc",
"id": 6252325,
"node_id": "MDQ6VXNlcjYyNTIzMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6252325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArEnSc",
"html_url": "https://github.com/ArEnSc",
"followers_url": "https://api.github.com/users/ArEnSc/followers",
"following_url": "https://api.github.com/users/ArEnSc/following{/other_user}",
"gists_url": "https://api.github.com/users/ArEnSc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArEnSc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArEnSc/subscriptions",
"organizations_url": "https://api.github.com/users/ArEnSc/orgs",
"repos_url": "https://api.github.com/users/ArEnSc/repos",
"events_url": "https://api.github.com/users/ArEnSc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArEnSc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @ArEnSc !\r\nThanks for starting over the PR πͺ \r\nLet us know whenever you need help with @ArthurZucker !\r\n",
"> Hi @ArEnSc ! Thanks for starting over the PR πͺ Let us know whenever you need help with @ArthurZucker !\r\n\r\nWill do still doing some research, just figured out how the training notebook works, model executes in notebook so that's a positive",
"Update: tracing the model and came up with a state based api for the RNN inference mode on my own code base to experiment with",
"Thanks a lot for the status update! Feel free to ping whenever you need help",
"Sometimes I look at working on this a little. Here are my notes and possible tasks, started 2023-01-16.\r\n- The template appears to be from a T5 style model. The RWKV state could be the encoder hidden state (a little intuitive) and/or the past key values (normative generation). It will take some algebra and tests to add input state to the GPT training form from the RNN inference form.\r\n- [ ] The tensorflow loading code appears complicating to me. I might move it out to another file for now.\r\n- [ ] The embeddings can likely be adjusted to reflect parts \"i\" and \"ii\" of the high level outline below\r\n- [ ] It could be helpful to organize the file to retain layout similarity with blinkdlβs files.\r\n- [ ] For below outline, next step is reviewing timemix.\r\n Draft of architecture (maybe leave out optional parts to start).\r\n\r\n High level:\r\n 1. word embeddings `emb`\r\n 2. layernorm `ln0`\r\n - optional 2-axis trained position embeddings seen in training code for image modeling `pos_emb_x` `pos_emb_y`. this is converted to 1-axis `pos_emb` and used prior to ln0 in inference.\r\n 3. layers of blocks\r\n 1. layernorm `ln1`\r\n 2. timemix self attention `time_mix_k`, `time_mix_v`, `time_mix_r`, `time_first`, `time_decay`, `key`, `value`, `receptance`, `output`. `time_first` and `time_decay` are kept as float32 in inference.\r\n 3. layernorm `ln2`\r\n 4. feedforward channelmix `time_mix_k`, `time_mix_r`, `key`, `value`, `receptance` (see channelmix section below)\r\n - timemix self attention optionally replaced with feedforward channelmix for block 0 in training code\r\n - for one optional block, tiny attention `tiny_ln`, `tiny_q`, `tiny_k`, `tiny_v`, `tiny_mask` seen in training code, inference code in development\r\n - optionally inference code uses what looks like a numeric stability trick to extract a factor of 2 from the weights every 6 layere\r\n 7. layernorm `ln_out`\r\n - optional \"copy\" attention `head_q`, `head_k`, `copy_mask` then summed to head in training code, inference code in development\r\n 8. linear language modeling `head`\r\n - for training loss, blink presently has a function after cross entropy called `L2Wrap` to reduce magnitudes\r\n\r\n GPT(training) and RNN (inference) equivalence:\r\n - i think special training initialization values may be used in timemix, channelmix\r\n - for inference `time_decay` = -exp(time_decay) is factored out when loaded, but for training this is done in the forward pass.\r\n - 5 state elements per layer:\r\n - 0 = ChannelMix/FF `xx`\r\n - 1 = TimeMix/SA `xx`\r\n - 2 = `aa`\r\n - 3 = `bb`\r\n - 4 = `pp` in inference, `o` in training\r\n\r\n TimeMix: \r\n 1. the previous state is shifted into the `x` vector to make `xx`. in training this is done by \"time shifting\" with `nn.ZeroPad2d((0, 0, 1, -1))`; in single token inference it is passed as state element 1, which is then replaced by `x`.\r\n 2. linear interpolation between the old state xx and the new state x, weighting `x` by a ratio of `time_mix_k`, `time_mix_v`, and `time_mix_r` to make `xk`, `xv`, and `xr` respectivly.\r\n 3. k = key @ xk\r\n 4. v = value @ xv\r\n 5. 
sr = sigmoid(receptance @ xr) # called simply `r` in inference code\r\n - the GPT training form of this is now handed off to a hand-written cuda kernel, compiled on first run, from cuda/wkv_cuda.cu\r\n - kernel parameters: `B` = batchsize; `T` = sequence length; `C` = channel count; `_w` = `time_decay`; `_u` = `time_first`; `_k` = `k`; `_v` = `v`; `_y` = `wkv`.\r\n - i think this used to be a convolution; i'm not sure whether it still is\r\n - `o` and `no` appear to be running values for magnitude management in exponential space, initialized to -1e38; p and q are initialized to 0\r\n - `k` and `v` are indexed by thread so the `token` offset may represent different subregions. i'm not quite clear on that and should test or ask.\r\n 1. no = max(o, time_first[channel] + k[token])\r\n 2. A = exp(o - no) # this is e1 in the RNN form\r\n 3. B = exp(time_first[channel] + k[token] - no) # this is e2 in RNN\r\n 4. wkv[token] = (A * p + B * v[token]) / (A * q + B)\r\n 5. no = max(time_decay[channel] + o, k[token])\r\n 6. A = exp(time_decay[channel] + o - no)\r\n 7. B = exp(k[token] - no)\r\n 8. p = A * p + B * v[token]\r\n 9. q = A * q + B\r\n 10. o = no; token += 1\r\n - ... here would be the remaining core algebra and code inspection\r\n - WIP unified summary of wkv kernel between inference and training:\r\n 1. ww = time_first + k[token]\r\n 2. next_pp = max(pp, ww)\r\n 3. A = exp(pp - next_pp ...\r\n - rwkv = sr * wkv\r\n - return output @ rwkv\r\n\r\n ChannelMix:\r\n 1. the previous state is shifted into the `x` vector to make `xx`. in training this is done by \"time shifting\" with `nn.ZeroPad2d((0, 0, 1, -1))`; in single token inference it is passed as state element 0, which is then replaced by `x`.\r\n 3. linear interpolation between the old state xx and the new state x, weighting `x` by a ratio of `time_mix_k` and `time_mix_r` to make `xk` and `xr` respectivly.\r\n 4. r = sigmoid(receptance @ xr)\r\n 5. k = square(relu(key @ xk))\r\n 7. kv = value @ k\r\n 8. rkv = r * kv\r\n 9. return rkv\r\n \r\n- [ ] review or improve model file further\r\n",
"@ArEnSc do you need any help?",
"> @ArEnSc do you need any help?\r\n\r\nif you want to help pm me! on discord, otherwise I should have something end of week minor update",
"Hi @ArEnSc,\r\nCan you share with us your discord handle? Thanks!",
"> Hi @ArEnSc, Can you share with us your discord handle? Thanks!\r\n\r\nARENSC#5905\r\nyeah still working on it haha it will be a while ",
"Working on having GPT Encoder to generate the context and RNN mode inference and sharing weights",
"Deleted a bunch of not needed stuff",
"Added the [WIP] Label to prevent the bot from coming back π ",
"@ArEnSc Please let us know if you won't have time to finish this PR. The model is heavily requested as you may see from the linked issue, do you want us to take over this PR and finish this?",
"> @ArEnSc Please let us know if you won't have time to finish this PR. The model is heavily requested as you may see from the linked issue, do you want us to take over this PR and finish this?\r\n\r\nSure yes, sorry been busy at the hospital these days! I think it's probably important that you guys take this on =)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,684
| 1,684
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds the model from the linked issue.
Fixes #20737 (https://github.com/huggingface/transformers/issues/20737)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20809/reactions",
"total_count": 11,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 7,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20809/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20809",
"html_url": "https://github.com/huggingface/transformers/pull/20809",
"diff_url": "https://github.com/huggingface/transformers/pull/20809.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20809.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20808
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20808/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20808/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20808/events
|
https://github.com/huggingface/transformers/issues/20808
| 1,501,476,595
|
I_kwDOCUB6oc5Zfrbz
| 20,808
|
KeyError: overflow_to_sample_mapping - LayoutLMv3
|
{
"login": "Marcel1805",
"id": 12809547,
"node_id": "MDQ6VXNlcjEyODA5NTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/12809547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marcel1805",
"html_url": "https://github.com/Marcel1805",
"followers_url": "https://api.github.com/users/Marcel1805/followers",
"following_url": "https://api.github.com/users/Marcel1805/following{/other_user}",
"gists_url": "https://api.github.com/users/Marcel1805/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Marcel1805/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Marcel1805/subscriptions",
"organizations_url": "https://api.github.com/users/Marcel1805/orgs",
"repos_url": "https://api.github.com/users/Marcel1805/repos",
"events_url": "https://api.github.com/users/Marcel1805/events{/privacy}",
"received_events_url": "https://api.github.com/users/Marcel1805/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"@ArthurZucker Hi Arthur, I just saw you self-assigned this ticket, were you already able to reproduce this error? Do you need any further information/ context?\r\nThank you very much!\r\nBest regards, \r\nMarcel",
"Hey, I didn't have the chance to do so yet! If you could provide me with a minimal reproducing script, it would be really great! ",
"Hey @ArthurZucker, yes sure, please find a google colab notebook here: https://colab.research.google.com/drive/1Ce4H6r7PecaLbohqiGr8Uw-7zBn0-CVr?usp=share_link\r\n\r\nThis is a public notebook from @rajshah4 which I modified a little bit to implement the sliding window approach, which runs into the same error \"overflow_to_sample_mapping\". The example dataset CORD doesnΒ΄t contain documents longer than 512 token, thatΒ΄s why I set the max_length=100 to artificially create the need for a sliding window approach.\r\n\r\n(I cannot share the original notebook of mine because it contains sensible data.) \r\n\r\nThank you very much! ",
"Hi @ArthurZucker \r\nwas the script for reproducing the error helpful? \r\nIf you need any further information just let me know, thank you very much! ",
"Hi @Marcel1805 \r\n\r\nWhen you activate the sliding window approach a new key is added to the endcoding-dict (overflow_to_sample_mapping)\r\n\r\nA simple `encoding.pop('overflow_to_sample_mapping', None)` should do it",
"Hi @makra89 \r\nperfect, it works now as intended, thank you so much! π ",
"Thanks @makra89 π ",
"Hi, just to clarify since it took me a bit to understand where the new line is needed. The \"encoding-dict\" is what you get after applying the processor to your data, something like:\r\n\r\n```\r\n encoding = processor(\r\n images,\r\n words,\r\n boxes=boxes,\r\n word_labels=ner_tags,\r\n truncation=True,\r\n padding=\"max_length\",\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=None,\r\n stride=51,\r\n )\r\n encoding.pop(\"overflow_to_sample_mapping\")\r\n```\r\n"
] | 1,671
| 1,682
| 1,673
|
NONE
| null |
### System Info
Running on private server with no public internet access
transformers version: 4.25.1
Platform: RHEL 7.9
Python version: 3.8.12
Huggingface_hub version: 0.11.1
Torch version: 1.13.0
nvidia-cublas-cu11: 11.10.3.66
nvidia-cuda-nvrtc-cu11: 11.7.99
nvidia-cuda-runtime-cu11: 11.7.99
nvidia-cudnn-cu11: 8.5.0.96
Tensorflow version (GPU?): 2.8.2 (True)
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @NielsRogge for LayoutLMv3
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Details regarding the dataset which is used for token classification:

**Code snippet:**
```
processor = AutoProcessor.from_pretrained("<dir-to-layoutlmv3-base>", apply_ocr=False)
processor
```
**Output**
LayoutLMv3Processor:
- feature_extractor: LayoutLMv3ImageProcessor {
"apply_ocr": false,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"feature_extractor_type": "LayoutLMv3FeatureExtractor",
"image_mean": [
0.5,
0.5,
0.5
],
"image_processor_type": "LayoutLMv3ImageProcessor",
"image_std": [
0.5,
0.5,
0.5
],
"ocr_lang": null,
"resample": 2,
"rescale_factor": 0.00392156862745098,
"size": {
"height": 224,
"width": 224
},
"tesseract_config": ""
}
- tokenizer: PreTrainedTokenizerFast(name_or_path='dir-to-layoutlmv3-base', vocab_size=50265, model_max_len=512, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("< s >", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("< /s >", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'sep_token': AddedToken("< /s >", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'cls_token': AddedToken(" < s > ", rstrip=False, lstrip=False, single_word=False, normalized=True), 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True)})
```
def prepare_examples(examples):
images = examples["image"]
words = examples["value"]
boxes = examples["bbox"]
word_labels = examples["label"]
processor_kwargs = {"return_offsets_mapping": False, "return_overflowing_tokens": True, "stride": 100, "max_length": 512}
encoding = processor(images, words, boxes=boxes, word_labels=word_labels,
truncation=True, padding="max_length", **processor_kwargs)
return encoding
features = Features({
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'labels': Sequence(feature=Value(dtype='int64'))
})
eval_dataset = globaldataset["test"].map(
prepare_examples,
batched=True,
remove_columns=column_names,
features=features,
)
```
**Traceback:**
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[215], line 1
----> 1 eval_dataset = globaldataset["test"].map(
2 prepare_examples,
3 batched=True,
4 remove_columns=column_names,
5 features=features,
6 )
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:2585, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2582 disable_tqdm = not logging.is_progress_bar_enabled()
2584 if num_proc is None or num_proc == 1:
-> 2585 return self._map_single(
2586 function=function,
2587 with_indices=with_indices,
2588 with_rank=with_rank,
2589 input_columns=input_columns,
2590 batched=batched,
2591 batch_size=batch_size,
2592 drop_last_batch=drop_last_batch,
2593 remove_columns=remove_columns,
2594 keep_in_memory=keep_in_memory,
2595 load_from_cache_file=load_from_cache_file,
2596 cache_file_name=cache_file_name,
2597 writer_batch_size=writer_batch_size,
2598 features=features,
2599 disable_nullable=disable_nullable,
2600 fn_kwargs=fn_kwargs,
2601 new_fingerprint=new_fingerprint,
2602 disable_tqdm=disable_tqdm,
2603 desc=desc,
2604 )
2605 else:
2607 def format_cache_file_name(cache_file_name, rank):
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:585, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
583 self: "Dataset" = kwargs.pop("self")
584 # apply actual function
--> 585 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
586 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
587 for dataset in datasets:
588 # Remove task templates if a column mapping of the template is no longer valid
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:552, in transmit_format.<locals>.wrapper(*args, **kwargs)
545 self_format = {
546 "type": self._format_type,
547 "format_kwargs": self._format_kwargs,
548 "columns": self._format_columns,
549 "output_all_columns": self._output_all_columns,
550 }
551 # apply actual function
--> 552 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
553 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
554 # re-apply format to the output
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:2999, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2997 writer.write_table(batch)
2998 else:
-> 2999 writer.write_batch(batch)
3000 if update_data and writer is not None:
3001 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_writer.py:533, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)
526 cols = (
527 [col for col in self.schema.names if col in batch_examples]
528 + [col for col in batch_examples.keys() if col not in self.schema.names]
529 if self.schema
530 else batch_examples.keys()
531 )
532 for col in cols:
--> 533 col_type = features[col] if features else None
534 col_try_type = try_features[col] if try_features is not None and col in try_features else None
535 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
KeyError: 'overflow_to_sample_mapping'
**Changing `return_offsets_mapping` to True with everything else unchanged returns another error:**
```
processor_kwargs = {"return_offsets_mapping": True, "return_overflowing_tokens": True, "stride": 100, "max_length": 512}
# Run the processor to create the encoding
encoding = processor(images, words, boxes=boxes, word_labels=word_labels,
truncation=True, padding="max_length", **processor_kwargs)
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[219], line 1
----> 1 eval_dataset = globaldataset["test"].map(
2 prepare_examples,
3 batched=True,
4 remove_columns=column_names,
5 features=features,
6 )
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:2585, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2582 disable_tqdm = not logging.is_progress_bar_enabled()
2584 if num_proc is None or num_proc == 1:
-> 2585 return self._map_single(
2586 function=function,
2587 with_indices=with_indices,
2588 with_rank=with_rank,
2589 input_columns=input_columns,
2590 batched=batched,
2591 batch_size=batch_size,
2592 drop_last_batch=drop_last_batch,
2593 remove_columns=remove_columns,
2594 keep_in_memory=keep_in_memory,
2595 load_from_cache_file=load_from_cache_file,
2596 cache_file_name=cache_file_name,
2597 writer_batch_size=writer_batch_size,
2598 features=features,
2599 disable_nullable=disable_nullable,
2600 fn_kwargs=fn_kwargs,
2601 new_fingerprint=new_fingerprint,
2602 disable_tqdm=disable_tqdm,
2603 desc=desc,
2604 )
2605 else:
2607 def format_cache_file_name(cache_file_name, rank):
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:585, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
583 self: "Dataset" = kwargs.pop("self")
584 # apply actual function
--> 585 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
586 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
587 for dataset in datasets:
588 # Remove task templates if a column mapping of the template is no longer valid
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:552, in transmit_format.<locals>.wrapper(*args, **kwargs)
545 self_format = {
546 "type": self._format_type,
547 "format_kwargs": self._format_kwargs,
548 "columns": self._format_columns,
549 "output_all_columns": self._output_all_columns,
550 }
551 # apply actual function
--> 552 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
553 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
554 # re-apply format to the output
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
482 # Update fingerprint of in-place transforms + update in-place history of transforms
484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:2999, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2997 writer.write_table(batch)
2998 else:
-> 2999 writer.write_batch(batch)
3000 if update_data and writer is not None:
3001 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_writer.py:533, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size)
526 cols = (
527 [col for col in self.schema.names if col in batch_examples]
528 + [col for col in batch_examples.keys() if col not in self.schema.names]
529 if self.schema
530 else batch_examples.keys()
531 )
532 for col in cols:
--> 533 col_type = features[col] if features else None
534 col_try_type = try_features[col] if try_features is not None and col in try_features else None
535 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
KeyError: 'offset_mapping'
**Changing the value of the `truncation` parameter to False does not alter the KeyErrors.**
### Expected behavior
I'd expect a dataset with a few more entries than before, because for every entry with more than 512 tokens a second entry with the overflowing tokens + stride overlap would have been created by the sliding window approach.
Regarding this `overflow_to_sample_mapping` KeyError I found one GitHub issue, https://github.com/huggingface/transformers/issues/18726, which reports a bug for LayoutXLM with the non-fast tokenizer. But in my case, as shown in the code snippet above, the processor loads the PreTrainedTokenizerFast for LayoutLMv3.
Thank you very much!
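**Update (resolution from the comment thread):** with `return_overflowing_tokens=True` the fast tokenizer adds an `overflow_to_sample_mapping` key (and `offset_mapping` when `return_offsets_mapping=True`) to the encoding. These keys are not part of the `Features` schema passed to `map`, hence the KeyError. Popping them before returning avoids it, e.g.:
```
def prepare_examples(examples):
    encoding = processor(
        examples["image"],
        examples["value"],
        boxes=examples["bbox"],
        word_labels=examples["label"],
        truncation=True,
        padding="max_length",
        max_length=512,
        stride=100,
        return_overflowing_tokens=True,
    )
    # Drop the keys that are not part of the Features schema.
    encoding.pop("overflow_to_sample_mapping", None)
    encoding.pop("offset_mapping", None)  # present only with return_offsets_mapping=True
    return encoding
```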
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20808/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20808/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20807
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20807/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20807/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20807/events
|
https://github.com/huggingface/transformers/issues/20807
| 1,501,221,981
|
I_kwDOCUB6oc5ZetRd
| 20,807
|
How to finetune MBART on a single language?
|
{
"login": "BakingBrains",
"id": 51019420,
"node_id": "MDQ6VXNlcjUxMDE5NDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/51019420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakingBrains",
"html_url": "https://github.com/BakingBrains",
"followers_url": "https://api.github.com/users/BakingBrains/followers",
"following_url": "https://api.github.com/users/BakingBrains/following{/other_user}",
"gists_url": "https://api.github.com/users/BakingBrains/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakingBrains/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakingBrains/subscriptions",
"organizations_url": "https://api.github.com/users/BakingBrains/orgs",
"repos_url": "https://api.github.com/users/BakingBrains/repos",
"events_url": "https://api.github.com/users/BakingBrains/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakingBrains/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) for question like this, the whole community will be there to help! We keep issues for bugs and feature requests only :-)",
"@sgugger Thank you\r\nI asked the same in forum, but no response there. If anyone know here, can suggest.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
Hello,
Can anyone please suggest how I can finetune MBART for a specific language?
I found this Asian BART repo: https://github.com/hyunwoongko/asian-bart,
where they adapted [mBart](https://arxiv.org/abs/2001.08210) to a single language via embedding layer pruning.
I want to do the same.
I am unable to find any good resources on this. Any suggestions?
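From skimming that repo, the pruning trick seems to boil down to roughly the following (untested sketch; `used_token_ids` and everything else here is illustrative):
```
import torch
from transformers import MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
used_token_ids = range(10000)  # placeholder: ids your corpus + special tokens use
kept_ids = torch.tensor(sorted(set(used_token_ids)))

# Slice the shared embedding down to the kept vocabulary.
old_weight = model.model.shared.weight.data
new_emb = torch.nn.Embedding(len(kept_ids), old_weight.size(1))
new_emb.weight.data.copy_(old_weight[kept_ids])

model.model.shared = new_emb
model.model.encoder.embed_tokens = new_emb
model.model.decoder.embed_tokens = new_emb
model.lm_head = torch.nn.Linear(old_weight.size(1), len(kept_ids), bias=False)
model.lm_head.weight = new_emb.weight  # mBART ties lm_head to the embeddings
model.register_buffer("final_logits_bias", model.final_logits_bias[:, kept_ids])
model.config.vocab_size = len(kept_ids)
# The tokenizer would need to be remapped to the new, smaller id space as well.
```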
Thanks and Regards.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20807/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20806
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20806/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20806/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20806/events
|
https://github.com/huggingface/transformers/pull/20806
| 1,501,044,049
|
PR_kwDOCUB6oc5Fr2l7
| 20,806
|
Add AWS Neuron torchrun support
|
{
"login": "jeffhataws",
"id": 56947987,
"node_id": "MDQ6VXNlcjU2OTQ3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffhataws",
"html_url": "https://github.com/jeffhataws",
"followers_url": "https://api.github.com/users/jeffhataws/followers",
"following_url": "https://api.github.com/users/jeffhataws/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions",
"organizations_url": "https://api.github.com/users/jeffhataws/orgs",
"repos_url": "https://api.github.com/users/jeffhataws/repos",
"events_url": "https://api.github.com/users/jeffhataws/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffhataws/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@jeffhataws could you maybe please explain a bit more about how users would benefit from that? I quickly checked the [HF tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html) and with the change you propose users would still need to modify the scripts, e.g., for \r\n```python\r\n# Fixup to enable distributed training with XLA\r\nfrom packaging import version\r\nfrom transformers import __version__\r\nif version.parse(__version__) < version.parse(\"4.20.0\"):\r\n Trainer._wrap_model = lambda self, model, training=True: model\r\nelse:\r\n Trainer._wrap_model = lambda self, model, training=True, dataloader=None: model\r\n\r\n# Workaround for NaNs seen with transformers version >= 4.21.0\r\n# https://github.com/aws-neuron/aws-neuron-sdk/issues/593\r\nif os.environ.get(\"XLA_USE_BF16\") or os.environ.get(\"XLA_DOWNCAST_BF16\"):\r\n transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16\r\n```\r\n",
"> Thanks for adding this new integration. The test won't be run on our CI since `torch_neuroncore` is not installed. Is it possible to install it in regular images or do we need to be on an AWS instance>\r\n\r\nYes for this test we will need Trainium instance. Over time, once https://github.com/pytorch/xla/pull/3609 is released, we can make it more generic for GPU/XLA. For now, Neuron team will test this. Test is currently passing on Trainium instance.",
"> @jeffhataws could you maybe please explain a bit more about how users would benefit from that? I quickly checked the [HF tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html) and with the change you propose users would still need to modify the scripts, e.g., for\r\n> \r\n> ```python\r\n> # Fixup to enable distributed training with XLA\r\n> from packaging import version\r\n> from transformers import __version__\r\n> if version.parse(__version__) < version.parse(\"4.20.0\"):\r\n> Trainer._wrap_model = lambda self, model, training=True: model\r\n> else:\r\n> Trainer._wrap_model = lambda self, model, training=True, dataloader=None: model\r\n> \r\n> # Workaround for NaNs seen with transformers version >= 4.21.0\r\n> # https://github.com/aws-neuron/aws-neuron-sdk/issues/593\r\n> if os.environ.get(\"XLA_USE_BF16\") or os.environ.get(\"XLA_DOWNCAST_BF16\"):\r\n> transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16\r\n> ```\r\n\r\nThe first workaround is for missing DDP support which will be available in Neuron's PyTorch-XLA version 1.13 (future release). The second workaround is already fixed in transformers==4.25.1 by https://github.com/huggingface/transformers/pull/20562.",
"Thanks for the precisions. Let's wait until the release of Neuron's PyTorch-XLA version 1.13 to merge this, then?",
"> Thanks for the precisions. Let's wait until the release of Neuron's PyTorch-XLA version 1.13 to merge this, then?\r\n\r\n@sgugger since we already have a workaround for DDP wrapper by overwriting the _wrap_model function, we can actually merge this first. The reason is that 1) we want it in for next transformer release ahead of 1.13, and 2) I will need this change to post another PR for the default compiler flag for transformer model type. Let me know if this is acceptable.",
"Thanks for your patience on this."
] | 1,671
| 1,674
| 1,674
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds torchrun support for the AWS Neuron SDK.
The existing [HF tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html) for the Neuron SDK requires users to modify the HF example scripts (e.g. run_glue.py). This change helps minimize the modifications required.
It will require the upcoming AWS Neuron PyTorch 1.13 support.
This is an update to https://github.com/huggingface/transformers/pull/19907 .
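For illustration, a minimal sketch of the workflow this enables — launching a stock example script through torchrun on a Trainium instance. All flags and values below are illustrative assumptions, not taken from this PR:
```python
# torchrun is a thin wrapper around torch.distributed.run, so the shell
# command `torchrun --nproc_per_node=2 run_glue.py ...` can be driven from
# Python like this:
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "torch.distributed.run",
        "--nproc_per_node=2",
        "run_glue.py",
        "--model_name_or_path", "bert-base-cased",
        "--task_name", "mrpc",
        "--do_train",
        "--output_dir", "/tmp/mrpc",
    ],
    check=True,
)
```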
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20806/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20806",
"html_url": "https://github.com/huggingface/transformers/pull/20806",
"diff_url": "https://github.com/huggingface/transformers/pull/20806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20806.patch",
"merged_at": 1674058880000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20805
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20805/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20805/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20805/events
|
https://github.com/huggingface/transformers/issues/20805
| 1,500,478,390
|
I_kwDOCUB6oc5Zb3u2
| 20,805
|
Add Object Detection task tutorial to the transformers documentation
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This looks great to me, looking forward to seeing the guide and let me know if there is anything I can help with! π \r\n\r\nI would recommend using DETR since it is a lot more popular than YOLOS (263k downloads versus 28.4k).",
"@stevhliu That's a good reason to go with DETR, thanks for the tip! ",
"The issue is fixed with https://github.com/huggingface/transformers/pull/20925"
] | 1,671
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
Currently, the transformers documentation has two how-to guides for CV tasks - image classification and semantic segmentation. Transformers support other CV tasks, such as object detection. This issue describes a proposal to add a how-to guide for object detection similar in structure to the existing guides to help the community members get started with object detection on their own data using the transformers library.
Hereβs an approximate outline for the page:
1. Intro: what object detection is, including the video from https://huggingface.co/tasks/object-detection
2. Fine-tuning either DETR or YOLOS on a new dataset (TODO: decide which one to pick).
2.1. Loading a dataset
2.2. Preprocessing the dataset
2.3. Setting up an evaluation metric
2.4. Training and pushing a model to the hub
3. Using the fine-tuned model for inference.
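For a concrete starting point, a hedged sketch of the setup for step 2 (the checkpoint name is illustrative — the guide would settle on DETR or YOLOS):
```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "facebook/detr-resnet-50"  # illustrative pretrained checkpoint
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)
```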
Some existing notebooks for reference:
- [Fine-tuning YOLOS on a custom dataset for object detection](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/YOLOS/Fine_tuning_YOLOS_for_object_detection_on_custom_dataset_(balloon).ipynb)
- [Fine-tuning DETR on a custom dataset for object detection](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb)
Related [WIP] PR: https://github.com/huggingface/transformers/pull/20874
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20805/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20805/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20804
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20804/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20804/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20804/events
|
https://github.com/huggingface/transformers/pull/20804
| 1,500,465,420
|
PR_kwDOCUB6oc5Fp4y4
| 20,804
|
Generate: post-generate config doctest fix
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Just a question, so if users do things in the old way like\r\n\r\n```\r\nmodel.config.pad_token_id = model.config.eos_token_id\r\n```\r\nthey might get different results before/after the generation config PR ..?",
"_The documentation is not available anymore as the PR was closed or merged._",
"> Just a question, so if users do things in the old way like\r\n> \r\n> ```\r\n> model.config.pad_token_id = model.config.eos_token_id\r\n> ```\r\n> \r\n> they might get different results before/after the generation config PR ..?\r\n\r\n`generate()` supports control from ad hoc model config changes (it has an extra check and handles differences [here](https://github.com/huggingface/transformers/blob/26dd041c6e45379141302e2d293ab4cd9cf805d4/src/transformers/generation/utils.py#L1131)), for retrocompatibility and ease of use. It is deprecated and will be removed soon.\r\n\r\nIndividual generation methods do not have this check, so they are not supporting control from ad hoc model config changes. The doctests in `main` are failing for this reason. It means that fixing the doctests can be done in two ways:\r\n1. Add the same check to all individual generation methods\r\n2. [current implementation in the PR] Change the doctest itself so they don't rely on ad hoc model config changes\r\n\r\nI decided to follow 2. since we are deprecating it soon anyways AND calling the methods directly is an advanced feature (users should not be relying on side effects from the model config to support advanced functionality, it's a recipe for disaster π ). \r\n\r\nWDYT? (cc @sgugger)",
"Agreed with option 2!"
] | 1,671
| 1,671
| 1,671
|
MEMBER
| null |
# What does this PR do?
Fixes doctests that were broken as a result of the `generation_config` PR merge.
Note: the failing pipeline test was fixed by adding a missing field to `gpt2`'s `generation_config.json` (which was created before these recent `generation_config` changes). See [this hub commit](https://huggingface.co/gpt2/commit/e7da7f221d5bf496a48136c0cd264e630fe9fcc8).
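For context, a minimal sketch of the pattern the fixed doctests now follow — passing generation options to `generate()` instead of mutating `model.config` ad hoc (model name illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello", return_tensors="pt")
# Pass the option directly rather than setting model.config.pad_token_id.
outputs = model.generate(**inputs, pad_token_id=tokenizer.eos_token_id, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```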
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20804/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20804",
"html_url": "https://github.com/huggingface/transformers/pull/20804",
"diff_url": "https://github.com/huggingface/transformers/pull/20804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20804.patch",
"merged_at": 1671650326000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20803
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20803/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20803/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20803/events
|
https://github.com/huggingface/transformers/pull/20803
| 1,500,351,067
|
PR_kwDOCUB6oc5Fpfot
| 20,803
|
[`Vision`] [Refactor] Initialize weights on the correct place
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,677
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR forces some modules to be initialised in the correct place (i.e. in the `_init_weights` method).
With more vision models being added, contributors are copying the practice of initialising some weights outside `_init_weights`. I think we should centralize weight initialisation in the `_init_weights` method, by applying this to the most-copied / most-downloaded models.
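For reference, a minimal sketch of the centralized pattern (module types and constants are illustrative; real models define this on their `PreTrainedModel` subclass):
```python
import torch.nn as nn

def _init_weights(self, module):
    # All weight initialisation lives here, not inside the submodules.
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        module.weight.data.normal_(mean=0.0, std=0.02)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)
```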
Related:
- https://github.com/huggingface/transformers/pull/20716#discussion_r1049764368
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20803/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20803",
"html_url": "https://github.com/huggingface/transformers/pull/20803",
"diff_url": "https://github.com/huggingface/transformers/pull/20803.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20803.patch",
"merged_at": 1671442635000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20802
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20802/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20802/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20802/events
|
https://github.com/huggingface/transformers/pull/20802
| 1,500,211,825
|
PR_kwDOCUB6oc5FpBFr
| 20,802
|
Use `baddbmm` to reduce the number of kernel calls when running T5
|
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Arf turns out this doesn't work without having extra copies. Closing as I'm not sure if it's worth it."
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Reduce the number of kernel calls by using `baddbmm` and the built-in `F.softmax(..., dtype=torch.float)`.
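A minimal sketch of the fusion idea (shapes illustrative): `baddbmm` computes `beta * input + alpha * (batch1 @ batch2)` in a single kernel, replacing a separate `bmm` plus add, while `F.softmax(..., dtype=torch.float)` folds the float32 upcast into the softmax call:
```python
import torch
import torch.nn.functional as F

b, n, d = 2, 4, 8
bias = torch.zeros(b, n, n)         # e.g. T5's relative position bias
q = torch.randn(b, n, d)
k = torch.randn(b, d, n)

scores = torch.baddbmm(bias, q, k)  # one kernel: bias + q @ k
probs = F.softmax(scores, dim=-1, dtype=torch.float)  # softmax computed in float32
```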
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20802/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20802",
"html_url": "https://github.com/huggingface/transformers/pull/20802",
"diff_url": "https://github.com/huggingface/transformers/pull/20802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20802.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/20801
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20801/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20801/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20801/events
|
https://github.com/huggingface/transformers/pull/20801
| 1,500,207,885
|
PR_kwDOCUB6oc5FpAOA
| 20,801
|
Add script to convert T5X T5 (v1.0 and v1.1) checkpoints to PyTorch
|
{
"login": "bastings",
"id": 154337,
"node_id": "MDQ6VXNlcjE1NDMzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/154337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bastings",
"html_url": "https://github.com/bastings",
"followers_url": "https://api.github.com/users/bastings/followers",
"following_url": "https://api.github.com/users/bastings/following{/other_user}",
"gists_url": "https://api.github.com/users/bastings/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bastings/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bastings/subscriptions",
"organizations_url": "https://api.github.com/users/bastings/orgs",
"repos_url": "https://api.github.com/users/bastings/repos",
"events_url": "https://api.github.com/users/bastings/events{/privacy}",
"received_events_url": "https://api.github.com/users/bastings/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I could use some clarification on the following: I'm missing a configuration option for T5 for the 1.0/original T5 checkpoints to have an `lm_head` that shares parameters with the token embeddings.\r\n\r\nCurrently there is `T5Model` (which returns hidden states) and `T5ForConditionalGeneration` (which returns logits, used for T5 v1.1 models among others). The latter assumes there is an `lm_head` layer, but for the 1.0 checkpoints there is no such thing, it reuses the embedding matrix to map to the vocab space.",
"Hey @bastings, when there is no `lm_head` you have to set the `tie_word_embeddings` to `True` ",
"I added the instructions to the top docstring. Maybe it's ready? :-)",
"A last nit and we can merge! Thanks a lot for bearing with me π ",
"Thanks! Committed your suggestion :)",
"Once the quality tests are green (requires `make fixup`) we can merge!",
"Oh looks like the suggestion made it fail ;)",
"Ah, sorry then ahha, I guess the ` make style`will correct that π
",
"> Ah, sorry then ahha, I guess the ` make style`will correct that π
\r\n\r\nFixed! :)"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds a script that can convert Google T5X (Flax) T5 and T5-v1.1 checkpoints into PyTorch checkpoints.
This allows users to convert non-standard checkpoints that have been trained with T5X and use them with the Transformers library in PyTorch.
Usage:
- In case you don't have `gsutil`, install according to https://cloud.google.com/storage/docs/gsutil_install
- Native T5X checkpoints are at https://github.com/google-research/t5x/blob/main/docs/models.md#t5-11-checkpoints. Example:
`gsutil -m cp -r gs://t5-data/pretrained_models/t5x/t5_1_1_small $HOME/`
- Create a corresponding `config.json` for the downloaded checkpoint. Often one already exists, e.g. here we can use https://huggingface.co/google/t5-v1_1-small/blob/main/config.json
- Finally `python3 convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path=$HOME/t5_1_1_small --config_file=config.json --pytorch_dump_path=$HOME/t5_1_1_small_pt`
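After conversion, the checkpoint loads like any other local model — a hedged sketch, with the path following the steps above:
```python
import os

from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained(
    os.path.expanduser("~/t5_1_1_small_pt")
)
```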
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Discussed with @thomwolf .
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? The code is tested but not part of this PR, since the test requires manually downloading the T5X checkpoints from a cloud bucket.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
@sanchit-gandhi
@ArthurZucker
@younesbelkada
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20801/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20801",
"html_url": "https://github.com/huggingface/transformers/pull/20801",
"diff_url": "https://github.com/huggingface/transformers/pull/20801.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20801.patch",
"merged_at": 1671802607000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20800
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20800/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20800/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20800/events
|
https://github.com/huggingface/transformers/pull/20800
| 1,500,172,380
|
PR_kwDOCUB6oc5Fo4fN
| 20,800
|
Fix whisper export
|
{
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @ArthurZucker ",
"Hey, would you mind adding a bit more context? What is the issue related to current export? ",
"> Hey, would you mind adding a bit more context? What is the issue related to current export?\r\n\r\nHi @ArthurZucker I have updated the PR description.",
"@ArthurZucker could you please merge this. I do not have the permissions. Thanks!"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the export for the Whisper model.
The current export for Whisper fails with the error `Invalid Feed Input Name:past_key_values.3.encoder.value`, because the cross-attention key/values are not exported as inputs in the ONNX model after the new condition introduced in [transformers@97a51](https://github.com/huggingface/transformers/commit/97a51b0c7d483cdf13ea878a987f9aa1c9eecc91).
The error stems from incorrect dummy input generation for the cross-attention key/values during export; this PR fixes that.
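For orientation, a hedged sketch of the seq2seq cache layout behind the failing input name (all dimensions illustrative): each decoder layer contributes self-attention and cross-attention key/value pairs, flattened into ONNX inputs named `past_key_values.{layer}.{decoder|encoder}.{key|value}`, and the cross-attention dummies must be sized with the encoder sequence length — which appears to be what the dummy-input fix corrects:
```python
import torch

batch, heads, head_dim = 1, 6, 64
encoder_len, decoder_len = 1500, 4  # Whisper encodes 30 s of audio into 1500 frames
num_layers = 4

# One tuple per decoder layer: (self-attn key, self-attn value,
# cross-attn key, cross-attn value).
past_key_values = tuple(
    (
        torch.zeros(batch, heads, decoder_len, head_dim),
        torch.zeros(batch, heads, decoder_len, head_dim),
        torch.zeros(batch, heads, encoder_len, head_dim),  # encoder length here
        torch.zeros(batch, heads, encoder_len, head_dim),
    )
    for _ in range(num_layers)
)
```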
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@lewtun
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20800/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20800",
"html_url": "https://github.com/huggingface/transformers/pull/20800",
"diff_url": "https://github.com/huggingface/transformers/pull/20800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20800.patch",
"merged_at": 1671636523000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20799
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20799/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20799/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20799/events
|
https://github.com/huggingface/transformers/issues/20799
| 1,500,044,974
|
I_kwDOCUB6oc5ZaN6u
| 20,799
|
ImportError: cannot import name 'AutoModelForMaskedLM' from 'transformers' (unknown location)
|
{
"login": "hailong23-jin",
"id": 35919144,
"node_id": "MDQ6VXNlcjM1OTE5MTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/35919144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hailong23-jin",
"html_url": "https://github.com/hailong23-jin",
"followers_url": "https://api.github.com/users/hailong23-jin/followers",
"following_url": "https://api.github.com/users/hailong23-jin/following{/other_user}",
"gists_url": "https://api.github.com/users/hailong23-jin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hailong23-jin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hailong23-jin/subscriptions",
"organizations_url": "https://api.github.com/users/hailong23-jin/orgs",
"repos_url": "https://api.github.com/users/hailong23-jin/repos",
"events_url": "https://api.github.com/users/hailong23-jin/events{/privacy}",
"received_events_url": "https://api.github.com/users/hailong23-jin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @medlen \r\nThanks for raising the issue! \r\nYour installation might be broken, as I managed to import `AutoModelForMaskedLM` with `transformers==4.4.2` using the same hardware specs as you:\r\n```\r\n>>> import transformers\r\n>>> transformers.__version__\r\n'4.4.2'\r\n>>> from transformers import AutoModelForMaskedLM\r\n>>>\r\n```\r\nCan you double check your `transformers` version and let us know?",
"Thanks for your quick reply. This is my `transformers` version :\r\n\r\n`pip show transformers`\r\n\r\n```\r\nName: transformers\r\nVersion: 4.4.2\r\nSummary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch\r\nHome-page: https://github.com/huggingface/transformers\r\nAuthor: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Patrick von Platen, Sylvain Gugger, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors\r\nAuthor-email: thomas@huggingface.co\r\nLicense: Apache\r\nLocation: /home/jinhl/anaconda3/envs/py38/lib/python3.8/site-packages\r\nRequires: tokenizers, packaging, filelock, tqdm, numpy, regex, sacremoses, requests\r\nRequired-by: simpletransformers\r\n```\r\n\r\n```\r\nPython 3.8.5 (default, Sep 4 2020, 07:30:14) \r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n>>> transformers.__version__\r\n'4.4.2'\r\n>>> from transformers import AutoModelForMaskedLM\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nImportError: cannot import name 'AutoModelForMaskedLM' from 'transformers' (unknown location)\r\n>>> \r\n```\r\n\r\nAdditionally, my GPU device is GeForce 4090.",
"Are you using conda or python venv? \r\ncan you run `which python` and `which pip`?",
"I am using conda. These are their locations:\r\n\r\n```\r\nwhich conda \r\n/home/jinhl/anaconda3/condabin/conda\r\nwhich pip\r\n/home/jinhl/anaconda3/envs/py38/bin/pip\r\nwhich python\r\n/home/jinhl/anaconda3/envs/py38/bin/python\r\n```",
"There is maybe something wrong with my python environment. \r\nI create a new conda environment with python=3.8, and install `transformers==4.4.2`. \r\nRepeating the above steps.\r\nIt raise a new version invalid error:\r\n\r\n```\r\n>>> import transformers\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jinhl/PythonWorkspace/transformers/__init__.py\", line 43, in <module>\r\n from . import dependency_versions_check\r\n File \"/home/jinhl/PythonWorkspace/transformers/dependency_versions_check.py\", line 40, in <module>\r\n require_version_core(deps[pkg])\r\n File \"/home/jinhl/PythonWorkspace/transformers/utils/versions.py\", line 94, in require_version_core\r\n return require_version(requirement, hint)\r\n File \"/home/jinhl/PythonWorkspace/transformers/utils/versions.py\", line 85, in require_version\r\n if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)):\r\n File \"/home/jinhl/anaconda3/envs/py38-test/lib/python3.8/site-packages/packaging/version.py\", line 52, in parse\r\n return Version(version)\r\n File \"/home/jinhl/anaconda3/envs/py38-test/lib/python3.8/site-packages/packaging/version.py\", line 197, in __init__\r\n raise InvalidVersion(f\"Invalid version: '{version}'\")\r\npackaging.version.InvalidVersion: Invalid version: '0.10.1,<0.11'\r\n>>> \r\n```\r\n\r\nafter my test, this is because tokenizers version invalid. \r\n\r\nin: https://github.com/huggingface/transformers/blob/v4.4.2/src/transformers/utils/versions.py#L85 \r\n```\r\nif want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)):\r\n raise pkg_resources.VersionConflict(\r\n f\"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}\"\r\n )\r\n```\r\nfor `tokenizers` , `got_ver=0.10.3` `want_ver=0.10.1,<0.11`, the later cause the error. I am not sure it is a special case for my machine or a bug.\r\n\r\nAfter I comment out the version check code, `AutoModelForMaskedLM ` can be correctly imported.\r\n\r\n",
"+1",
"+1",
"+1\r\n",
"You can keep posting `+1`s without any useful information, it will definitely help us fix the issue.",
"I encountered the same error: `packaging.version.InvalidVersion: Invalid version: '0.10.1,<0.11'`\r\n\r\nThis error seems to be caused by a problem in handling multiple requirements, which has been resolved by a change in [this PR](https://github.com/huggingface/transformers/pull/11110). Therefore, the problem can be resolved by changing the version of `transfomers` to 4.6 or higher.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,676
| 1,676
|
NONE
| null |
### System Info
Environment:
```
python 3.8
transformers==4.4.2
ubuntu 20.04
cuda 11.3
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11 Driver Version: 525.60.11 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | Off |
| 36% 57C P2 252W / 450W | 18490MiB / 24564MiB | 91% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1021 G /usr/lib/xorg/Xorg 35MiB |
| 0 N/A N/A 1775 G /usr/lib/xorg/Xorg 72MiB |
| 0 N/A N/A 1924 G /usr/bin/gnome-shell 185MiB |
| 0 N/A N/A 14033 C python 18176MiB |
+-----------------------------------------------------------------------------+
```
`pip install transformers==4.4.2`
```
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoModelForMaskedLM
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'AutoModelForMaskedLM' from 'transformers' (unknown location)
>>>
```
Is this a bug? When I install the latest version `pip install transformers==4.25.1 `, it shows:
```
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoModelForMaskedLM
2022-12-16 19:08:29.243002: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/lib/
2022-12-16 19:08:29.243021: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
```
Why does the latest version need CUDA 10.1?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
step 1: `pip install transformers==4.4.2`
step 2: open the terminal
step 3: python
step 4: from transformers import AutoModelForMaskedLM
### Expected behavior
I want to know how to deal with this problem; I cannot use the `transformers` package and its functions.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20799/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20798
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20798/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20798/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20798/events
|
https://github.com/huggingface/transformers/pull/20798
| 1,499,917,399
|
PR_kwDOCUB6oc5Fn_7V
| 20,798
|
Fix object detection2
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks, looks great!\r\n> \r\n> Since when is LayoutLM supported by the object detection pipeline? :D\r\n\r\nhttps://github.com/huggingface/transformers/pull/20143"
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/20776 better.
- Reverts the previous PR.
- The previous model was using a mix of LayoutLM and LayoutLMv2, leading to a bit of madness. The previous fix was wrong
because it made other pipelines try to load the feature extractor, which might not have existed :(.
This PR fixes it differently, by using another new model and fixing its config too.
`Narsil/layoutlmv3-finetuned-funsd` is a fork of `nielsr/layoutlmv3-finetuned-funsd` with the `tokenizer_config.json` fixed (to not use a Roberta tokenizer, but its proper LayoutLMv3Tokenizer).
Also modified the README.md to include examples in the widget, and forced the pipeline_tag to be `object-detection`.
https://huggingface.co/Narsil/layoutlmv3-finetuned-funsd

<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20798/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20798",
"html_url": "https://github.com/huggingface/transformers/pull/20798",
"diff_url": "https://github.com/huggingface/transformers/pull/20798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20798.patch",
"merged_at": 1671193537000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20797
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20797/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20797/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20797/events
|
https://github.com/huggingface/transformers/issues/20797
| 1,499,907,483
|
I_kwDOCUB6oc5ZZsWb
| 20,797
|
MaskedLM models doesn't output CLS and weights not initialized in MaskedLM models
|
{
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! This is expected, you should be using the following : \r\n```python \r\nfrom transformers import pipeline\r\nunmasker = pipeline('fill-mask', model='bert-base-uncased')\r\nunmasker(\"Hello my name [MASK] Jhon, how can I [MASK] you?\")\r\n```\r\n\r\n```python \r\n[[{'score': 0.8831212520599365,\r\n 'sequence': '[CLS] Hello my name is Jhon, how can I [MASK] you? [SEP]',\r\n 'token': 1110,\r\n 'token_str': 'is'},\r\n {'score': 0.03171379491686821,\r\n 'sequence': '[CLS] Hello my name, Jhon, how can I [MASK] you? [SEP]',\r\n 'token': 117,\r\n 'token_str': ','},\r\n {'score': 0.020678386092185974,\r\n 'sequence': '[CLS] Hello my name? Jhon, how can I [MASK] you? [SEP]',\r\n 'token': 136,\r\n 'token_str': '?'},\r\n {'score': 0.013670953921973705,\r\n 'sequence': '[CLS] Hello my name am Jhon, how can I [MASK] you? [SEP]',\r\n 'token': 1821,\r\n 'token_str': 'am'},\r\n {'score': 0.009090826846659184,\r\n 'sequence': '[CLS] Hello my name was Jhon, how can I [MASK] you? [SEP]',\r\n 'token': 1108,\r\n 'token_str': 'was'}],\r\n [{'score': 0.9777076244354248,\r\n 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I help you? [SEP]',\r\n 'token': 1494,\r\n 'token_str': 'help'},\r\n {'score': 0.006017779931426048,\r\n 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I meet you? [SEP]',\r\n 'token': 2283,\r\n 'token_str': 'meet'},\r\n {'score': 0.00487362127751112,\r\n 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I reach you? [SEP]',\r\n 'token': 2519,\r\n 'token_str': 'reach'},\r\n {'score': 0.0022672810591757298,\r\n 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I be you? [SEP]',\r\n 'token': 1129,\r\n 'token_str': 'be'},\r\n {'score': 0.0018145894864574075,\r\n 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I call you? [SEP]',\r\n 'token': 1840,\r\n 'token_str': 'call'}]]\r\n```\r\nWhere you have a list of length `2`, with the predictions and their different scores. \r\nThis can be seen in the model cards' how to use. ",
"@ArthurZucker Thanks for your reply, but my goal here is not to extract the predicted tokens, I would like to extract the raw model logits to calculate loss value with respect to the correct labels, for example using `torch.nn.CrossEntropyLoss`, how can I achieve this without incurring in those wrong logits? \r\n\r\nIf I use the bare model without a pipeline I get an unexpected loss value, so maybe I'm using this wrong:\r\n```python\r\ntext_original = \"Hello my name is Jhon, how can I help you?\"\r\ntext = \"Hello my name [MASK] Jhon, how can I [MASK] you?\"\r\ninputs = tokenizer(text, return_tensors='pt')\r\nlabels = tokenizer(text_original, return_tensors='pt')['input_ids']\r\nout = model(**inputs, labels=labels)\r\nout.loss\r\n```\r\n```\r\ntensor(3.2894, grad_fn=<NllLossBackward0>)\r\n```\r\nThanks ",
"Any suggestion on how to use the model to extract the correct loss value? Thanks",
"Hey, as mentioned in the [model's documentation](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertForMaskedLM.forward.example), you should be using the following : \r\n```python\r\n>>> labels = tokenizer(\"The capital of France is Paris.\", return_tensors=\"pt\")[\"input_ids\"]\r\n# mask labels of non-[MASK] tokens\r\n>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)\r\n\r\n>>> outputs = model(**inputs, labels=labels)\r\n>>> round(outputs.loss.item(), 2)\r\n0.81\r\n```\r\nTell me if this is not fixing the issue π ",
"Thanks a lot, I also used to not mask all the non-masked tokens in the labels during training, this would give me losses in the range of 0.10 - 0.20 while training my MaskedLM, I don't how this was affecting my models, anyway I updated the code accordingly to your suggestion, leaving only the masked tokens in the labels, now the loss is up to 1.6ish during training, so maybe this will produce better gradients to train the network? I also was able to evaluate my models correctly."
] | 1,671
| 1,671
| 1,671
|
NONE
| null |
### System Info
- `transformers` version: 4.25.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.31
- Python version: 3.9.15
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
When I try to use a MaskedLM model, some CLS weights are not initialized; thus, when running text through the model, CLS and other special tokens such as SEP will not be predicted correctly, producing misleading loss values when evaluating the models.
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
model_name = 'bert-base-cased'
model = AutoModelForMaskedLM.from_pretrained(model_name)
```
Gives:
> Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
> - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
> - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
>
So then when I run:
```python
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "Hello my name [MASK] Jhon, how can I [MASK] you?"
inputs = tokenizer(text, return_tensors='pt')
out_argmaxes = model(**inputs).logits[0].argmax(-1)
print(inputs['input_ids'])
print(out_argmaxes)
print(tokenizer.decode(out_argmaxes))
```
I got:
```
tensor([[ 101, 8667, 1139, 1271, 103, 147, 8613, 117, 1293, 1169, 146, 103, 1128, 136, 102]])
tensor([ 119, 119, 1139, 1271, 1110, 147, 8613, 117, 1293, 1169, 146, 1494, 1128, 136, 119])
'.. my name is Jhon, how can I help you?.'
```
This happens with different models, not just **bert-base-uncased**; the only case where it does not happen is with a custom **Roberta MaskedLM** model trained with a **custom tokenizer** where CLS and the other special tokens are mapped to the first ids in the tokenizer, such as PAD: 0 / <mask>: 1 / CLS: 2, etc.
### Expected behavior
I would expect the model to output logits with correctly predicted CLS and SEP tokens as the first and last tokens of the output, as they should be, so that they can be evaluated and produce a correct loss value.
```
'[CLS]Hello my name is Jhon, how can I help you?[SEP]'
```
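For completeness, a hedged sketch of the loss computation the thread later converges on — only the [MASK] positions are scored, and everything else is set to `-100` (CrossEntropyLoss's default `ignore_index`):
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

text = "Hello my name [MASK] Jhon, how can I [MASK] you?"
text_original = "Hello my name is Jhon, how can I help you?"

inputs = tokenizer(text, return_tensors="pt")
labels = tokenizer(text_original, return_tensors="pt")["input_ids"]
# Keep labels only where the input holds a [MASK] token.
labels = torch.where(inputs["input_ids"] == tokenizer.mask_token_id, labels, -100)

loss = model(**inputs, labels=labels).loss
```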
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20797/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20796
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20796/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20796/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20796/events
|
https://github.com/huggingface/transformers/pull/20796
| 1,499,878,858
|
PR_kwDOCUB6oc5Fn3dg
| 20,796
|
lazy import torch._softmax_backward_data for better compatibility
|
{
"login": "daquexian",
"id": 11607199,
"node_id": "MDQ6VXNlcjExNjA3MTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/11607199?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daquexian",
"html_url": "https://github.com/daquexian",
"followers_url": "https://api.github.com/users/daquexian/followers",
"following_url": "https://api.github.com/users/daquexian/following{/other_user}",
"gists_url": "https://api.github.com/users/daquexian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daquexian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daquexian/subscriptions",
"organizations_url": "https://api.github.com/users/daquexian/orgs",
"repos_url": "https://api.github.com/users/daquexian/repos",
"events_url": "https://api.github.com/users/daquexian/events{/privacy}",
"received_events_url": "https://api.github.com/users/daquexian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger thanks for your review! Tests all pass now"
] | 1,671
| 1,676
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Dear huggingface team,
Thanks for the great library! I'm from [OneFlow](https://github.com/Oneflow-Inc/oneflow), a deep learning framework with PyTorch-compatible APIs and better performance. We want OneFlow users to be able to run third-party libraries by simply replacing all `torch` with `oneflow`. For the `transformers` library, a blocker is the import of the internal API `torch._softmax_backward_data` (OneFlow doesn't share PyTorch's internal APIs). This PR moves the import from the global scope into the function `softmax_backward_data`, so in most cases it will not be triggered.
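A minimal sketch of the lazy-import pattern (the helper name matches the PR description; the exact body in `transformers` may differ, and the private-API signature shown is an assumption based on recent PyTorch):
```python
def softmax_backward_data(parent, grad_output, output, dim, input_tensor):
    # The import now runs only when this backward path executes, so merely
    # doing `import transformers` never touches torch's private namespace.
    from torch import _softmax_backward_data

    # Recent PyTorch expects the input dtype as the last argument (assumption;
    # older versions took the tensor itself).
    return _softmax_backward_data(grad_output, output, parent.dim, input_tensor.dtype)
```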
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@fxmarty @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20796/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20796",
"html_url": "https://github.com/huggingface/transformers/pull/20796",
"diff_url": "https://github.com/huggingface/transformers/pull/20796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20796.patch",
"merged_at": 1671439040000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20795
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20795/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20795/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20795/events
|
https://github.com/huggingface/transformers/pull/20795
| 1,499,854,725
|
PR_kwDOCUB6oc5FnyMt
| 20,795
|
Install `sentencepiece` in `DeepSpeed` CI image
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It's done :-) as suggested"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
Install `sentencepiece` in `DeepSpeed` CI image.
- The new base image has no `sentencepiece` pre-installed, but it's required for DeepSpeed CI tests
- With this PR, the tests all pass (on single GPU runner)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20795/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20795",
"html_url": "https://github.com/huggingface/transformers/pull/20795",
"diff_url": "https://github.com/huggingface/transformers/pull/20795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20795.patch",
"merged_at": 1671211427000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20794
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20794/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20794/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20794/events
|
https://github.com/huggingface/transformers/issues/20794
| 1,499,853,359
|
I_kwDOCUB6oc5ZZfIv
| 20,794
|
When I use the following code on a TPU VM and call model.generate() for inference, it is very slow. It seems that the TPU is not being used. What is the problem?
|
{
"login": "joytianya",
"id": 17909715,
"node_id": "MDQ6VXNlcjE3OTA5NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/17909715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joytianya",
"html_url": "https://github.com/joytianya",
"followers_url": "https://api.github.com/users/joytianya/followers",
"following_url": "https://api.github.com/users/joytianya/following{/other_user}",
"gists_url": "https://api.github.com/users/joytianya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joytianya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joytianya/subscriptions",
"organizations_url": "https://api.github.com/users/joytianya/orgs",
"repos_url": "https://api.github.com/users/joytianya/repos",
"events_url": "https://api.github.com/users/joytianya/events{/privacy}",
"received_events_url": "https://api.github.com/users/joytianya/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante and @sanchit-gandhi ",
"Hey @joytianya! Sorry about the late reply here! Cool to see that you're using the Flax MT5 model!\r\n\r\nThe big speed-up from using JAX on TPU comes from JIT compiling a function: https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html. It's worth reading this guide to get a feel for how JAX + XLA + TPU work in combination to give you fast kernel execution.\r\n\r\nI've written an ipynb notebook that demonstrates how you can JIT compile the generate method: https://github.com/sanchit-gandhi/codesnippets/blob/main/benchmark_flaxmt5_jit_generate.ipynb\r\n\r\nRunning this using a 'tiny' version of the Flax MT5 model on CPU, I get a 75x speed-up JIT compiling the generate function vs the vanilla generate function! That's fast right!\r\n\r\nYou can adapt the script for the `mt5-small` checkpoint as you require π€ You'll need to pass any additional args that use boolean control flow in the generate method under `static_argnames` (as done with `max_length`, `top_k`, `do_sample`).\r\n\r\nLet me know if you have any other questions, happy to help!",
"Thank you very much for your reply, I tried it, it is indeed effective\r\nIn addition, It reports OOM on the V3-8TPU to use MT5-XXL. do you have any suggestions? Make me can inference MT5-XXL with v3-8 TPU \r\n```shell\r\njax._src.traceback_util.UnfilteredStackTrace: jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Attempting to reserve 320.03M at the bottom of memory. That was not possible. There are 1.20G free, 0B reserved, and 196.31M reservable. If fragmentation is eliminated, the maximum reservable bytes would be 1.20G, so compaction will enable this reservation. The nearest obstacle is at 196.31M from the bottom with size 160.00M.\r\n\r\n```",
"Hey @joytianya! Glad to hear that JIT'ing the generate function worked well! \r\n\r\nThe MT5-XXL checkpoint is 13 billion params (2.33GB) - this is pretty significant! We have to get pretty advanced to fit such a big model on a single TPU v3-8.\r\n\r\nThere are two things that you can try:\r\n1. Half-precision inference: set the computation dtype and model parameters to bfloat16 (half) precision. This will save a significant amount of memory vs float32 (full) precision and should get you numerically equivalent results\r\n2. Model partitioning: use [`pjit`](https://jax.readthedocs.io/en/latest/jax-101/08-pjit.html) for model parallelism\r\n\r\n1 is quite straightforward! 2 is very involved π
. Let's start with 1!\r\n\r\nHere's a code snippet on how you can achieve 1: https://github.com/sanchit-gandhi/codesnippets/blob/main/flaxmt5_inference_half_precision.ipynb\r\n\r\nFor pjit, you'll need to modify the code for Flax MT5 to add the sharing annotations. You can see an example for Flax BLOOM here: https://github.com/huggingface/bloom-jax-inference/blob/2a04aa519d262729d54adef3d19d63879f81ea89/bloom_inference/modeling_bloom/modeling_bloom.py#L200-L202 This is pretty advanced stuff! I can explain how it works a bit more if you really need to use pjit.\r\n\r\nBest of luck! Hope these answers provide some pointers as to how you can fit the XXL model on a v3-8!",
"One other thing I forgot! If you're running inference on _batches_ of data, using [`pmap`](https://jax.readthedocs.io/en/latest/jax-101/06-parallelism.html) for data parallelism across TPU devices is by far your best shout. \r\n\r\nYou can do this easily using the example script [run_clm_flax.py](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_clm_flax.py) with the `--do_eval` flag. This example wraps up the model loading, data loading and data parallelisation using pmap into one script, so you can run it using a single command:\r\n\r\n```\r\npython run_clm_flax.py \\\r\n --output_dir=\"./eval-out\" \\\r\n --model_name_or_path=\"google/mt5-small\" \\\r\n --dataset_name=\"oscar\" \\\r\n --dataset_config_name=\"unshuffled_deduplicated_no\" \\\r\n --do_eval \\\r\n --per_device_eval_batch_size=\"64\" \\\r\n --overwrite_output_dir \\\r\n```\r\n\r\nCurrently, the evaluation step will only return the eval loss. You can modify it to also return the logits to get the actual predictions as well:\r\nhttps://github.com/huggingface/transformers/blob/9edf37583411f892cea9ae7d98156c85d7c087b1/examples/flax/language-modeling/run_clm_flax.py#L711\r\n\r\nIf nothing else, you can use the run_clm_flax.py script as an example of how we can pmap to effectively parallelise across TPU devices.",
"great! Thank you very much for your suggestion. I will try it next",
"Put together a quick codesnippet that isolates `pmap`: https://github.com/sanchit-gandhi/codesnippets/blob/main/pmap_flaxmt5_generate.ipynb\r\n\r\nThis doesn't require any optimiser initialisation so should be much more memory efficient than using the previous suggestion of [run_clm_flax.py](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_clm_flax.py).",
"ok, Does this method also support the XXL model on the TPU V3-8?",
"The methodology remains the same for any checkpoint. As to whether the XXL model fits in memory you'll have to experiment for yourself! Definitely worth trying converting the model params to half-precision and running the computations in bf16 for this size model (as done in this code snippet: https://github.com/sanchit-gandhi/codesnippets/blob/main/flaxmt5_inference_half_precision.ipynb)",
"ok, I am very grateful for your suggestion, I plan to try and experiment further",
"When I load it with this model\"ClueAI/ChatYuan-large-v1\", the following error will occur. How to solve this problem?\r\n```shell\r\nSome weights of the model checkpoint at ClueAI/ChatYuan-large-v1 were not used when initializing FlaxT5ForConditionalGeneration: {('decoder', 'embed_tokens', 'kernel'), ('encoder', 'embed_tokens', 'kernel')}\r\n- This IS expected if you are initializing FlaxT5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing FlaxT5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[<ipython-input-16-6177c268ed70>](https://localhost:8080/#) in <module>\r\n 1 model_name = \"ClueAI/ChatYuan-large-v1\"\r\n 2 #model, params = FlaxMT5ForConditionalGeneration.from_pretrained(model_name, _do_init=False)\r\n----> 3 model, params = FlaxT5ForConditionalGeneration.from_pretrained(model_name, from_pt=True)\r\n 4 \r\n 5 tokenizer = T5Tokenizer.from_pretrained(model_name)\r\n\r\nTypeError: cannot unpack non-iterable FlaxT5ForConditionalGeneration object\r\n\r\n```\r\n\r\n```python\r\n\r\nmodel, params = FlaxT5ForConditionalGeneration.from_pretrained(\"ClueAI/ChatYuan-large-v1\", from_pt=True)\r\n\r\n```",
"Hey @joytianya! It's not possible to use `from_pt=True` with `_do_init=False`. Currently, you need to load PyTorch weights with `_do_init=True`:\r\n```python\r\nmodel = FlaxT5ForConditionalGeneration.from_pretrained(\"ClueAI/ChatYuan-large-v1\", from_pt=True)\r\nparams = model.params\r\n```\r\nOr directly load Flax weights **if they are saved in the repo**. If you want to load the model instance and weights separately, you can set `_do_init=False` (see https://github.com/huggingface/transformers/pull/16148#issue-1168756524):\r\n```python\r\nmodel, params = FlaxT5ForConditionalGeneration.from_pretrained(\"ClueAI/ChatYuan-large-v1\", _do_init=False)\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"> #16148 (comment)\r\nwhile i try, error occur, How to solve this problem?\r\n```python\r\nmodel, params = FlaxT5ForConditionalGeneration.from_pretrained(\"ClueAI/ChatYuan-large-v1\", _do_init=False)\r\n\r\n```\r\n\r\n```python\r\nOSError: ClueAI/ChatYuan-large-v1 does not appear to have a file named flax_model.msgpack but there is a file for PyTorch weights. Use `from_pt=True` to load this model from those weights.\r\n````",
"i try it, and How to configure \"max_length\", \"top_k\", \"do_sample\" and other parameters with this ?\r\n\r\nhttps://github.com/sanchit-gandhi/codesnippets/blob/main/pmap_flaxmt5_generate.ipynb\r\n\r\n\r\n",
"outputs = jit_generate(input_ids=input_ids, max_new_tokens=512, top_k=30, do_sample=True, temperature=0.7).sequences\r\nI found that the generated shape is max_new_tokens ,\r\nWhether the end character can be reached and terminated , so as to save time\r\nWhat shall I do?",
"I found that the results of each run are the same, but do_ Sample=True, how to configure it to generate randomly",
"hi, @sanchit-gandhi I look forward to your reply",
"Hey @joytianya! Answering your questions sequentially:\r\n1. `_do_init=False` is only supported when we directly load Flax weights. The error message we're getting is telling us that the model only has PyTorch weights available. Let's first load the model in PyTorch on CPU, save it as a Flax model, then re-load in on TPU:\r\n```python\r\nimport jax\r\nfrom transformers import FlaxMT5ForConditionalGeneration\r\n\r\nSAVE_DIR = \"/path/to/save/dir\" # change this to where you want the model to be saved\r\n\r\nwith jax.default_device(jax.devices(\"cpu\")[0]):\r\n model = FlaxMT5ForConditionalGeneration.from_pretrained(\"ClueAI/ChatYuan-large-v1\", from_pt=True)\r\n model.save_pretrained(SAVE_DIR)\r\n```\r\nNow the next time you load the model, you can do so with `_do_init=False` and the default TPU device:\r\n```python\r\nmodel, params = FlaxT5ForConditionalGeneration.from_pretrained(SAVE_DIR, _do_init=False)\r\n```\r\n\r\n2. Can you try using `static_broadcasted_argnums` and passing the argument indices of the variables you want to control:\r\n```python\r\npmap_generate = jax.pmap(model.generate, \"batch\", static_broadcasted_argnums =[ <PUT A LIST OF THE ARGNUMS YOU WANT TO PASS>])\r\n```\r\nSee https://jax.readthedocs.io/en/latest/_autosummary/jax.pmap.html for details.\r\n\r\n3. > Whether the end character can be reached and terminated , so as to save time \r\n\r\nThe model will stop generating when the EOS token is reached. Make sure you have configured your tokenizer correctly: https://huggingface.co/docs/transformers/model_doc/mt5#transformers.T5Tokenizer\r\n\r\n4. > I found that the results of each run are the same, but do_ Sample=True, how to configure it to generate randomly\r\n\r\nDo you have a codesnippet you could share that demonstrates this? Thanks!",
"In order to explain the problem 3 and 4 in detail, I wrote this code and after execution.\r\nFor 4. The result of each generation is exactly the same\r\nFor 3. Different from max_length, time is very different. Time and max_length are proportional. It doesnβt seem to end early\r\n\r\n```python\r\nfrom transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration\r\nimport jax\r\nmodel = FlaxMT5ForConditionalGeneration.from_pretrained(\"google/mt5-small\", from_pt=True)\r\ntokenizer = T5Tokenizer.from_pretrained(\"google/mt5-small\")\r\n# vanilla generate -> JIT generate \r\njit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"top_k\", \"do_sample\"])\r\n\r\n\r\ndef answer(max_length):\r\n input_context = [\"The dog is\", \"The cat is\"]\r\n input_ids = tokenizer(input_context, return_tensors=\"np\").input_ids\r\n outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences\r\n res = tokenizer.batch_decode(outputs, skip_special_tokens=True)\r\n\r\n print(outputs)\r\n print(res)\r\n return res\r\n\r\nanswer(20)\r\n\r\nimport time\r\nstart_time = time.time()\r\nfor i in range(10):\r\n answer(20)\r\nprint(time.time() - start_time)\r\n\r\n\r\nanswer(1024)\r\n\r\nimport time\r\nstart_time = time.time()\r\nfor i in range(10):\r\n answer(1024)\r\nprint(time.time() - start_time)\r\n```\r\n\r\n```python\r\nfrom transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration\r\nimport jax\r\nimport jax.numpy as jnp\r\nmodel = FlaxMT5ForConditionalGeneration.from_pretrained(\"ClueAI/ChatYuan-large-v1\", from_pt=True, dtype=jnp.bfloat16)\r\nmodel.params = model.to_bf16(model.params)\r\ntokenizer = T5Tokenizer.from_pretrained(\"ClueAI/ChatYuan-large-v1\")\r\n# copy (replicate) the params across your TPU devices\r\n#params = jax_utils.replicate(params)\r\n# pmap generate (like jit, but replicated across our JAX devices)\r\njit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"max_new_tokens\", \"top_k\", \"do_sample\", \"temperature\", \"eos_token_id\"])\r\n\r\ndef answer(max_length):\r\n input_context = [\"The dog is\", \"The cat is\"]\r\n input_ids = tokenizer(input_context, return_tensors=\"np\").input_ids\r\n outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences\r\n res = tokenizer.batch_decode(outputs, skip_special_tokens=True)\r\n\r\n print(outputs)\r\n print(res)\r\n return res\r\n\r\nanswer(256)\r\n\r\nimport time\r\nstart_time = time.time()\r\nfor i in range(10):\r\n answer(256)\r\nprint(time.time() - start_time)\r\n\r\n\r\nanswer(1024)\r\n\r\nimport time\r\nstart_time = time.time()\r\nfor i in range(10):\r\n answer(1024)\r\nprint(time.time() - start_time)\r\n```",
"for 2, Is this correct?\r\n\r\n```python\r\npmap_generate = jax.pmap(model.generate, \"batch\", static_broadcasted_argnums = [ 2, 3, 4, 5, 6])\r\noutputs = pmap_generate(input_ids, attention_mask=attention_mask, max_new_tokens=max_new_tokens, top_k=30, do_sample=True, temperature=0.7, params=params).sequences\r\n```\r\n\r\nerror occur:\r\n```python\r\n outputs = pmap_generate(input_ids, attention_mask=attention_mask, max_new_tokens=max_new_tokens, top_k=30, do_sample=True, temperature=0.7, params=params).sequences\r\nValueError: pmapped function has static_broadcasted_argnums=(2, 3, 4, 5, 6) but was called with only 1 positional argument. All static broadcasted arguments must be passed positionally.\r\n\r\n```",
"hi, @sanchit-gandhi I look forward to your reply",
"Hey @joytianya, \r\n\r\nIf you don't want to change the generation params in `.generate`, you can just fix them like this:\r\n```python\r\nfrom flax.training.common_utils shard\r\n\r\ndef generate(params, batch):\r\n outputs = model.generate(batch[\"input_ids\"], attention_mask=batch[\"attention_mask\"], max_new_tokens=128, top_k=30, do_sample=True, temperature=0.7, params=params).sequences # anything that does not depend on `batch` is fixed\r\n return outputs\r\n\r\np_generate = jax.pmap(generate, \"batch\")\r\n\r\ninput_context = [\"The dog is\" for _ in range(8)] #Β batch size needs to be a multiple of the number of TPU devices\r\n\r\nbatch = tokenizer(input_context, return_tensors=\"np\")\r\nbatch = shard(batch)\r\n\r\n# slow - we're compiling\r\noutputs = p_generate(batch)\r\n\r\n# fast!\r\noutputs = p_generate(batch)\r\n```\r\n\r\n\r\n",
"> In order to explain the problem 3 and 4 in detail, I wrote this code and after execution.\n> \n> For 4. The result of each generation is exactly the same\n> \n> For 3. Different from max_length, time is very different. Time and max_length are proportional. It doesnβt seem to end early\n> \n> \n> \n> ```python\n> \n> from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration\n> \n> import jax\n> \n> model = FlaxMT5ForConditionalGeneration.from_pretrained(\"google/mt5-small\", from_pt=True)\n> \n> tokenizer = T5Tokenizer.from_pretrained(\"google/mt5-small\")\n> \n> # vanilla generate -> JIT generate \n> \n> jit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"top_k\", \"do_sample\"])\n> \n> \n> \n> \n> \n> def answer(max_length):\n> \n> input_context = [\"The dog is\", \"The cat is\"]\n> \n> input_ids = tokenizer(input_context, return_tensors=\"np\").input_ids\n> \n> outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences\n> \n> res = tokenizer.batch_decode(outputs, skip_special_tokens=True)\n> \n> \n> \n> print(outputs)\n> \n> print(res)\n> \n> return res\n> \n> \n> \n> answer(20)\n> \n> \n> \n> import time\n> \n> start_time = time.time()\n> \n> for i in range(10):\n> \n> answer(20)\n> \n> print(time.time() - start_time)\n> \n> \n> \n> \n> \n> answer(1024)\n> \n> \n> \n> import time\n> \n> start_time = time.time()\n> \n> for i in range(10):\n> \n> answer(1024)\n> \n> print(time.time() - start_time)\n> \n> ```\n> \n> \n> \n> ```python\n> \n> from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration\n> \n> import jax\n> \n> import jax.numpy as jnp\n> \n> model = FlaxMT5ForConditionalGeneration.from_pretrained(\"ClueAI/ChatYuan-large-v1\", from_pt=True, dtype=jnp.bfloat16)\n> \n> model.params = model.to_bf16(model.params)\n> \n> tokenizer = T5Tokenizer.from_pretrained(\"ClueAI/ChatYuan-large-v1\")\n> \n> # copy (replicate) the params across your TPU devices\n> \n> #params = jax_utils.replicate(params)\n> \n> # pmap generate (like jit, but replicated across our JAX devices)\n> \n> jit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"max_new_tokens\", \"top_k\", \"do_sample\", \"temperature\", \"eos_token_id\"])\n> \n> \n> \n> def answer(max_length):\n> \n> input_context = [\"The dog is\", \"The cat is\"]\n> \n> input_ids = tokenizer(input_context, return_tensors=\"np\").input_ids\n> \n> outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences\n> \n> res = tokenizer.batch_decode(outputs, skip_special_tokens=True)\n> \n> \n> \n> print(outputs)\n> \n> print(res)\n> \n> return res\n> \n> \n> \n> answer(256)\n> \n> \n> \n> import time\n> \n> start_time = time.time()\n> \n> for i in range(10):\n> \n> answer(256)\n> \n> print(time.time() - start_time)\n> \n> \n> \n> \n> \n> answer(1024)\n> \n> \n> \n> import time\n> \n> start_time = time.time()\n> \n> for i in range(10):\n> \n> answer(1024)\n> \n> print(time.time() - start_time)\n> \n> ```\n\nIs this phenomenon correct?",
"Hey @joytianya\r\n\r\n> The result of each generation is exactly the same\r\n\r\nWe can't really rely on the outputs of the model since it's only been pre-trained, not fine-tuned, so it's bound to output gibberish regardless of what we give it (see https://huggingface.co/google/mt5-small for details). You can try using a fine-tuned checkpoint if you want to look at the actual token predictions.\r\n\r\n> Different from max_length, time is very different. Time and max_length are proportional. It doesnβt seem to end early\r\n\r\nThis is because the model has only been pre-trained (not fine-tuned): the model never hits the end-of-sequence token, it generates random outputs until it hits max length. Therefore, it always generates to max length and never terminates early. So if you increase max length, the model generates more tokens, and so decoding takes longer.",
"hey @sanchit-gandhi ,\r\n1. I can try using a fine-tuned checkpoint ClueAI/ChatYuan-large-v1, The phenomenon is the same. I used sample sampling. With the same code, when I use GPU, the results of each run are different. But the results on TPU are still the same.\r\n2. Additionally, you can see that the length of the generated sentence is much smaller than the max length of tokens, so it should have already hit the end-of-sequence token.\r\nHope you can give it a try.\r\n```python\r\n\r\nfrom transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration\r\n\r\nimport jax\r\n\r\nimport jax.numpy as jnp\r\n\r\nmodel = FlaxMT5ForConditionalGeneration.from_pretrained(\"ClueAI/ChatYuan-large-v1\", from_pt=True, dtype=jnp.bfloat16)\r\n\r\nmodel.params = model.to_bf16(model.params)\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"ClueAI/ChatYuan-large-v1\")\r\n\r\n# copy (replicate) the params across your TPU devices\r\n\r\n#params = jax_utils.replicate(params)\r\n\r\n# pmap generate (like jit, but replicated across our JAX devices)\r\n\r\njit_generate = jax.jit(model.generate, static_argnames=[\"max_length\", \"max_new_tokens\", \"top_k\", \"do_sample\", \"temperature\", \"eos_token_id\"])\r\n\r\n\r\n\r\ndef answer(max_length):\r\n\r\n input_context = [\"The dog is\", \"The cat is\"]\r\n\r\n input_ids = tokenizer(input_context, return_tensors=\"np\").input_ids\r\n\r\n outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences\r\n\r\n res = tokenizer.batch_decode(outputs, skip_special_tokens=True)\r\n\r\n\r\n\r\n print(outputs)\r\n\r\n print(res)\r\n\r\n return res\r\n\r\n\r\n\r\nanswer(256)\r\n\r\n\r\n\r\nimport time\r\n\r\nstart_time = time.time()\r\n\r\nfor i in range(10):\r\n\r\n answer(256)\r\n\r\nprint(time.time() - start_time)\r\n\r\n\r\n\r\n\r\n\r\nanswer(1024)\r\n\r\n\r\n\r\nimport time\r\n\r\nstart_time = time.time()\r\n\r\nfor i in range(10):\r\n\r\n answer(1024)\r\n\r\nprint(time.time() - start_time)\r\n```",
"Hey @joytianya - if running this on a GPU gives one answer and running it on a TPU another, I'm not really sure this is a transformers based issue but probably a JAX or Flax one.\r\n\r\nCould you try re-running the code-snippet under the highest JAX matmul precision? We should then get equivalence on CPU/GPU/TPU. See https://github.com/huggingface/transformers/issues/15754#issuecomment-1048163411 for details.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,684
| 1,684
|
NONE
| null |
### System Info
When I use the following code on a TPU VM and call model.generate() for inference, it is very slow. It seems that the TPU is not being used. What is the problem?
The JAX device exists:
```python
import jax
num_devices = jax.device_count()
device_type = jax.devices()[0].device_kind
assert "TPU" in device_type
from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration  # fixed: import the classes actually used below
model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
input_context = "The dog"
# encode input context
input_ids = tokenizer(input_context, return_tensors="np").input_ids
# generate candidates using sampling
outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
print(outputs)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import jax
num_devices = jax.device_count()
device_type = jax.devices()[0].device_kind
assert "TPU" in device_type
from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration  # fixed: import the classes actually used below
model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
input_context = "The dog"
# encode input context
input_ids = tokenizer(input_context, return_tensors="np").input_ids
# generate candidates using sampling
outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
print(outputs)
```
### Expected behavior
Expect it to be fast
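For reference, the fix discussed in the comments above is to JIT-compile the generate call rather than running it eagerly; a minimal sketch adapted from those comments (not part of the original report):
```python
import jax
from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration

model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")

# compile generate once; kwargs that affect control flow must be static
jit_generate = jax.jit(model.generate, static_argnames=["max_length", "top_k", "do_sample"])

input_ids = tokenizer("The dog", return_tensors="np").input_ids
# the first call compiles (slow); subsequent calls reuse the cached TPU program
outputs = jit_generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True).sequences
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```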
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20794/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20793
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20793/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20793/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20793/events
|
https://github.com/huggingface/transformers/issues/20793
| 1,499,813,284
|
I_kwDOCUB6oc5ZZVWk
| 20,793
|
Parameters which did not receive grad for rank 5: encoder.block.0.layer.0.TransientGlobalSelfAttention.global_relative_attention_bias.weight
|
{
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@sgugger Hi!\r\nIam trying to use longt5 for summarizing task \r\n\r\nI am using this [script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization#with-accelerate) \r\n\r\nand this [model](https://huggingface.co/google/long-t5-tglobal-base)\r\n\r\n\r\nI am getting this error\r\n\r\n> Traceback (most recent call last):\r\n> File \"/cephfs/home/arij/Memory-transformer-with-hierarchical-attention_MLM/Summarization/run_summarization_notrainer-Copy1.py\", line 947, in <module>\r\n> main()\r\n> File \"/cephfs/home/arij/Summarization/run_summarization_notrainer-Copy1.py\", line 821, in main\r\n> outputs = model(**batch)\r\n> File \"/home/arij/anaconda3/envs/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1190, in _call_impl\r\n> return forward_call(*input, **kwargs)\r\n> File \"/home/arij/anaconda3/envs/lib/python3.9/site-packages/torch/nn/parallel/distributed.py\", line 1026, in forward\r\n> if torch.is_grad_enabled() and self.reducer._rebuild_buckets():\r\n> RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by \r\n> making sure all `forward` function outputs participate in calculating loss. \r\n> If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).\r\n> Parameters which did not receive grad for rank 1: encoder.block.0.layer.0.TransientGlobalSelfAttention.global_relative_attention_bias.weight\r\n> Parameter indices which did not receive grad for rank 1: 6\r\n\r\n\r\n I have tried many times to use it, but it does not work any hints?"
] | 1,671
| 1,675
| 1,674
|
NONE
| null |
### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simply use the official example https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py
1. Run the command below
2. Export CUDA_LAUNCH_BLOCKING=1 and TORCH_DISTRIBUTED_DEBUG=INFO
accelerate launch --config_file='./accelerate.yaml' run_summarization_notrainer.py --seed=42 --preprocessing_num_workers=1 --weight_decay='0.001' --output_dir="arxiv_summarization/longt5/5_beam/" --per_device_train_batch_size=1 --per_device_eval_batch_size=1 --dataset_name='ccdv/arxiv-summarization' --num_train_epochs=10 --model_name_or_path='google/long-t5-tglobal-base' --tokenizer_name='google/long-t5-tglobal-base' --num_beams=5 --with_tracking --report_to='wandb' --checkpointing_steps='epoch'
Running the script, I got this error after running over 12 examples:
Parameters which did not receive grad for rank 5: encoder.block.0.layer.0.TransientGlobalSelfAttention.global_relative_attention_bias.weight
What is interesting is that when I set the number of processes to 1 in accelerate.yaml
> compute_environment: LOCAL_MACHINE
> deepspeed_config: {}
> distributed_type: MULTI_GPU
> fsdp_config: {}
> machine_rank: 0
> main_process_ip: null
> main_process_port: null
> main_training_function: main
> mixed_precision: 'no'
> num_machines: 1
> num_processes: 1
> use_cpu: false
the script runs normally, but when I set it to 8, I get this error after running over 12 examples.
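For what it's worth, the error message itself points at one workaround: enabling unused-parameter detection in DDP. With Accelerate, as used by the script, a sketch would be:
```python
from accelerate import Accelerator, DistributedDataParallelKwargs

# let DDP tolerate parameters that receive no gradient on some ranks,
# e.g. global_relative_attention_bias in the LongT5 encoder
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```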
### Expected behavior
The script should run normally.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20793/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20792
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20792/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20792/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20792/events
|
https://github.com/huggingface/transformers/pull/20792
| 1,499,674,633
|
PR_kwDOCUB6oc5FnLLt
| 20,792
|
Add Mask2Former
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Gently pinging @sgugger here for a final review.",
"> Thanks for adding this new model! I have a couple of nits but nothing major, this is very clean already!\r\n\r\n@sgugger Thanks for the review! I have resolved all comments from yesterday's review. Please do let me know if the code needs any further changes/improvements. Would be happy to take them up! ",
"So it seems there are 2 todo's left:\r\n\r\n- [x] leverage AutoImageProcessor instead of adding a new one\r\n- [x] make sure slow integration tests of Donut and Swin are still passing, possibly using `MaskFormerSwin` as backbone",
"> So it seems there are 2 todo's left:\r\n> \r\n> * [x] leverage AutoImageProcessor instead of adding a new one\r\n> * [x] make sure slow integration tests of Donut and Swin are still passing, possibly using `MaskFormerSwin` as backbone\r\n\r\nSure I'll connect with @alaradirik and we'll fix these shortly and update you.",
"@NielsRogge Just wanted to update that backbone for Mask2Former has been switched to `MaskFormerSwin`.\r\nChanges to modeling_swin.py and modeling_donut_swin.py have been reverted so slow integration tests of Donut and Swin are passing now.\r\n\r\nConversion of all 30 checkpoints from [Mask2Former model zoo](https://github.com/facebookresearch/Mask2Former/blob/main/MODEL_ZOO.md) using swin backbone corresponding to all 4 datasets and segmentation tasks is done and are available on the Hub. I just need to update the model cards. Will finish that shortly too. ",
"Thank you! \r\n\r\nI'm just wondering why the issue was occurring only on Swin-base on one specific dataset. It would definitely be nice to clear that up, does it have to do with the image resolution?\r\n\r\nFor instance for UperNet (at #20648) I was able to perfectly convert all checkpoints that leverage Swin-base by using our `SwinBackbone`. This one was ported from the mmsegmentation library whose Swin implementation is [here](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/backbones/swin.py#L166). So it's a bit strange. Might it be that we were just \"lucky\" with UperNet and OneFormer?"
] | 1,671
| 1,673
| 1,673
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds Mask2Former to transformers.
Original repo: https://github.com/facebookresearch/Mask2Former/
Paper: https://arxiv.org/abs/2112.01527
Co-authored with @shivalikasingh95
To Do:
- [x] Fix model tests (hidden state shapes, loading the config)
- [X] Test model, visualize outputs
- [X] Update model cards
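For context, a minimal usage sketch of the model added here (the checkpoint name and post-processing call are assumptions based on this PR's discussion; the final API may differ):
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-coco-panoptic")

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# merge the predicted class and mask logits into a single panoptic map
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(result["segmentation"].shape)
```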
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20792/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20792/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20792",
"html_url": "https://github.com/huggingface/transformers/pull/20792",
"diff_url": "https://github.com/huggingface/transformers/pull/20792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20792.patch",
"merged_at": 1673890628000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20791
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20791/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20791/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20791/events
|
https://github.com/huggingface/transformers/pull/20791
| 1,499,258,502
|
PR_kwDOCUB6oc5FlxQv
| 20,791
|
Embed circle packing chart for model summary
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I think the number of downloads dictates the size, right?\r\n\r\nImpressive! I like how visual it is. Makes it understandable straight away. I guess a potential improvement would be to be able to read the subcategories without first clicking on a major category, but I don't know how feasible that is. Thanks for working on this! Really cool.",
"Thanks for the feedback!\r\n\r\nI think it might make the visual more difficult to read if we also included the subcategories (modality) in addition to the main category. For example, the decoders bubble is already quite small, and adding more text might make it more cluttered (same for some of the smaller encoder-decoder bubbles). "
] | 1,671
| 1,671
| 1,671
|
MEMBER
| null |
This PR embeds an interactive chart of the most popular models by modality so users have a nice high-level visual overview of the 🤗 Transformers modelscape.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20791/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20791",
"html_url": "https://github.com/huggingface/transformers/pull/20791",
"diff_url": "https://github.com/huggingface/transformers/pull/20791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20791.patch",
"merged_at": 1671560812000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20790
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20790/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20790/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20790/events
|
https://github.com/huggingface/transformers/pull/20790
| 1,499,183,949
|
PR_kwDOCUB6oc5FlgyR
| 20,790
|
[Pipeline] skip feature extraction test if in `IMAGE_PROCESSOR_MAPPING`
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks\r\nI am not sure about how do we want to approach that exactly, but I think that's the plan at some point cc @amyeroberts :D ",
"> Thanks I am not sure about how do we want to approach that exactly, but I think that's the plan at some point cc @amyeroberts :D\r\n\r\nI think it's good to disambiguate Audio from Vision (both are currently named `FeatureExtractor` I think).\r\n\r\nIn that regard I'd like to stress to include no-code (or almost none) into `Processor` the general class that encapsulates `Tokenizer`, `FeatureExtractor` and `ImageProcessor` . It's great for demos and quick hacks, but it's much more cumbersome to reason about within a lib, as it's impossible to know what it should be able to do since it's by definition not standard. (It doesn't have any invariant).\r\n\r\nFor instance `Tokenizer.encode(text)` is always going to be a valid call and will return ids (that's an invariant)."
] | 1,671
| 1,671
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the [following failing test](https://app.circleci.com/pipelines/github/huggingface/transformers/53884/workflows/8df76bfb-b6d2-493e-afdf-257b59672b02/jobs/648580)
## Context:
Currently `FeatureExtractionPipelineTests` are skipped for multi-modal models by checking if the model config is in `FEATURE_EXTRACTOR_MAPPING`. The check is done [on this line](https://github.com/huggingface/transformers/blob/1543cee7c8c95ef47f832b1f37625ba2923c4994/tests/pipelines/test_pipelines_feature_extraction.py#L181)
Recent vision and multimodal models will deprecate the usage of `xxxFeatureExtractor` in favor of `xxxImageProcessor`. For [Blip](https://github.com/huggingface/transformers/pull/20716), the test fails because `BlipFeatureExtractor` is not implemented at all; only `BlipImageProcessor` is.
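A minimal sketch of the updated skip condition (the import paths are assumptions about the auto-module layout; the actual test code may differ slightly):
```python
from transformers.models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING
from transformers.models.auto.image_processing_auto import IMAGE_PROCESSOR_MAPPING


def should_skip(config):
    # skip the feature-extraction pipeline test for configs that register a
    # feature extractor or, like Blip, only an image processor
    return type(config) in FEATURE_EXTRACTOR_MAPPING or type(config) in IMAGE_PROCESSOR_MAPPING
```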
## Why this fix is relevant?
Blip seems to be the first multimodal model that relies on `xxxImageProcessor` only.
cc @Narsil @amyeroberts @NielsRogge
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20790/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20790",
"html_url": "https://github.com/huggingface/transformers/pull/20790",
"diff_url": "https://github.com/huggingface/transformers/pull/20790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20790.patch",
"merged_at": 1671191219000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20789
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20789/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20789/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20789/events
|
https://github.com/huggingface/transformers/issues/20789
| 1,498,893,029
|
I_kwDOCUB6oc5ZV0rl
| 20,789
|
ImportError while trying to get the OS and software versions
|
{
"login": "vaibhav-k",
"id": 25487984,
"node_id": "MDQ6VXNlcjI1NDg3OTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/25487984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vaibhav-k",
"html_url": "https://github.com/vaibhav-k",
"followers_url": "https://api.github.com/users/vaibhav-k/followers",
"following_url": "https://api.github.com/users/vaibhav-k/following{/other_user}",
"gists_url": "https://api.github.com/users/vaibhav-k/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vaibhav-k/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vaibhav-k/subscriptions",
"organizations_url": "https://api.github.com/users/vaibhav-k/orgs",
"repos_url": "https://api.github.com/users/vaibhav-k/repos",
"events_url": "https://api.github.com/users/vaibhav-k/events{/privacy}",
"received_events_url": "https://api.github.com/users/vaibhav-k/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The CLI is not supported this way, you should run `transformers-cli env`.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,671
| 1,674
| 1,674
|
NONE
| null |
### System Info
OS: Ubuntu 22.04; Python version: Python 3.10.6
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simply run the command `python3 src/transformers/commands/transformers_cli.py env`.
### Expected behavior
I wanted to get the OS and software versions but instead got the error
```
Traceback (most recent call last):
File "/home/skywalker/Downloads/transformers/transformers/src/transformers/commands/transformers_cli.py", line 18, in <module>
from .add_new_model import AddNewModelCommand
ImportError: attempted relative import with no known parent package
```
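As the comment below notes, the supported entry point is the installed `transformers-cli` console script, which imports the module with its package context intact instead of executing the file directly. A sketch of the equivalent invocation from Python (shown only for illustration):
```python
import subprocess

# running transformers_cli.py directly strips its package context, breaking the
# relative imports; the installed console script does not have this problem
subprocess.run(["transformers-cli", "env"], check=True)
```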
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20789/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/20788
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20788/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20788/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20788/events
|
https://github.com/huggingface/transformers/pull/20788
| 1,498,850,705
|
PR_kwDOCUB6oc5FkXgo
| 20,788
|
Recompile `apex` in `DeepSpeed` CI image
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
COLLABORATOR
| null |
# What does this PR do?
The base image ships with a version of `apex`. We need to recompile it for torch 1.13, though.
This should fix some CI failures, but not all. We will check the CI report again in the next run, if this is OK for you @stas00. Otherwise I can run the full suite locally and discuss with you how to fix all of them before merging.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20788/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20788/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20788",
"html_url": "https://github.com/huggingface/transformers/pull/20788",
"diff_url": "https://github.com/huggingface/transformers/pull/20788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20788.patch",
"merged_at": 1671136528000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20787
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20787/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20787/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20787/events
|
https://github.com/huggingface/transformers/pull/20787
| 1,498,833,937
|
PR_kwDOCUB6oc5FkT0l
| 20,787
|
[S2T, Whisper] Add copied from statements
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,687
| 1,671
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds 'copied from MBart' statements to Speech2TextEncoderLayer and Speech2TextDecoderLayer.
Since the WhisperEncoderLayer and WhisperDecoderLayer are copied from Speech2Text, these classes are updated with 'copied from MBart' statements to minimise the chain of 'copied from' statements.
Previously:
* (mBart -> ) Speech2Text -> Whisper
Updated:
* mBart -> Speech2Text
* mBart -> Whisper
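For illustration, the `# Copied from` convention referenced above looks roughly like this (a sketch; the real layer carries the full forward implementation):
```python
from torch import nn


# The repo-consistency tooling (`make fix-copies`) reads the comment below and keeps
# this class in sync with MBartEncoderLayer, applying the MBart->Speech2Text rename.
# Copied from transformers.models.mbart.modeling_mbart.MBartEncoderLayer with MBart->Speech2Text
class Speech2TextEncoderLayer(nn.Module):
    pass  # body mirrored from MBartEncoderLayer by the tooling
```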
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts and @NielsRogge
- speech models: @sanchit-gandhi
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger and @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20787/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20787",
"html_url": "https://github.com/huggingface/transformers/pull/20787",
"diff_url": "https://github.com/huggingface/transformers/pull/20787.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20787.patch",
"merged_at": 1671560036000
}
|
https://api.github.com/repos/huggingface/transformers/issues/20786
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/20786/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/20786/comments
|
https://api.github.com/repos/huggingface/transformers/issues/20786/events
|
https://github.com/huggingface/transformers/pull/20786
| 1,498,811,347
|
PR_kwDOCUB6oc5FkO4g
| 20,786
|
Stop calling expand_1d on newer TF versions
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,671
| 1,671
| 1,671
|
MEMBER
| null |
TensorFlow changed its default `train_step` in version 2.11 to no longer use `data_adapter.expand_1d`, and also deleted that method. Since we copied that code for our train step, this made our `train_step` stop working in 2.11 when the user was using a non-dummy loss!
This PR resolves the issue by not calling `expand_1d` for TF versions >= 2.11.
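A minimal sketch of that version gate, using a hypothetical wrapper name; the merged diff may differ in the exact comparison:

```python
from packaging.version import parse

import tensorflow as tf


def maybe_expand_1d(data):
    # Hypothetical helper: `data_adapter.expand_1d` was deleted in TF 2.11,
    # so only call it on older versions where it still exists.
    if parse(tf.__version__) < parse("2.11"):
        from tensorflow.python.keras.engine import data_adapter

        data = data_adapter.expand_1d(data)
    return data
```

Gating on the installed TF version keeps the old behaviour for users on 2.10 and earlier while avoiding the missing attribute on 2.11+.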
Fixes #20750
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/20786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/20786/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/20786",
"html_url": "https://github.com/huggingface/transformers/pull/20786",
"diff_url": "https://github.com/huggingface/transformers/pull/20786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/20786.patch",
"merged_at": 1671196207000
}
|