| url (string, 62–66 chars) | repository_url (string, 1 class) | labels_url (string, 76–80 chars) | comments_url (string, 71–75 chars) | events_url (string, 69–73 chars) | html_url (string, 50–56 chars) | id (int64, 377M–2.15B) | node_id (string, 18–32 chars) | number (int64, 1–29.2k) | title (string, 1–487 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (list) | created_at (int64, 1.54k–1.71k) | updated_at (int64, 1.54k–1.71k) | closed_at (int64, 1.54k–1.71k, nullable ⌀) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0–234k chars, nullable ⌀) | reactions (dict) | timeline_url (string, 71–75 chars) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/22091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22091/events
|
https://github.com/huggingface/transformers/issues/22091
| 1,619,533,981
|
I_kwDOCUB6oc5giCCd
| 22,091
|
flan-t5-xl and flan-t5-xxl model deployment on Sagemaker fails on deploying from HuggingFace Hub
|
{
"login": "rags1357",
"id": 19560176,
"node_id": "MDQ6VXNlcjE5NTYwMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/19560176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rags1357",
"html_url": "https://github.com/rags1357",
"followers_url": "https://api.github.com/users/rags1357/followers",
"following_url": "https://api.github.com/users/rags1357/following{/other_user}",
"gists_url": "https://api.github.com/users/rags1357/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rags1357/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rags1357/subscriptions",
"organizations_url": "https://api.github.com/users/rags1357/orgs",
"repos_url": "https://api.github.com/users/rags1357/repos",
"events_url": "https://api.github.com/users/rags1357/events{/privacy}",
"received_events_url": "https://api.github.com/users/rags1357/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@philschmid could you please help here? I've gone through your workaround [here](https://www.philschmid.de/deploy-t5-11b)",
"You need to install a more recent version of Transformers, 4.17.0 won't support sharded checkpoints.",
"@rags1357 you can check out this blog post: [Deploy FLAN-T5 XXL on Amazon SageMaker](https://www.philschmid.de/deploy-flan-t5-sagemaker)\r\n",
"Thank you @sgugger and @philschmid , will try it with the updated Transformers version",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
Code to replicate from model hub - https://huggingface.co/google/flan-t5-large/tree/main -> Deploy -> Amazon SageMaker endpoint -> AWS
```
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'google/flan-t5-xl',
    'HF_TASK': 'text2text-generation'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,    # number of instances
    instance_type='ml.m5.xlarge' # ec2 instance type
)

predictor.predict({
    'inputs': "The answer to the universe is"
})
```
The endpoint invocation fails with the error below:
`2023-03-10 19:15:14,508 [INFO ] W-google__flan-t5-xl-5-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack'] found in directory /.sagemaker/mms/models/google__flan-t5-xl or `from_tf` and `from_flax` set to False.`
This is likely because the checkpoint has been sharded: under "Files and versions" ([link](https://huggingface.co/google/flan-t5-xl/tree/main)) the weights are split into multiple files (pytorch_model-00001-of-00002.bin, etc.) because of their size, while the out-of-the-box solution looks for a single pytorch_model.bin file and fails.
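The comments above point to upgrading Transformers, since 4.17.0 predates sharded-checkpoint support. As a rough illustration of the mechanism (a hypothetical sketch, not the actual library code): an older loader only knows about a single weights file, while sharded checkpoints ship an index JSON plus numbered shard files that newer loaders resolve.

```python
import json
import os
import tempfile

# Hypothetical sketch of checkpoint discovery, NOT the real transformers code:
# older loaders only look for a single weights file, while sharded checkpoints
# ship an index JSON plus numbered shard files.
SINGLE_FILE = "pytorch_model.bin"
SHARD_INDEX = "pytorch_model.bin.index.json"

def find_checkpoint(model_dir: str) -> list[str]:
    """Return the weight files to load, or raise like the old versions did."""
    single = os.path.join(model_dir, SINGLE_FILE)
    index = os.path.join(model_dir, SHARD_INDEX)
    if os.path.isfile(single):
        return [single]
    if os.path.isfile(index):
        # Newer versions read the index to discover the shard file names.
        with open(index) as f:
            weight_map = json.load(f)["weight_map"]
        return sorted({os.path.join(model_dir, v) for v in weight_map.values()})
    raise OSError(f"Error no file named ['{SINGLE_FILE}', ...] found in {model_dir}")

# Simulate the flan-t5-xl layout: an index plus two shards, no single file.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, SHARD_INDEX), "w") as f:
        json.dump({"weight_map": {
            "shared.weight": "pytorch_model-00001-of-00002.bin",
            "lm_head.weight": "pytorch_model-00002-of-00002.bin",
        }}, f)
    for shard in ("pytorch_model-00001-of-00002.bin", "pytorch_model-00002-of-00002.bin"):
        open(os.path.join(d, shard), "wb").close()
    print([os.path.basename(p) for p in find_checkpoint(d)])
    # ['pytorch_model-00001-of-00002.bin', 'pytorch_model-00002-of-00002.bin']
```

With a more recent Transformers version (and a matching SageMaker DLC), `from_pretrained` resolves the shard index itself, which is why upgrading is the suggested fix.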
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Provided code in the issue description
### Expected behavior
Should work out of the box on SageMaker deployment
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22091/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22090/events
|
https://github.com/huggingface/transformers/pull/22090
| 1,619,319,115
|
PR_kwDOCUB6oc5LyP_M
| 22,090
|
Add TF port of BLIP
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The TF port is mostly complete now and tests are passing locally - I just need to go around updating docs and auto classes and so on. The main code should be ready for review!",
"Looks like there are many comments to address for now. Please ping me again when it's ready for second review!",
"Got through a lot of the comments today, but I have a couple of other things to do - will try to finish them tomorrow!",
"The last remaining big issue is that some of the pt-tf equivalence tests fail when weights don't match up between models. This is caused by the cross-attention weights not being built, presumably because those layers aren't being called in the forward pass. I'm working on figuring out why and resolving that!",
"The issue seems to be that in all of our other models, cross-attention layers are only added when `config.add_cross_attention` is True, but in the case of BLIP it only checks `config.is_decoder`. As a result, the PyTorch models often initialize cross-attention layers that aren't used, which causes weight mismatch issues for us in crossloading tests, because TF only creates weights on first use.",
"It's coming, don't worry! This cross-attention behaviour is just very odd and I'm trying to track it down first",
"Hi all! I've addressed all comments and local tests look good. The remaining issues are:\r\n\r\n- Converting checkpoints so the tests don't need `from_pt`\r\n- Maybe adding more auto classes\r\n\r\nI'm not sure about the auto classes, though - they're missing in the original PT version of the model as well, so this didn't seem like the right PR to add them. ",
"cc @sgugger - I think this is ready for a final review at last!",
"Got it, I'll figure out some way to re-enable those tests, or override them with versions that do work!",
"@sgugger this should be ready for review with all comments addressed! The failing test is in an unrelated model",
"@sgugger Sorry for the confusion - that equivalence test is present in both the `test_modeling_tf_blip` and `test_modeling_blip` file. Do we want to keep it in both?",
"Yes we do.",
"Going to leave the `pt-to-tf` changes in this PR rather than making a separate one, since they're needed for proper BLIP conversion!"
] | 1,678
| 1,680
| 1,680
|
MEMBER
| null |
Work in progress right now, will update this when it's closer to being ready!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22090/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22090/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22090",
"html_url": "https://github.com/huggingface/transformers/pull/22090",
"diff_url": "https://github.com/huggingface/transformers/pull/22090.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22090.patch",
"merged_at": 1680620723000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22089/events
|
https://github.com/huggingface/transformers/pull/22089
| 1,619,060,543
|
PR_kwDOCUB6oc5LxYST
| 22,089
|
[`Gpt-neo-x`] Fix gpt neo-x multi gpu training
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, that's also what I thought after opening the PR. I'll find a workaround to set everything on the correct device"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR attempts to solve some issues users may encounter when training GPT-NeoX on multiple GPUs.
Related: https://github.com/lvwerra/trl/pull/210
This PR might not be needed, so I'm marking it as a draft for now.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22089/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22089",
"html_url": "https://github.com/huggingface/transformers/pull/22089",
"diff_url": "https://github.com/huggingface/transformers/pull/22089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22089.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22088/events
|
https://github.com/huggingface/transformers/issues/22088
| 1,619,035,296
|
I_kwDOCUB6oc5ggISg
| 22,088
|
Key error during Training
|
{
"login": "kashalakarthik",
"id": 90444415,
"node_id": "MDQ6VXNlcjkwNDQ0NDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/90444415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashalakarthik",
"html_url": "https://github.com/kashalakarthik",
"followers_url": "https://api.github.com/users/kashalakarthik/followers",
"following_url": "https://api.github.com/users/kashalakarthik/following{/other_user}",
"gists_url": "https://api.github.com/users/kashalakarthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashalakarthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashalakarthik/subscriptions",
"organizations_url": "https://api.github.com/users/kashalakarthik/orgs",
"repos_url": "https://api.github.com/users/kashalakarthik/repos",
"events_url": "https://api.github.com/users/kashalakarthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashalakarthik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Without seeing the code you run or the whole traceback, there is nothing we can do to help you.",
"The below is the code link, you can access it:\r\n[https://colab.research.google.com/drive/1ySZS1VifUxbccI81y9dLOHxg_nr9mSc0?usp=sharing](url)",
"> Without seeing the code you run or the whole traceback, there is nothing we can do to help you.\r\n\r\nThe below is the code link, you can access it:\r\n[https://colab.research.google.com/drive/1ySZS1VifUxbccI81y9dLOHxg_nr9mSc0?usp=sharing](https://github.com/huggingface/transformers/issues/url)",
"The link does not point to anything.",
"Sorry about that, Please use the link below mentioned:\r\nhttps://github.com/kashalakarthik/BERT-Hugging-face\r\n\r\nplease copy paste the link in browser if it is not working.\r\nThe links to download the dataset also mentioned in the repository.",
"You are not passing a `dataset` to your Trainer but a pandas dataframe, so this can't work. The Trainer only accepts PyTorch `Dataset` objects.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
We are trying to do text summarization using a BERT2BERT model, but we are facing the error below:
```
The above exception was the direct cause of the following exception:
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.9/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3361                 return self._engine.get_loc(casted_key)
   3362             except KeyError as err:
-> 3363                 raise KeyError(key) from err
   3364
   3365         if is_scalar(key) and isna(key) and not self.hasnans:
KeyError: 6868
```
Link to the Google Colab notebook:
https://colab.research.google.com/drive/1ySZS1VifUxbccI81y9dLOHxg_nr9mSc0?usp=sharing
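As noted in the comments above, the Trainer only accepts PyTorch `Dataset` objects, not a pandas DataFrame. A minimal sketch of the wrapping (column names here are hypothetical; in real code you would subclass `torch.utils.data.Dataset` and return tokenized tensors):

```python
import pandas as pd

# Sketch: the Trainer indexes its dataset by integer position, so a raw
# DataFrame raises KeyError when its index does not contain that integer
# (e.g. after a shuffled train/test split -- the KeyError: 6868 above).
# Wrapping it in a Dataset-style class that uses .iloc fixes the lookup.
# "text"/"summary" are hypothetical columns; subclass torch.utils.data.Dataset
# in practice.
class SummarizationDataset:
    def __init__(self, df: pd.DataFrame):
        self.df = df.reset_index(drop=True)  # positions 0..len-1

    def __len__(self) -> int:
        return len(self.df)

    def __getitem__(self, idx: int) -> dict:
        row = self.df.iloc[idx]  # positional lookup, safe on shuffled indexes
        return {"text": row["text"], "summary": row["summary"]}

# A non-contiguous index reproduces the failure mode seen in the traceback.
df = pd.DataFrame({"text": ["a", "b", "c"], "summary": ["A", "B", "C"]},
                  index=[10, 6868, 3])
ds = SummarizationDataset(df)
print(len(ds), ds[1])  # 3 {'text': 'b', 'summary': 'B'}
```

Label-based access such as `df["text"][0]` on that DataFrame raises the same `KeyError`, which is why positional `.iloc` indexing inside a `Dataset` wrapper is the usual fix.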
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22088/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22087/events
|
https://github.com/huggingface/transformers/pull/22087
| 1,619,003,290
|
PR_kwDOCUB6oc5LxL_m
| 22,087
|
Add AutoModelForZeroShotImageClassification
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI tests are failing due to unrelated issues, I will rebase the PR to the main branch once they are fixed."
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds `AutoModelForZeroShotImageClassification` and `TFAutoModelForZeroShotImageClassification` to transformers.
CC @MKhalusova will be adding a task guide in a separate PR
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22087/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22087/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22087",
"html_url": "https://github.com/huggingface/transformers/pull/22087",
"diff_url": "https://github.com/huggingface/transformers/pull/22087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22087.patch",
"merged_at": 1678700775000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22086/events
|
https://github.com/huggingface/transformers/pull/22086
| 1,618,988,019
|
PR_kwDOCUB6oc5LxIvh
| 22,086
|
GPT-J specific half precision on CPU note
|
{
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"My bad, I misunderstood the interaction in the issue. Tried the example, and it does indeed not work. I'll do another pass.",
"Did another take. It should now be clear that half-precision works on CUDA devices only. \r\n\r\nBtw, not sure if this is relevant but I tried this example without explicitly sending to device on a GPU, and it threw the same error. \r\n"
] | 1,678
| 1,699
| 1,678
|
CONTRIBUTOR
| null |
This PR adds a note to the GPT-J model doc indicating that the half precision on CPU example is specific to the model, and doesn't generally apply. Related to https://github.com/huggingface/transformers/issues/21989
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22086/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22086",
"html_url": "https://github.com/huggingface/transformers/pull/22086",
"diff_url": "https://github.com/huggingface/transformers/pull/22086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22086.patch",
"merged_at": 1678475023000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22085/events
|
https://github.com/huggingface/transformers/issues/22085
| 1,618,948,972
|
I_kwDOCUB6oc5gfzNs
| 22,085
|
Some problems when using vit model
|
{
"login": "vegetablelearning",
"id": 100191632,
"node_id": "U_kgDOBfjNkA",
"avatar_url": "https://avatars.githubusercontent.com/u/100191632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegetablelearning",
"html_url": "https://github.com/vegetablelearning",
"followers_url": "https://api.github.com/users/vegetablelearning/followers",
"following_url": "https://api.github.com/users/vegetablelearning/following{/other_user}",
"gists_url": "https://api.github.com/users/vegetablelearning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegetablelearning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegetablelearning/subscriptions",
"organizations_url": "https://api.github.com/users/vegetablelearning/orgs",
"repos_url": "https://api.github.com/users/vegetablelearning/repos",
"events_url": "https://api.github.com/users/vegetablelearning/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegetablelearning/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Better use the [forums](https://discuss.huggingface.co/) for questions like this as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### Feature request
I am a novice in AI, and I am currently learning the ViT model. The documentation covers NLP extensively, but I want to know how to load a ViT model downloaded from the Hugging Face Hub, and how to train and evaluate it on the ImageNet dataset. I can't find a relevant tutorial. My question is very basic; if you can answer it with a simple code example, I will be grateful.
### Motivation
Better use of the transformers library.
### Your contribution
No contribution at present.
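A minimal, hedged sketch of loading and running a ViT classifier follows. It uses a tiny, randomly initialized config (hypothetical sizes and class count) so it runs without downloading anything; for real use you would load a Hub checkpoint with `from_pretrained` and fine-tune with the `Trainer`:

```python
import torch
from transformers import ViTConfig, ViTForImageClassification

# Tiny random config so this runs without downloading weights; for real
# training, load a pretrained checkpoint instead, e.g.:
#   ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
config = ViTConfig(
    image_size=32, patch_size=8, num_channels=3,
    hidden_size=32, num_hidden_layers=2, num_attention_heads=2,
    intermediate_size=64,
    num_labels=10,  # hypothetical number of classes
)
model = ViTForImageClassification(config)

pixel_values = torch.randn(1, 3, 32, 32)  # one fake 32x32 RGB image
with torch.no_grad():
    logits = model(pixel_values).logits
print(logits.shape)  # torch.Size([1, 10])
```

In practice an image processor (e.g. `ViTImageProcessor`) converts raw images into `pixel_values` of the size the checkpoint expects.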
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22085/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22084/events
|
https://github.com/huggingface/transformers/issues/22084
| 1,618,866,515
|
I_kwDOCUB6oc5gffFT
| 22,084
|
input_ids_seq_length is always 1
|
{
"login": "ChrisSpraaklab",
"id": 126086340,
"node_id": "U_kgDOB4PsxA",
"avatar_url": "https://avatars.githubusercontent.com/u/126086340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChrisSpraaklab",
"html_url": "https://github.com/ChrisSpraaklab",
"followers_url": "https://api.github.com/users/ChrisSpraaklab/followers",
"following_url": "https://api.github.com/users/ChrisSpraaklab/following{/other_user}",
"gists_url": "https://api.github.com/users/ChrisSpraaklab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChrisSpraaklab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChrisSpraaklab/subscriptions",
"organizations_url": "https://api.github.com/users/ChrisSpraaklab/orgs",
"repos_url": "https://api.github.com/users/ChrisSpraaklab/repos",
"events_url": "https://api.github.com/users/ChrisSpraaklab/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChrisSpraaklab/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Hey @ChrisSpraaklab 👋 In both types of models, `input_ids_seq_length` is relative to the output of the model, which is different for encoder-decoder (does not contain the prompt) and decoder-only models (contains the prompt). I agree that we might benefit from a rework there, for clarity :)\r\n\r\nIn any case, let's sort out your immediate issue! As the argument indicates, `max_new_tokens` will make the model generate up to `max_new_tokens` new tokens. As such, if you want to generate an output equal to the input, you'll have to set `max_new_tokens=input_ids.shape[1]`.\r\n\r\nAlso, bear in mind that encoder-decoder models ALWAYS start the output with a BOS token. As such, the length of the output will be the length of the input + 1.",
"@gante Thanks for your quick response. However, what I mean is that when input_ids_seq_length is set to input_ids.shape[-1], this value is always equal to 1 (as it comes from _prepare_decoder_input_ids_for_generation). \r\n```\r\n# 5. Prepare `input_ids` which will be used for auto-regressive generation\r\n if self.config.is_encoder_decoder:\r\n input_ids = self._prepare_decoder_input_ids_for_generation(\r\n batch_size,\r\n decoder_start_token_id=generation_config.decoder_start_token_id,\r\n bos_token_id=generation_config.bos_token_id,\r\n model_kwargs=model_kwargs,\r\n device=inputs_tensor.device,\r\n )\r\n else:\r\n input_ids = inputs_tensor if model_input_name == \"input_ids\" else model_kwargs.pop(\"input_ids\")\r\n\r\n # 6. Prepare `max_length` depending on other stopping criteria.\r\n input_ids_seq_length = input_ids.shape[-1]\r\n has_default_max_length = kwargs.get(\"max_length\") is None and generation_config.max_length is not None\r\n if has_default_max_length and generation_config.max_new_tokens is None:\r\n warnings.warn(\r\n f\"Using `max_length`'s default ({generation_config.max_length}) to control the generation length. \"\r\n \"This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we\"\r\n \" recommend using `max_new_tokens` to control the maximum length of the generation.\",\r\n UserWarning,\r\n )\r\n elif generation_config.max_new_tokens is not None:\r\n generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length\r\n if not has_default_max_length:\r\n logger.warn(\r\n f\"Both `max_new_tokens` (={generation_config.max_new_tokens}) and `max_length`(=\"\r\n f\"{generation_config.max_length}) seem to have been set. `max_new_tokens` will take precedence. \"\r\n \"Please refer to the documentation for more information. 
\"\r\n \"(https://huggingface.co/docs/transformers/main/en/main_classes/text_generation)\",\r\n UserWarning,\r\n )\r\n```\r\nIn my understanding, doing as you suggested would make this line equivalent to 1+1, as `max_new_tokens=input_ids.shape[1]` (equal to 1) and `input_ids_seq_length = input_ids.shape[-1]` (equal to 1)\r\n\r\n```\r\ngeneration_config.max_length = generation_config.max_new_tokens + input_ids_seq_length\r\n```",
"@ChrisSpraaklab inside generate, in encoder-decoder models like T5, `input_ids` is related to the decoder input ids. They are not the same as the `input_ids` you feed to `.generate()`, which will be used inside the encoder. Sadly, because `.generate()` is used with many types of models, we have this naming clash :) \r\n\r\nHave you tried running\r\n```py\r\nfrom transformers import AutoTokenizer, T5ForConditionalGeneration, GenerationConfig\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\n\r\ninput_ids = tokenizer(\"summarize: My friends are cool but they eat too many carbs.\", return_tensors=\"pt\").input_ids\r\noutputs = model.generate(input_ids, max_new_tokens=input_ids.shape[1])\r\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n```\r\n?",
"Thanks! Your solution does indeed produce the result I was looking for. I was just quite confused about the naming convention and documentation around max_new_tokens. I was under the impression that its value would be added to the length of in the input of the encoder, not the decoder. However, I now understand why it doesn't behave as I expected it to. ",
"So... despite that we input a token sequence `input_ids` in the `generate()` function, the length of this is irrelevant in the encoder-decoder model, and the `max_new_tokens` in `generate()` only refers to the length of the decoder input, which, because of BOS, is [always 1](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L1274) in our case. Yes, this is somewhat confusing indeed. \r\n\r\nAre there ways to motivate `generate()` to be more concise, but still run until EOS is generated, e.g., by setting a prior on the EOS? ",
"Hey @davidavdav -- yeah, you can try using Beam Search (i.e. `num_beams>1`) and pass a NEGATIVE [`length_penalty`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig.length_penalty). This will nudge the output towards shorter outputs!",
"BTW, if you come across better variable names, by all means, please suggest them :) We have so many features on our to-do list (including better docs) that every little help is precious!",
"Ah thanks, @gante---I do appreciate the difficulty of choosing sensible parameter/variable names, the number of times I am refactoring names back and forth in my own code is quite scary!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.5.0 (gpu)
- Jax version: 0.3.13
- JaxLib version: 0.3.10
- Using GPU in script?: yes
I am trying to generate output that is equal in length to the input (partly to avoid hallucinations and repetitions). In src/transformers/generation/utils.py I read how the input length is determined: if self.config.is_encoder_decoder (which is the case for me), input_ids_seq_length is taken from the input ids produced by _prepare_decoder_input_ids_for_generation, which builds a tensor of shape (batch_size, 1) filled with start tokens. This means input_ids_seq_length is always 1, making it useless for determining the input length (and for deriving the output length from it).
### Who can help?
@sgugger
@muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The problem arises in a script of my own, but this example also highlights it (the task I am working on is not summarization but grammar correction, which is why I want the input length to be equal to the output length):
```
from transformers import AutoTokenizer, T5ForConditionalGeneration, GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
config = GenerationConfig(max_new_tokens=0)
input_ids = tokenizer("summarize: My friends are cool but they eat too many carbs.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, generation_config=config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Expected behavior
I would expect the output length to be determined by the input length plus max_new_tokens:
```
generation_config.max_length = generation_config.max_new_tokens + input_ids_seq_length
```
This is indeed the formula used, but input_ids_seq_length is (wrongly) always 1, making the output length independent of the input and equal to max_new_tokens + 1.
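For concreteness, here is a minimal sketch (a hypothetical helper, not the actual transformers implementation) of the length resolution described above:

```python
# Hypothetical helper mirroring the max_length resolution described
# above (not the actual transformers code). For encoder-decoder
# models, the decoder prompt is a single start token, so
# input_ids_seq_length is always 1 regardless of the encoder input.
def resolved_max_length(max_new_tokens: int, is_encoder_decoder: bool,
                        encoder_input_len: int) -> int:
    input_ids_seq_length = 1 if is_encoder_decoder else encoder_input_len
    return max_new_tokens + input_ids_seq_length

# T5 with max_new_tokens=0: the output is capped at length 1,
# whatever the encoder input length is.
print(resolved_max_length(0, True, 14))  # -> 1
```

Under this reading, tying the output budget to the encoder input requires passing it explicitly, e.g. `max_new_tokens=input_ids.shape[1]`.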
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22084/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22083/events
|
https://github.com/huggingface/transformers/pull/22083
| 1,618,834,836
|
PR_kwDOCUB6oc5Lwn82
| 22,083
|
[Safetensors] Add explicit flag to from pretrained
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
MEMBER
| null |
## Give user more control when loading safetensors
It would be very helpful if we could give the user more control over loading safetensors checkpoints by adding a `use_safetensors` flag to `from_pretrained`.
At the moment, `safetensors` weights are **always** loaded when `safetensors` is installed and are **silently** not loaded if `safetensors` is installed **but** the model has no `safetensors` weights.
By giving the user the option to do `use_safetensors=True/False` we enable two new use cases:
1.) If a user only wants to load models from safetensors checkpoints, they can now pass `use_safetensors=True`, which will raise an error if no safetensors checkpoints are available => this can, e.g., guarantee the user that no pickle is used when loading checkpoints
2.) If a user doesn't want to load `safetensors` checkpoints but also doesn't want to uninstall the library, they can now pass `use_safetensors=False`, which will never load safetensors checkpoints. This is super helpful for testing as well.
Also, this feature would unblock this PR in `diffusers`: https://github.com/huggingface/diffusers/pull/2123
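A rough sketch of the resulting selection behavior (hypothetical function and file names, not the transformers source) could look like:

```python
# Hypothetical sketch of the checkpoint selection the flag enables
# (not the transformers source); use_safetensors may be True, False,
# or None (the default).
def select_checkpoint(use_safetensors, has_safetensors_weights):
    if use_safetensors is True and not has_safetensors_weights:
        # 1.) explicit opt-in: fail loudly, guaranteeing no pickle
        raise OSError("no safetensors checkpoint available")
    if use_safetensors is False:
        # 2.) explicit opt-out: never touch safetensors weights
        return "pytorch_model.bin"
    # default (None): prefer safetensors, silently fall back otherwise
    return "model.safetensors" if has_safetensors_weights else "pytorch_model.bin"
```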
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22083/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22083/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22083",
"html_url": "https://github.com/huggingface/transformers/pull/22083",
"diff_url": "https://github.com/huggingface/transformers/pull/22083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22083.patch",
"merged_at": 1678739946000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22082/events
|
https://github.com/huggingface/transformers/issues/22082
| 1,618,824,667
|
I_kwDOCUB6oc5gfU3b
| 22,082
|
Inconsistent training steps between Trainer and DeepSpeed
|
{
"login": "fenchri",
"id": 15857706,
"node_id": "MDQ6VXNlcjE1ODU3NzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/15857706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fenchri",
"html_url": "https://github.com/fenchri",
"followers_url": "https://api.github.com/users/fenchri/followers",
"following_url": "https://api.github.com/users/fenchri/following{/other_user}",
"gists_url": "https://api.github.com/users/fenchri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fenchri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fenchri/subscriptions",
"organizations_url": "https://api.github.com/users/fenchri/orgs",
"repos_url": "https://api.github.com/users/fenchri/repos",
"events_url": "https://api.github.com/users/fenchri/events{/privacy}",
"received_events_url": "https://api.github.com/users/fenchri/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thank you for the great and easy to reproduce report, @fenchri\r\n\r\nIndeed, you found a grad accumulation bug in HF Trainer. This is not an bug in DeepSpeed or its integration.\r\n\r\nI did:\r\n\r\n```\r\ndiff --git a/src/transformers/trainer.py b/src/transformers/trainer.py\r\nindex 344523842..a75110ee9 100755\r\n--- a/src/transformers/trainer.py\r\n+++ b/src/transformers/trainer.py\r\n@@ -1886,6 +1886,7 @@ class Trainer:\r\n if step % args.gradient_accumulation_steps == 0:\r\n self.control = self.callback_handler.on_step_begin(args, self.state, self.control)\r\n\r\n+ print(f\"HF STEP {step+1}\")\r\n if (\r\n ((step + 1) % args.gradient_accumulation_steps != 0)\r\n and args.local_rank != -1\r\n```\r\n\r\nand now running w/o deepspeed:\r\n\r\n```\r\npython -m torch.distributed.launch --nproc_per_node 1 --nnodes 1 --node_rank 0 \\\r\n--master_addr localhost --master_port 6000 \\\r\nexamples/pytorch/language-modeling/run_clm.py \\\r\n--model_name_or_path sshleifer/tiny-gpt2 --dataset_name wikitext \\\r\n--dataset_config_name wikitext-103-raw-v1 --per_device_train_batch_size 2 \\\r\n--per_device_eval_batch_size 1 --do_train --block_size 10 --output_dir \\\r\noutput_dir --max_train_samples=148 --gradient_accumulation_steps=16 \\\r\n--overwrite_output_dir --max_steps=10 --logging_steps=1\r\n```\r\n\r\nSince you set `--max_train_samples=148 --gradient_accumulation_steps=16` at step 8->9 the dataset wraps over, but the grad accum counter ignores the wrapping and waits for `((step + 1) % args.gradient_accumulation_steps != 0)` \r\n\r\nso when we run it, we get:\r\n\r\n```\r\n[skipped the first 7 grad acc cycles]\r\n{'loss': 10.823, 'learning_rate': 1.5e-05, 'epoch': 1.65} \r\n 70%|███████████████████████████████████████████████████████████████████████▍ | 7/10 [00:00<00:00, 10.01it/s]\r\nHF STEP 49\r\nHF STEP 50\r\nHF STEP 51\r\nHF STEP 52\r\nHF STEP 53\r\nHF STEP 54\r\nHF STEP 55\r\nHF STEP 56\r\nHF STEP 57\r\nHF STEP 58\r\nHF STEP 59\r\nHF STEP 60\r\nHF STEP 61\r\nHF 
STEP 62\r\nHF STEP 63\r\nHF STEP 64\r\n{'loss': 10.8266, 'learning_rate': 1e-05, 'epoch': 1.86} \r\n 80%|█████████████████████████████████████████████████████████████████████████████████▌ | 8/10 [00:01<00:00, 10.01it/s]\r\nHF STEP 65\r\nHF STEP 66\r\nHF STEP 67\r\nHF STEP 68\r\nHF STEP 69\r\nHF STEP 70\r\nHF STEP 71\r\nHF STEP 72\r\nHF STEP 73\r\nHF STEP 74\r\nHF STEP 1\r\nHF STEP 2\r\nHF STEP 3\r\nHF STEP 4\r\nHF STEP 5\r\nHF STEP 6\r\nHF STEP 7\r\nHF STEP 8\r\nHF STEP 9\r\nHF STEP 10\r\nHF STEP 11\r\nHF STEP 12\r\nHF STEP 13\r\nHF STEP 14\r\nHF STEP 15\r\nHF STEP 16\r\n{'loss': 17.593, 'learning_rate': 5e-06, 'epoch': 2.22} \r\n 90%|███████████████████████████████████████████████████████████████████████████████████████████▊ | 9/10 [00:01<00:00, 11.05it/s]\r\nHF STEP 17\r\nHF STEP 18\r\nHF STEP 19\r\nHF STEP 20\r\nHF STEP 21\r\nHF STEP 22\r\nHF STEP 23\r\nHF STEP 24\r\nHF STEP 25\r\nHF STEP 26\r\nHF STEP 27\r\nHF STEP 28\r\nHF STEP 29\r\nHF STEP 30\r\nHF STEP 31\r\nHF STEP 32\r\n{'loss': 10.8249, 'learning_rate': 0.0, 'epoch': 2.43} \r\n```\r\n\r\nyou can see that between iteration 8 and 9 there are more than 16 grad accumulation steps happening.\r\n\r\n-------------\r\n\r\nUntil this is fixed, specifically to your needs, @fenchri - as long as you're using deepspeed the grad accumulation is performed correctly since it performs it on its own. But you end up running more than steps than specified.",
"Hmm, actually looking at earlier steps, this appears to be odd as well:\r\n\r\n```\r\n{'loss': 10.8252, 'learning_rate': 3e-05, 'epoch': 0.86} \r\n 40%|████████████████████████████████████████▊ | 4/10 [00:00<00:01, 5.14it/s]\r\nHF STEP 65\r\nHF STEP 66\r\nHF STEP 67\r\nHF STEP 68\r\nHF STEP 69\r\nHF STEP 70\r\nHF STEP 71\r\nHF STEP 72\r\nHF STEP 73\r\nHF STEP 74\r\nHF STEP 75\r\nHF STEP 76\r\nHF STEP 77\r\nHF STEP 78\r\nHF STEP 79\r\nHF STEP 80\r\nHF STEP 81\r\nHF STEP 82\r\nHF STEP 83\r\nHF STEP 84\r\nHF STEP 85\r\nHF STEP 86\r\nHF STEP 87\r\nHF STEP 88\r\nHF STEP 89\r\nHF STEP 90\r\n```\r\n\r\nit did 9 additional dataset pulls here as well (25 instead of 16), and this is not at the grad accum boundary\r\n\r\nedit: ah, it's because bs=2, so it hits the rollover already at step 4->5, that's why.\r\n",
"ok, actually I came up with a fix, will push shortly for you to try\r\n\r\nPlease try https://github.com/huggingface/transformers/pull/22098\r\n",
"Thanks @stas00 for having a look and apologies for the late reply. Indeed, the fix resolves the issue! :tada:\r\n\r\nOn a related note, the computation happening [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1654) seems to chop `num_update_steps_per_epoch` even if even if `drop_last` is False. This results in having `100` training epochs instead of `87`, which then gets printed [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1740).\r\n \r\nNevertheless, with the current fix the training stops at the desired number of steps, so should be fine :) \r\n\r\nI am happy to open another issue related to this though if you think is needed :)\r\n\r\nThank you!",
"> Thanks @stas00 for having a look and apologies for the late reply. Indeed, the fix resolves the issue! tada\r\n\r\nexcellent! Thank you for testing the PR, @fenchri \r\n\r\n> I am happy to open another issue related to this though if you think is needed :)\r\n\r\nyes, please. One Issue at a time."
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.26.0
- Platform: Linux-5.4.0-136-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
DeepSpeed general environment info:
torch install path ............... ['/home/fenia/anaconda3/envs/benchmark/lib/python3.8/site-packages/torch']
torch version .................... 1.12.0+cu113
deepspeed install path ........... ['/home/fenia/anaconda3/envs/benchmark/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.8.1, unknown, unknown
torch cuda version ............... 11.3
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 1.12, cuda 11.3
### Who can help?
@stas00
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hello!
There seems to be some inconsistency in the number of training steps when using DeepSpeed with the HF Trainer.
It looks like DeepSpeed is doing things correctly but ends up training for more steps in order to match the Trainer. Both continue training even after the learning rate has dropped to 0.
From the official examples:
```
ds_config_zero2={
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 10,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
```
DISTRIBUTED_ARGS="--nproc_per_node 2 --nnodes 1 --node_rank 0 --master_addr localhost --master_port 6000"
python -m torch.distributed.launch $DISTRIBUTED_ARGS \
run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-103-raw-v1 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 1 \
--do_train \
--output_dir /tmp/test-clm2 \
--max_train_samples=148 \
--gradient_accumulation_steps=16 \
--overwrite_output_dir \
--max_steps=200 \
--logging_steps=10 \
--deepspeed="ds_config_zero2.json"
```
I attach the training output: [output.txt](https://github.com/huggingface/transformers/files/10934213/output.txt)
The same behavior is observed even when training with Trainer+DeepSpeed on a single GPU.
### Expected behavior
Expected number of steps should match between Trainer and DeepSpeed logging.
Thank you very much in advance!
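The mismatch can be modeled with a toy counter (hypothetical, not the Trainer source): when the gradient-accumulation check uses the within-epoch step index, an epoch boundary that falls mid-accumulation makes one optimizer update consume extra micro-batches. With the numbers above on a single GPU (148 samples / batch size 2 = 74 batches per epoch, 16 accumulation steps):

```python
# Toy model (not the Trainer source) of a grad-accumulation counter
# driven by the within-epoch step index, which resets each epoch.
def micro_steps_per_update(batches_per_epoch: int, grad_accum: int,
                           num_updates: int):
    updates, micro, step = [], 0, 0
    while len(updates) < num_updates:
        micro += 1
        if (step + 1) % grad_accum == 0:  # update boundary check
            updates.append(micro)
            micro = 0
        step = (step + 1) % batches_per_epoch  # dataloader wraps here
    return updates

# The fifth update consumes 26 micro-batches instead of 16, because
# the epoch wraps after step 74 before the counter reaches the next
# multiple of 16.
print(micro_steps_per_update(74, 16, 5))  # -> [16, 16, 16, 16, 26]
```

When batches_per_epoch is a multiple of grad_accum, the boundaries align and every update takes exactly 16 micro-batches, which is why the problem only shows up when the dataset size does not divide evenly.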
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22082/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22134
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22134/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22134/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22134/events
|
https://github.com/huggingface/transformers/issues/22134
| 1,621,516,022
|
I_kwDOCUB6oc5gpl72
| 22,134
|
How can I create a repository automatically when defining the `Trainer`?
|
{
"login": "ahmad-alismail",
"id": 46696930,
"node_id": "MDQ6VXNlcjQ2Njk2OTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/46696930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmad-alismail",
"html_url": "https://github.com/ahmad-alismail",
"followers_url": "https://api.github.com/users/ahmad-alismail/followers",
"following_url": "https://api.github.com/users/ahmad-alismail/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmad-alismail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmad-alismail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmad-alismail/subscriptions",
"organizations_url": "https://api.github.com/users/ahmad-alismail/orgs",
"repos_url": "https://api.github.com/users/ahmad-alismail/repos",
"events_url": "https://api.github.com/users/ahmad-alismail/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmad-alismail/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] |
[
"Hi @ahmad-alismail , thanks for reporting this. \r\n\r\n~If you don't mind I'll transfer this issue to the `transformers` repo and rename it. A breaking change has being introduced in [`huggingface_hub==0.12.0`](https://github.com/huggingface/huggingface_hub/releases/tag/v0.12.0). Since then, `Repository` do not handle the repo creation if not existing on the Hub.~\r\n\r\n~It seems that the `Trainer` push_to_hub method do not handle the repo creation before calling `Repository` which now fails. This has to be fixed [here](https://github.com/huggingface/transformers/blob/b90fbc7e0ba41dfd6b343e7e2274443f19087f36/src/transformers/trainer.py#L3555). In the meantime, you need to manually create the repo before using `Trainer.push_to_hub` or downgrade to `huggingface_hub==0.11.1`.~\r\n\r\n~@sgugger @ydshieh I'll open a PR today to fix this.~ \r\n\r\n**EDIT:** I cannot transfer the issue to `transformers` (most likely because I'm not a maintainer there) so if someone can do it :pray: \r\n\r\n**EDIT 2:** it seems that the repo creation [is already handled](https://github.com/huggingface/transformers/blob/b90fbc7e0ba41dfd6b343e7e2274443f19087f36/src/transformers/trainer.py#L3430) in the `Trainer` class. @sgugger @ydshieh an idea why the `create_repo` was not called?",
"@ahmad-alismail which version of `transformers` do you have?",
"Yeah, looks the number line of the error in the PR description has a difference of > 1000. Better to know which `transformers` version is used here.",
"Hi @Wauplin @ydshieh, thanks for your reply!\r\nThe version of `transformers` is 4.11.3",
"@ahmad-alismail Could you try to update the `transformers` package to latest release (4.26.1) and re-run your script?\r\nVersion 4.11.3 was released [in September 2021](https://pypi.org/project/transformers/#history) and is therefore outdated.",
"@Wauplin It's working perfectly! I truly appreciate your help – thank you so much!"
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### Describe the bug
I'm trying to fine-tune XLM-RoBERTa model on a German corpus for NER task. To handle the training loop I'm using the 🤗 Transformers `Trainer`, so first I need to define the training attributes using the `TrainingArguments` class:
````python
from transformers import TrainingArguments
# Set the number of epochs, batch size, and logging steps
num_epochs = 3
batch_size = 24
logging_steps = len(panx_de_encoded["train"]) // batch_size
# Define the model name
model_name = f"{xlmr_model_name}-finetuned-panx-de"
# Define the training arguments for the model
training_args = TrainingArguments(
output_dir=model_name, # Directory to save model checkpoints and outputs
log_level="error", # Logging level
num_train_epochs=num_epochs, # Number of training epochs
per_device_train_batch_size=batch_size, # Batch size per device for training
per_device_eval_batch_size=batch_size, # Batch size per device for evaluation
evaluation_strategy="epoch", # Evaluate model's prediction on the validation set at the end of each epoch
save_steps=1e6, # Save checkpoint every 1000000 steps (i.e., disable checkpointing to speed up training)
weight_decay=0.01, # Weight decay for optimizer
disable_tqdm=False, # Whether to show progress bar during training
logging_steps=logging_steps, # Determines the number of steps between each logging message
push_to_hub=True # Whether to push the model to the Hugging Face model hub
)
````
* Next, I log in to the hugging face hub with `Write` role and define the `Trainer` as follows:
````python
from transformers import Trainer
trainer = Trainer(model_init=model_init, # A function that instantiates the model to be used
args=training_args, # Arguments to tweak for training
data_collator=data_collator,
compute_metrics=compute_metrics,
train_dataset=panx_de_encoded["train"],
eval_dataset=panx_de_encoded["validation"],
tokenizer=xlmr_tokenizer)
````
Unfortunately, I have the following error:
````python
Cloning https://huggingface.co/ahmad1289/xlm-roberta-base-finetuned-panx-de into local empty directory.
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in clone_from(self, repo_url, token)
691 self.local_dir,
--> 692 env=env,
693 )
/opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_subprocess.py in run_subprocess(command, folder, check, **kwargs)
68 cwd=folder or os.getcwd(),
---> 69 **kwargs,
70 )
/opt/conda/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
511 raise CalledProcessError(retcode, process.args,
--> 512 output=stdout, stderr=stderr)
513 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['git', 'lfs', 'clone', 'https://user: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@huggingface.co/ahmad1289/xlm-roberta-base-finetuned-panx-de', '.']' returned non-zero exit status 2.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/tmp/ipykernel_23/987298996.py in <module>
8 train_dataset=panx_de_encoded["train"],
9 eval_dataset=panx_de_encoded["validation"],
---> 10 tokenizer=xlmr_tokenizer)
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers)
401 # Create clone of distant repo and output directory if needed
402 if self.args.push_to_hub:
--> 403 self.init_git_repo()
404 # In case of pull, we need to make sure every process has the latest.
405 if is_torch_tpu_available():
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in init_git_repo(self)
2551 self.args.output_dir,
2552 clone_from=repo_name,
-> 2553 use_auth_token=use_auth_token,
2554 )
2555 except EnvironmentError:
/opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
122 )
123
--> 124 return fn(*args, **kwargs)
125
126 return _inner_fn # type: ignore
/opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in __init__(self, local_dir, clone_from, repo_type, token, git_user, git_email, revision, skip_lfs_files, client)
516
517 if clone_from is not None:
--> 518 self.clone_from(repo_url=clone_from)
519 else:
520 if is_git_repo(self.local_dir):
/opt/conda/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
122 )
123
--> 124 return fn(*args, **kwargs)
125
126 return _inner_fn # type: ignore
/opt/conda/lib/python3.7/site-packages/huggingface_hub/repository.py in clone_from(self, repo_url, token)
731
732 except subprocess.CalledProcessError as exc:
--> 733 raise EnvironmentError(exc.stderr)
734
735 def git_config_username_and_email(
OSError: WARNING: 'git lfs clone' is deprecated and will not be updated
with new flags from 'git clone'
'git clone' has been updated in upstream Git to have comparable
speeds to 'git lfs clone'.
Cloning into '.'...
remote: Repository not found
fatal: repository 'https://huggingface.co/ahmad1289/xlm-roberta-base-finetuned-panx-de/' not found
Error(s) during clone:
git clone failed: exit status 128
`````
It appears that the model repository with the name `xlm-roberta-base-finetuned-panx-de` does not currently exist. However, as described in the [Hugging Face course](https://huggingface.co/course/en/chapter4/3?fw=pt), the `push_to_hub()` function (which should be used later in the notebook) handles both the creation of the repository and the push of the model and tokenizer files to that repository.
Is there anything else that I might be missing?
* [Full notebook](https://github.com/ahmad-alismail/NLP-with-Transformers/blob/master/4-nlp-with-transformers-multilingual-ner.ipynb)
### System info
```shell
- huggingface_hub version: 0.12.1
- Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- Running in iPython ?: Yes
- iPython shell: ZMQInteractiveShell
- Running in notebook ?: Yes
- Running in Google Colab ?: No
- Token path ?: /root/.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers:
- FastAI: 2.7.11
- Tensorflow: 2.11.0
- Torch: 1.13.0
- Jinja2: 3.1.2
- Graphviz: 0.8.4
- Pydot: 1.4.2
- Pillow: 9.3.0
- hf_transfer: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets
- HF_HUB_OFFLINE: False
- HF_TOKEN_PATH: /root/.cache/huggingface/token
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
{'huggingface_hub version': '0.12.1',
'Platform': 'Linux-5.15.89+-x86_64-with-debian-bullseye-sid',
'Python version': '3.7.12',
'Running in iPython ?': 'Yes',
'iPython shell': 'ZMQInteractiveShell',
'Running in notebook ?': 'Yes',
'Running in Google Colab ?': 'No',
'Token path ?': PosixPath('/root/.cache/huggingface/token'),
'Has saved token ?': False,
'Configured git credential helpers': '',
'FastAI': '2.7.11',
'Tensorflow': '2.11.0',
'Torch': '1.13.0',
'Jinja2': '3.1.2',
'Graphviz': '0.8.4',
'Pydot': '1.4.2',
'Pillow': '9.3.0',
'hf_transfer': 'N/A',
'ENDPOINT': 'https://huggingface.co',
'HUGGINGFACE_HUB_CACHE': '/root/.cache/huggingface/hub',
'HUGGINGFACE_ASSETS_CACHE': '/root/.cache/huggingface/assets',
'HF_HUB_OFFLINE': False,
'HF_TOKEN_PATH': '/root/.cache/huggingface/token',
'HF_HUB_DISABLE_PROGRESS_BARS': None,
'HF_HUB_DISABLE_SYMLINKS_WARNING': False,
'HF_HUB_DISABLE_IMPLICIT_TOKEN': False,
'HF_HUB_ENABLE_HF_TRANSFER': False}
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22134/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22081/events
|
https://github.com/huggingface/transformers/pull/22081
| 1,618,798,300
|
PR_kwDOCUB6oc5LwgMZ
| 22,081
|
Fix gradient checkpointing bug in switch transformer
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate()` with models that have gradient_checkpointing enabled.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22081/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22081",
"html_url": "https://github.com/huggingface/transformers/pull/22081",
"diff_url": "https://github.com/huggingface/transformers/pull/22081.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22081.patch",
"merged_at": 1678447868000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22080/events
|
https://github.com/huggingface/transformers/pull/22080
| 1,618,788,225
|
PR_kwDOCUB6oc5LweA8
| 22,080
|
Fix gradient checkpointing bug in Speecht5
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Sorry fixed that",
"_The documentation is not available anymore as the PR was closed or merged._",
"@KMFODA it is possible that this is the wrong branch -- the diff doesn't contain changes to speecht5 :)",
"Yes the diff seems to have changed a bit, can you please double check 🙏 @KMFODA ?",
"Sorry yes I think the branches were mixed up. This should now introduce the changes to SpeechT5 and fix the formatting issues @gante spotted in [modeling_speech_to_text.py](https://github.com/huggingface/transformers/pull/22080/commits/4da5f4baaf4eddb71787ba4dbbbd3791953e481a).",
"Perfect, thank you for the changes @KMFODA 💛 "
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22080/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22080",
"html_url": "https://github.com/huggingface/transformers/pull/22080",
"diff_url": "https://github.com/huggingface/transformers/pull/22080.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22080.patch",
"merged_at": 1678455369000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22079/events
|
https://github.com/huggingface/transformers/pull/22079
| 1,618,772,177
|
PR_kwDOCUB6oc5LwajK
| 22,079
|
Fix gradient checkpointing bug in Speech2Text
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry fixed that",
"@KMFODA your fork probably needs to pull from `main` :p "
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22079/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22079",
"html_url": "https://github.com/huggingface/transformers/pull/22079",
"diff_url": "https://github.com/huggingface/transformers/pull/22079.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22079.patch",
"merged_at": 1678447843000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22078/events
|
https://github.com/huggingface/transformers/pull/22078
| 1,618,662,075
|
PR_kwDOCUB6oc5LwCta
| 22,078
|
Generate - Fix broken documentation links
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,684
| 1,678
|
MEMBER
| null |
# What does this PR do?
Fixes #22077
`main_classes` is not on the same level as `generation_strategies`, hence the broken link.
EDIT: confirmed that it works in the CI docs.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22078/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22078",
"html_url": "https://github.com/huggingface/transformers/pull/22078",
"diff_url": "https://github.com/huggingface/transformers/pull/22078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22078.patch",
"merged_at": 1678454911000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22077/events
|
https://github.com/huggingface/transformers/issues/22077
| 1,618,597,527
|
I_kwDOCUB6oc5gedaX
| 22,077
|
Broken link in Documentation
|
{
"login": "datavistics",
"id": 22736772,
"node_id": "MDQ6VXNlcjIyNzM2Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/22736772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datavistics",
"html_url": "https://github.com/datavistics",
"followers_url": "https://api.github.com/users/datavistics/followers",
"following_url": "https://api.github.com/users/datavistics/following{/other_user}",
"gists_url": "https://api.github.com/users/datavistics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datavistics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datavistics/subscriptions",
"organizations_url": "https://api.github.com/users/datavistics/orgs",
"repos_url": "https://api.github.com/users/datavistics/repos",
"events_url": "https://api.github.com/users/datavistics/events{/privacy}",
"received_events_url": "https://api.github.com/users/datavistics/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @datavistics -- the problem is not present on `main`, but rather in the previous release (which we can't fix unless we make a new release) \r\n\r\nSee these docs: https://huggingface.co/docs/transformers/main/en/main_classes/text_generation",
"I still get the broken link here: https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationMixin.contrastive_search\r\n\r\nWhich links to v4.26.1 in one of the links.",
"Should be fixed by #22078 on main."
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
@sgugger
Where I started
https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/text_generation#transformers.GenerationMixin.contrastive_search
What doesn't exist
https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/generation_strategies
https://huggingface.co/docs/transformers/main/en/main_classes/generation_strategies
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Go to the links above
### Expected behavior
The pages would exist.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22077/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22076/events
|
https://github.com/huggingface/transformers/pull/22076
| 1,618,518,392
|
PR_kwDOCUB6oc5LvkX8
| 22,076
|
[Time-Series] fix past_observed_mask type
|
{
"login": "elisim",
"id": 17675462,
"node_id": "MDQ6VXNlcjE3Njc1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisim",
"html_url": "https://github.com/elisim",
"followers_url": "https://api.github.com/users/elisim/followers",
"following_url": "https://api.github.com/users/elisim/following{/other_user}",
"gists_url": "https://api.github.com/users/elisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisim/subscriptions",
"organizations_url": "https://api.github.com/users/elisim/orgs",
"repos_url": "https://api.github.com/users/elisim/repos",
"events_url": "https://api.github.com/users/elisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
  "Looked into the failing tests; it doesn't look related to the change (correct me if I'm wrong) 🙂",
"thanks! "
] | 1,678
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
small fix to make `past_observed_mask` of bool type in Informer and vanilla tests
@kashif
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22076/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22076",
"html_url": "https://github.com/huggingface/transformers/pull/22076",
"diff_url": "https://github.com/huggingface/transformers/pull/22076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22076.patch",
"merged_at": 1680527241000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22075/events
|
https://github.com/huggingface/transformers/issues/22075
| 1,618,505,000
|
I_kwDOCUB6oc5geG0o
| 22,075
|
[Time-Series] time-series patching
|
{
"login": "elisim",
"id": 17675462,
"node_id": "MDQ6VXNlcjE3Njc1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisim",
"html_url": "https://github.com/elisim",
"followers_url": "https://api.github.com/users/elisim/followers",
"following_url": "https://api.github.com/users/elisim/following{/other_user}",
"gists_url": "https://api.github.com/users/elisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisim/subscriptions",
"organizations_url": "https://api.github.com/users/elisim/orgs",
"repos_url": "https://api.github.com/users/elisim/repos",
"events_url": "https://api.github.com/users/elisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisim/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"PatchTST is in Gluon, thanks to @kashif . Closing here :) https://github.com/awslabs/gluonts/pull/2748",
"Oh I apologize that I just notice this issue. Somehow I haven't seen it previously... And I appreciate so much for your attention on the PatchTST! Anything I could do for it? \r\nAlso, glad to see the Gluonts application!",
"@yuqinie98 not a problem... i will get PatchTST added to transformers next",
"@yuqinie98 also note that the model on gluonts is not exactly the patchTST implementation as your paper (or the implementations in tsai and neuralforecast):\r\n - it doesn't carry the multivariate time series around but rather each entry in the batch is some random time series from the data\r\n - it is probabilistic \r\n - the model is given the mean and std of the window of each time series as input (a bit like the non-stationary transformer) \r\n - a lag variant (instead of `torch.unfold`) based on the `freq` is also implemented\r\n \r\n the points I raised in my email to you still stand with respect to the paper"
] | 1,678
| 1,692
| 1,686
|
CONTRIBUTOR
| null |
### Model description
"time-series patching" refers to the process of segmenting the series into subseries-level patches, which serve as input tokens to the transformer. It's really similar to what's done in ViT, but for time-series. This idea was first proposed in a recent ICLR paper:
[A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730)
code: https://github.com/yuqinie98/PatchTST
@kashif @NielsRogge
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
@yuqinie98
Edit: I think that "new model" is not the best label for this issue; maybe there is a better label for this?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22075/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22074
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22074/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22074/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22074/events
|
https://github.com/huggingface/transformers/pull/22074
| 1,618,477,425
|
PR_kwDOCUB6oc5Lvb5U
| 22,074
|
Fix hint in src/transformers/modeling_utils.py
|
{
"login": "J-shang",
"id": 33053116,
"node_id": "MDQ6VXNlcjMzMDUzMTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/33053116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/J-shang",
"html_url": "https://github.com/J-shang",
"followers_url": "https://api.github.com/users/J-shang/followers",
"following_url": "https://api.github.com/users/J-shang/following{/other_user}",
"gists_url": "https://api.github.com/users/J-shang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/J-shang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/J-shang/subscriptions",
"organizations_url": "https://api.github.com/users/J-shang/orgs",
"repos_url": "https://api.github.com/users/J-shang/repos",
"events_url": "https://api.github.com/users/J-shang/events{/privacy}",
"received_events_url": "https://api.github.com/users/J-shang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This type hint is a little strange, should it be `torch.device`?
The reason why I want to change it to `torch.device` is that this hint confused a graph trace tool I used.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22074/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22074/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22074",
"html_url": "https://github.com/huggingface/transformers/pull/22074",
"diff_url": "https://github.com/huggingface/transformers/pull/22074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22074.patch",
"merged_at": 1678456603000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22073
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22073/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22073/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22073/events
|
https://github.com/huggingface/transformers/pull/22073
| 1,618,455,637
|
PR_kwDOCUB6oc5LvXSL
| 22,073
|
Add TensorFlow Wav2Vec2 for sequence classification
|
{
"login": "nandwalritik",
"id": 48522685,
"node_id": "MDQ6VXNlcjQ4NTIyNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48522685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nandwalritik",
"html_url": "https://github.com/nandwalritik",
"followers_url": "https://api.github.com/users/nandwalritik/followers",
"following_url": "https://api.github.com/users/nandwalritik/following{/other_user}",
"gists_url": "https://api.github.com/users/nandwalritik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nandwalritik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nandwalritik/subscriptions",
"organizations_url": "https://api.github.com/users/nandwalritik/orgs",
"repos_url": "https://api.github.com/users/nandwalritik/repos",
"events_url": "https://api.github.com/users/nandwalritik/events{/privacy}",
"received_events_url": "https://api.github.com/users/nandwalritik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Kindly ping @sanchit-gandhi and adding @Rocketknight1 for the TensorFlow side.",
"Hi @nandwalritik, and sorry for the extremely long delay in catching this! Ordinarily one of the TF maintainers reviews TF pull requests, but this one slipped through the cracks somehow. If you want to file TF PRs in future, you can directly ping me or @gante to make sure that we don't miss it.\r\n\r\nThis PR actually looks almost perfect, but there are a couple of TF-specific details that are causing some tests to fail. I'll mark them in a code review in just a sec, but they shouldn't take too long to fix. Thanks again for submitting this!",
  "> \r\n\r\nfor `serving` and `serving_output` methods I added changes, but I'm not sure whether they are correct.",
"Hi @nandwalritik, I'm seeing the issue when you move it to `build()` - the problem is the weight name, as it usually is in our TensorFlow ports! TF isn't very consistent about the name scope used for weights, and it can differ depending on when the weight is created in the `init`, the `build` or lazily in the `call()`, which makes it tricky because we use the names to match weights between PT and TF models.\r\n\r\nI'll see if I can push a solution to your repo, hang on.",
"Ok",
"Try:\r\n\r\n```\r\nwith tf.name_scope(self._name_scope()):\r\n self.layer_weights = self.add_weight(\r\n shape=(self.num_layers,), initializer=\"ones\", trainable=True, name=\"layer_weights\"\r\n )\r\n```\r\nin the `__init__`, not the `build()`. I know that contradicts what I said earlier, but it turns out to be a bit different for a base model class than a sublayer.\r\n\r\nI also see a couple of other errors - you can see them by clicking the `Details` beside `tests_tf` in the checklist at the bottom of this PR. If you can't figure out what's causing them, ping me over the weekend or on Monday and I'll try to debug them!",
"> Try:\r\n> \r\n> ```\r\n> with tf.name_scope(self._name_scope()):\r\n> self.layer_weights = self.add_weight(\r\n> shape=(self.num_layers,), initializer=\"ones\", trainable=True, name=\"layer_weights\"\r\n> )\r\n> ```\r\n> \r\n> in the `__init__`, not the `build()`. I know that contradicts what I said earlier, but it turns out to be a bit different for a base model class than a sublayer.\r\n> \r\n> I also see a couple of other errors - you can see them by clicking the `Details` beside `tests_tf` in the checklist at the bottom of this PR. If you can't figure out what's causing them, ping me over the weekend or on Monday and I'll try to debug them!\r\n\r\nOk, so after adding this change, the weights are getting loaded without any warning or error, but the output of pytorch and tensorflow model doesn't have `rtol` of `1e-5`.\r\nAlthough I checked shape and absolute sum of tensors of both the models they are almost equal\r\n```\r\nPT model \r\n1,292,768 -> 29877.8750\r\n\r\n\r\n1,292,256 -> 29711.7109\r\n\r\npooled_output\r\n1,256 -> 38.7491\r\n\r\n\r\n\r\nTF model\r\n\r\nhidden_state\r\n1,292,768 -> 29877.879\r\n\r\n1,292,256 -> 29711.715\r\n\r\npooled_output\r\n1,256 -> 38.811996\r\n```\r\nWhat should i try next to satisfy rtol criteria.",
"Hm, those are some fairly large discrepancies! The debugging process we recommend when something like that happens is:\r\n\r\n- Make a test environment and load the PT and TF models with the same weights\r\n- Try to isolate the earliest point where the model outputs diverge. You can use options like `output_hidden_states` to get the model to return all hidden states, not just the final ones.\r\n- Once you find the first point of divergence, try to see if you can dig into the layer where the divergence happened. You can place breakpoints, or extract sublayers and try passing test inputs into them.\r\n- Eventually you will find the single specific place where the divergence creeps in - now you can check what the cause is. Make sure the weights for that operation really do match between the two frameworks, and make sure both frameworks are doing the same thing at that point.\r\n\r\nAs always, if you can't figure it out, let me know! This kind of work can be quite gruelling, but we really appreciate the work you're doing on the model port.",
"Hi @Rocketknight1 I added test cases and fixed the feed forward part, but the CI is failing due to `flax`, I think this might not be related to my changes. Please review the PR and let me know if any more changes are required. ",
"Yep, those flax issues are unrelated, just ignore them. I'll review everything today, but the CI looks good!",
"@sanchit-gandhi @Rocketknight1 let me know if any more changes are required or else can you guys get this pr merged.",
"Just looked over the last few changes - I'm happy to merge it at this point. Thanks again for putting in the work on this!"
] | 1,678
| 1,682
| 1,682
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #21778
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22073/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22073",
"html_url": "https://github.com/huggingface/transformers/pull/22073",
"diff_url": "https://github.com/huggingface/transformers/pull/22073.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22073.patch",
"merged_at": 1682512530000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22072
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22072/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22072/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22072/events
|
https://github.com/huggingface/transformers/pull/22072
| 1,618,440,655
|
PR_kwDOCUB6oc5LvUMH
| 22,072
|
Enable traced model for text-generation task
|
{
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22072). All of your documentation changes will be reflected on that endpoint.",
"@sgugger please help review",
"@gante Could you have a first look at the changes in generate?",
"@gante Hi, Gante. Thanks for your delicate comment, it's reasonable and I agree with it.\r\nHere I have two solutions:\r\n1. For `trace_graph` in the main body of `generate`, we can add a doc to explain `trace_graph` with details, including what it is and how to implement it, and how it helps accelerate inference; For tensor manipulation, the method of preparing input tensors for `trace_graph` is general for text-generation task across all kinds of models. It can also adapt to any task easily with a few changes(it is in progress) instead of a specific use case. We can put this method on utils in general.\r\n2. As you said, we can redefine `prepare_inputs_for_generation` for both inputs and `model.trace_graph` outputs. However, redefining `model.prepare_inputs_for_generation()` is not a general way since different model classes have different functions of `prepare_inputs_for_generation()`, and it is not convenient to inherit different model classes every time we changed the type of model.\r\n\r\nI strongly recommend the first way. There are many ways to optimize `model.forward`, if we can support the attribute `trace_graph` in the main body of `generate`, it will be convenient for users to pass their custom models.\r\n\r\nBTW, you set `return_dict=True` in the main body of generate, so it would not work if I set `return_dict=False` in the `.from_pretrain`. Could I remove this so the users can decide whether or not to return the dictionary by themselves?\r\n\r\nThanks!\r\n",
"@jiqing-feng Thank you for your comment. \r\n\r\nTo clarify my position further, in an attempt to find a solution that pleases us all: from the `transformers` perspective, our current priority is the ease of use and experimentation. We also welcome performance-enhancing solutions like the one in the PR, but they must fulfill one of three requirements: (i) they are commonly requested by the community; (ii) they require minimal changes to existing functionality; (iii) the benefits of the new technique are very big, like int8 quantization. If we don't adhere to these principles, the codebase will quickly be unusable and hard to maintain, as there are many possible strategies to improve the code.\r\n\r\nFrom my perspective, I haven't seen any request for `torch.jit` support in `.generate()`, and I get tagged in pretty much everything `.generate()`-related. This PR also includes a diff of 50 lines to existing functions in `utils.py` and the benefit is up to 20% speedup. This means that, according to the principles stated above, I'm afraid can't support the changes as they are 🤗 \r\n\r\nThis doesn't mean that my perspective is static on the subject! I've suggested above what can be done to showcase `torch.jit` in the example. That is a way to increase the visibility of the technique, which may increase the community demand for it -- and, if this demand does materialize, I'd be more than happy to include the additional logic in `utils.py`.\r\n\r\nI apologize if this is not the answer you'd like to read, but we do have to be picky with the changes we introduce in actively maintained cross-model functionality. I'm also working towards increasing the modularity of `.generate()`, so that use cases like yours can be more easily added!\r\n",
"Just my +1 , generation speed improvement, especially with torch 2.0 is something very nice for make the model production ready",
"Yes, echo. W/ PyTorch 2.0 introduced, suppose we will see more and more performance benefit out of jit for deployment."
] | 1,678
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
@sywangyi
Enable traced model for text-generation task.
I changed beam_search and greedy_search in generation to support traced models. If a traced model has been set on the attribute "trace_graph", then model.trace_graph is used for the forward pass. I also changed the text-generation example and found that a model optimized by jit trace performs better on the text-generation task. The data, measured on an A100, is as below:
model: gptj-6b
beam search: input_tokens=32, output_tokens=32, num_beam=4
data type: bf16
original model's latency: 0.96s
jit trace model's latency: 0.72s
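The dispatch this PR proposes can be illustrated with a minimal, framework-free sketch (plain Python; the `trace_graph` attribute name comes from the PR description, everything else here is illustrative):

```python
# Minimal sketch of the proposed dispatch inside generate():
# if the user attached a traced forward as `model.trace_graph`,
# prefer it; otherwise fall back to the eager forward.
class Model:
    def forward(self, x):
        return x + 1  # stand-in for the real eager forward pass

def run_forward(model, x):
    traced = getattr(model, "trace_graph", None)
    return traced(x) if traced is not None else model.forward(x)

m = Model()
print(run_forward(m, 1))  # 2 -- no traced graph attached, eager path

m.trace_graph = lambda x: x + 1  # stand-in for a torch.jit.trace'd module
print(run_forward(m, 41))  # 42 -- traced path is used
```

In the real PR the attached object would be the output of `torch.jit.trace`, but the attribute-based dispatch is the same.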
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22072/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22072",
"html_url": "https://github.com/huggingface/transformers/pull/22072",
"diff_url": "https://github.com/huggingface/transformers/pull/22072.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22072.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22071/events
|
https://github.com/huggingface/transformers/issues/22071
| 1,618,370,944
|
I_kwDOCUB6oc5gdmGA
| 22,071
|
Why does the Bloom tokenizer place the padding tokens at the head when padding to max_length?
|
{
"login": "svjack",
"id": 27874014,
"node_id": "MDQ6VXNlcjI3ODc0MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svjack",
"html_url": "https://github.com/svjack",
"followers_url": "https://api.github.com/users/svjack/followers",
"following_url": "https://api.github.com/users/svjack/following{/other_user}",
"gists_url": "https://api.github.com/users/svjack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svjack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svjack/subscriptions",
"organizations_url": "https://api.github.com/users/svjack/orgs",
"repos_url": "https://api.github.com/users/svjack/repos",
"events_url": "https://api.github.com/users/svjack/events{/privacy}",
"received_events_url": "https://api.github.com/users/svjack/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"That's because BLOOM is a generative model, and when using `generate`, padding should go on the left for better results. This is thus the default behavior of its tokenizer.",
"> That's because BLOOM is a generative model, and when using `generate`, padding should go on the left for better results. This is thus the default behavior of its tokenizer.\r\n\r\nIs there a built-in, simple API method to change this behavior?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
As the title says, the following snippet:
```python
native_tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m",
use_fast=False)
caption = "a bear in the woods."
tokenized_data = native_tokenizer(
caption,
return_tensors="pt",
padding='max_length',
truncation=True,
max_length=56)
tokens = tokenized_data.input_ids[0]
tokens
```
produces:
```python
tensor([ 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 68, 50507, 361, 368,
165526, 17])
```
It pads the pad_token_id "3" at the head, not the tail.
This is different from other models.
Why does this occur?
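Why left padding is the default for decoder-only models can be sketched without `transformers` at all (plain Python; the pad id 3 and the token ids are taken from the output above):

```python
# Decoder-only models like BLOOM generate by continuing from the *last*
# position of the input, so that position must hold a real token --
# hence left padding by default.
PAD = 3
tokens = [68, 50507, 361, 368, 165526, 17]  # "a bear in the woods."
max_length = 10

def pad(seq, side):
    pads = [PAD] * (max_length - len(seq))
    return pads + seq if side == "left" else seq + pads

left = pad(tokens, "left")
right = pad(tokens, "right")
print(left[-1])   # 17 -> last slot is a real token; generation continues cleanly
print(right[-1])  # 3  -> last slot is padding; next-token logits would be off
```

If right padding is really wanted, Hugging Face tokenizers expose this as an attribute: `tokenizer.padding_side = "right"` (or pass `padding_side="right"` to `from_pretrained`).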
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22071/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22070/events
|
https://github.com/huggingface/transformers/issues/22070
| 1,618,309,442
|
I_kwDOCUB6oc5gdXFC
| 22,070
|
Custom dataset builder for multichannel 'float32' hyperspectral images?
|
{
"login": "petteriTeikari",
"id": 1060514,
"node_id": "MDQ6VXNlcjEwNjA1MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1060514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petteriTeikari",
"html_url": "https://github.com/petteriTeikari",
"followers_url": "https://api.github.com/users/petteriTeikari/followers",
"following_url": "https://api.github.com/users/petteriTeikari/following{/other_user}",
"gists_url": "https://api.github.com/users/petteriTeikari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petteriTeikari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petteriTeikari/subscriptions",
"organizations_url": "https://api.github.com/users/petteriTeikari/orgs",
"repos_url": "https://api.github.com/users/petteriTeikari/repos",
"events_url": "https://api.github.com/users/petteriTeikari/events{/privacy}",
"received_events_url": "https://api.github.com/users/petteriTeikari/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This question might be better suited for the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### Feature request
Is there an easy way to train e.g. `ViTMAE` using hyperspectral images (more than 3 "color" channels), and could (or is there already) a best practice on how to load all the images with `tifffile` (would return `np.ndarray` 3D cubes per tiff file) instead of the typical `PIL`?
### Motivation
I wanted to quickly test the [ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae) [1,2] (as it allows _n_ > 3 channels) on an existing hyperspectral dataset (which has 58 channels instead of the typical 3 for RGB, e.g. `np_array.shape = (58, 48, 48)` with low spatial resolution) and ran into several pain points.
Like:
1) `dataset = load_dataset("imagefolder", data_dir=base_dir)` is super simple, but it creates a standard PIL-based dataset, whereas I wanted to use `tifffile` to load my hyperspectral files, which gives me 3D arrays/tensors instead of having to rely on ["multipage hacks" with PIL](https://stackoverflow.com/questions/18602525/python-pil-for-loop-to-work-with-multi-image-tiff)
2) Tried to create a custom "loading script" based on the [`Food101` script](https://huggingface.co/datasets/food101/blob/main/food101.py), and wrote an own `Cube()` class instead of the standard `Image()` class. That pretty much just replaced `image = PIL.Image.open(path)` with `image = tifffile.imread(path)`
and then my `_generate_examples()` returns this `yield abs_file_path, {"image": tifffile.imread(abs_file_path).astype('uint8'), "label": label}`
which results in this error: `TypeError('Unsupported array dtype float64 for image encoding. Only uint8 is supported for multi-channel arrays.')`, as PIL prefers `uint8` types whereas my data is `float32` since it comes from my custom preprocessing script.
**Summary**
So I could not really find a good example of how to define loaders for new types of data (or does everything always go back to PIL?)
**References**
i.e., to eventually have a more industry-standard implementation of these, or similar:
[1] [Ibañez et al. (2022)](https://doi.org/10.1109/TGRS.2022.3217892): "Masked Auto-Encoding Spectral–Spatial Transformer for Hyperspectral Image Classification"
[2] [Xu et al. (2023)](https://arxiv.org/abs/2212.13805): "Swin MAE: Masked Autoencoders for Small Datasets"
### Your contribution
I don't have working code for training these and was wondering whether there even is an easy way. One probably needs to be careful with some of the `Transforms` if they only support 3 color channels
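For the `uint8` encoding error mentioned above, one common workaround is to rescale each cube into [0, 255] before casting. This is a sketch under the assumption that lossy rescaling is acceptable (plain Python lists stand in for an ndarray; for lossless float storage a different feature type, e.g. an array feature, would be needed):

```python
# Rescale float data into [0, 255] so it can be cast to uint8 for
# PIL-style image encoding. Lossy: absolute float values are not preserved.
def to_uint8(values):  # values: nested lists of floats (stand-in for ndarray)
    flat = [v for row in values for v in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[int(round((v - lo) * scale)) for v in row] for row in values]

print(to_uint8([[0.0, 0.5], [1.0, 0.25]]))  # [[0, 128], [255, 64]]
```

With NumPy the same per-cube normalization would be `((arr - arr.min()) / (arr.max() - arr.min()) * 255).astype("uint8")`.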
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22070/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22069/events
|
https://github.com/huggingface/transformers/pull/22069
| 1,618,307,367
|
PR_kwDOCUB6oc5Lu34z
| 22,069
|
Fix position embeddings for GPT-J and CodeGen
|
{
"login": "njhill",
"id": 16958488,
"node_id": "MDQ6VXNlcjE2OTU4NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/16958488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njhill",
"html_url": "https://github.com/njhill",
"followers_url": "https://api.github.com/users/njhill/followers",
"following_url": "https://api.github.com/users/njhill/following{/other_user}",
"gists_url": "https://api.github.com/users/njhill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njhill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njhill/subscriptions",
"organizations_url": "https://api.github.com/users/njhill/orgs",
"repos_url": "https://api.github.com/users/njhill/repos",
"events_url": "https://api.github.com/users/njhill/events{/privacy}",
"received_events_url": "https://api.github.com/users/njhill/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I have tested this with my own code/usecase but wanted to check that there is interest in the contribution before also updating any applicable unit tests.\r\n\r\nI also wonder whether there should be a universal test applied to all models that just tests the same input with different amounts of padding and makes sure that the output is identical?",
"@njhill and yes, the contribution is deeply appreciated! 🙏 \r\n\r\nBe mindful that this will not result in making the outputs left-padding agnostic. As in all models, the padding is a numerical mask. In FP32, it is almost left-padding agnostic, but in FP16/BF16/INT8 the left-padding may introduce changes :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"@gante I didn't review this PR, but I see this is related to issue #21080, and therefore to PR #21853 indirectly, which was reverted in #22093 due to some unexpected tests failure (PT/TF, PT/Flax).\r\n\r\nSo before merging this PR, it's better to verify the cross tests, as well as the slow tests too (always better). \r\n\r\nThe PR CI for #21853 was green (and also green when merged to `main`), but some tests started to fail in subsequent PRs. It's unclear to us why we didn't catch these in the PR CI though. ",
"Thanks for the heads up @ydshieh! 🙏 \r\n\r\nI'll make sure all related slow tests (and the tests that failed after merging #21853 ) are passing before merging.",
"Thanks @gante ... I'm kind of new to this but will figure out how to verify/update the tests per your request.\r\n\r\nThe main problem I've run into though is newly-failing `torch.fx` tracing [tests](https://app.circleci.com/pipelines/github/huggingface/transformers/59686/workflows/0fe3ef82-316a-482d-802d-d245028b2bf6/jobs/730191/parallel-runs/0/steps/0-111):\r\n```\r\nFAILED tests/models/gptj/test_modeling_gptj.py::GPTJModelTest::test_torch_fx - AssertionError: Couldn't trace module: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors\r\nFAILED tests/models/gptj/test_modeling_gptj.py::GPTJModelTest::test_torch_fx_output_loss - AssertionError: Couldn't trace module: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors\r\n```\r\n\r\nI've tried some different variations to the logic but always end up with similar kind of errors. I think it may stem from the [index_select](https://github.com/huggingface/transformers/pull/22069/files#diff-61155574bf9c9669ccdfdf7dd508a5979b4e4915cc95f7ff4a63fee05a0e2715R209) operation. Any pointers/ideas would be appreciated!",
"Hey @njhill 👋 \r\n\r\nI've tried to fix the issue you mentioned with no success. It seems like we are between a rock and a hard place -- the changes you made, by design, make `sincos` dependent on the values of `position_ids`. In other words, `sincos` becomes a tensor impossible to predict at compile time with `torch.fx`, i.e. dynamic tensor. Ultimately, no matter how we rewrite the code (AFAIK), we will hit this barrier, causing the test to fail.\r\n\r\n@sgugger @fxmarty is there a way we can make `torch.fx` ignore a function? (or do you have suggestions?) The change in this PR makes GPT-J correct in the presence of left-padding, but breaks compatibility with `torch.fx` 🙈 \r\n\r\n(Pastebin containing the code with modifications, working through the exceptions until I got stuck: https://pastebin.com/T0HpD07C)",
"Also cc @michaelbenayoun for torch fx.",
"Thanks @gante, it sounds like you followed a similar path to me w.r.t. trying different arrangements of the logic to get around this. I was guessing this couldn't be the only occurrence of this dynamic tensor issue in the library - is dynamic slicing done elsewhere and if so how does it work with `torch.fx`?",
"Hi @njhill,\r\n\r\nThe issue here (from what I could understand from [this](https://app.circleci.com/pipelines/github/huggingface/transformers/59686/workflows/0fe3ef82-316a-482d-802d-d245028b2bf6/jobs/730191/parallel-runs/0/steps/0-111)), seems to be that during tracing we do not have regular tensors but rather symbolic \"proxies\".\r\n\r\nIn the following code we are trying to call `__iter__` on `sincos` which is symbolic, we do not know its length (again, not 100% sure but guessing).\r\n\r\n```python\r\nsincos = [t.contiguous() for t in sincos]\r\n```\r\n\r\nBut the previous line is :\r\n```python\r\nsincos = torch.split(sincos, sincos.shape[-1] // 2, dim=-1)\r\n```\r\nMeaning that the list has:\r\n\r\n- 2 elements if `sincos.shape[-1]` is an even number\r\n- 3 elements if `sincos.shape[-1]` is an odd number.\r\n\r\nSo could you try this:\r\n\r\n```python\r\nsincos = torch.split(sincos, sincos.shape[-1] // 2, dim=-1)\r\nlen_sincos = 2 + torch.remainder(torch.tensor(sincos.shape[-1], 2))\r\nsincos = [sincos[idx].contiguous() for idx in torch.arange(len_sincos)]\r\n```\r\n\r\nTell me if this works!",
"Thanks @michaelbenayoun. You are right that this seems to be the fact that a symbolic proxy tensor is introduced somewhere, however I think that this stems from the tensor-based indexing here:\r\n```python\r\nsincos = embed_positions[position_ids]\r\n```\r\nThe proxy iterator errors are easy to circumvent but just move the problem until later where (inevitably?) the size of the proxy tensor is used for flow control. I've pushed a couple of small updates to the PR to demonstrate this... you can see the latest error in the tests [here](https://app.circleci.com/pipelines/github/huggingface/transformers/59903/workflows/203a46e5-1bc2-4ab0-9085-4992384db930/jobs/733587). As @gante pointed out above:\r\n> Ultimately, no matter how we rewrite the code (AFAIK), we will hit this barrier, causing the test to fail.\r\n\r\nCould we at least make this path conditional such that it isn't followed in the `torch.fx` case, i.e. declare that variable padding is unsupported in that case?",
"Hey @njhill -- I think the conditional path is a sensible idea, at least for now (we can always revisit it later). #22161 reports a similar problem on another demanded model, so I would like to merge the fix as soon as possible 🤗 \r\n\r\nFor context, other places in the `transformers` do this sort of conditional paths for `torch.fx`. Check [here](https://github.com/huggingface/transformers/blob/f7329751fe5c43365751951502c00df5a4654359/src/transformers/models/t5/modeling_t5.py#L845) for an example.",
"@njhill The HF tracer is supposed to keep track of \"concrete\" metadata during tracing to allow for that. \r\nIn this case, either this does not work with `len`, which is possible (I do not remember tbh), or it means than an op does not support the meta device, hence breaking the concrete metadata accumulation.\r\n\r\nSince in this case you are trying to check the rank of the tensor, could you try replacing `len(tensor.shape)` by `tensor.ndim`?",
"Thanks @michaelbenayoun .. the `len` problem can be avoided by adding `torch.fx.wrap('len')`, which I'd done in the prior commit but removed in this latest commit since it seemed futile (just moving the error slightly later). So I was instead attempting to bypass the position_ids fix in the `torch.fx` case per [this comment](https://github.com/huggingface/transformers/pull/22069#issuecomment-1469690762) (so far unsuccessfully).\r\n\r\nThe problem encountered after working around the `len` problem can be seen [here](https://app.circleci.com/pipelines/github/huggingface/transformers/59903/workflows/203a46e5-1bc2-4ab0-9085-4992384db930/jobs/733587):\r\n```\r\n> if len(tensor.shape) == 5:\r\n\r\nAssertionError: Couldn't trace module: symbolically traced variables cannot be used as inputs to control flow\r\n```\r\nbasically this traced length value is then used in a control flow condition.",
"@gante @michaelbenayoun I've got torch.fx to work with the changes now by using `torch.gather` instead of tensor based indexing and adding a couple of new tensor methods to the metadata tracking in `fx.py`.\r\n\r\nAlso rebased on latest main branch since some other CI tests started to fail I think related to a recently-merged unrelated change.\r\n\r\nI will look into the requested additional tests next when I get a chance.",
"For our future reference, here's a snippet that shows that left-padding is fixed with these changes:\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\ntok = AutoTokenizer.from_pretrained(\"EleutherAI/gpt-j-6B\", padding_side=\"left\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"EleutherAI/gpt-j-6B\", torch_dtype=torch.bfloat16).to(0)\r\ntok.pad_token = tok.eos_token\r\nmodel.generation_config.pad_token_id = model.generation_config.eos_token_id\r\n\r\ninputs_1 = tok([\"The brown fox\"], return_tensors=\"pt\", padding=True).to(0)\r\nout_1 = model(**inputs_1)\r\nout_2 = model(**inputs_1)\r\n\r\nposition_ids = torch.cumsum(inputs_1.attention_mask, dim=-1) - 1\r\nout_3 = model(**inputs_1, position_ids=position_ids + 8)\r\n\r\ninputs_2 = tok([\"The brown fox\"], return_tensors=\"pt\", padding=\"max_length\", max_length=10).to(0)\r\nout_4 = model(**inputs_2)\r\n\r\nposition_ids = torch.cumsum(inputs_2.attention_mask, dim=-1) - 1\r\nposition_ids.masked_fill_(inputs_2.attention_mask == 0, 1)\r\nout_5 = model(**inputs_2, position_ids=position_ids)\r\n\r\n# calls with the same inputs get the same logits\r\nprint(torch.max(torch.abs(out_1.logits[:, -1, :] - out_2.logits[:, -1, :]))) # tensor(0., device='cuda:0', grad_fn=<MaxBackward1>)\r\n\r\n# changing the position_ids changes the logits\r\nprint(torch.max(torch.abs(out_1.logits[:, -1, :] - out_3.logits[:, -1, :]))) # tensor(0.0625, device='cuda:0', grad_fn=<MaxBackward1>)\r\n\r\n# padding and not passing position ids -> incorrect position ids -> output differences\r\nprint(torch.max(torch.abs(out_1.logits[:, -1, :] - out_4.logits[:, -1, :]))) # tensor(0.0625, device='cuda:0', grad_fn=<MaxBackward1>)\r\n\r\n# left-padding has a much smaller impact (NOTE: setting e.g. 
`max_length=20` will cause the next diff to be non-zero.\r\n# Numerical masking is not perfect :) )\r\nprint(torch.max(torch.abs(out_1.logits[:, -1, :] - out_5.logits[:, -1, :]))) # tensor(0., device='cuda:0', grad_fn=<MaxBackward1>)\r\n```",
"The failing CI was fixed in [this merged PR](https://github.com/huggingface/transformers/pull/22298), merging.",
"@njhill fantastic work with the `torch.fx`, I really appreciated your effort 🤗 ",
"Thanks @gante, glad I was able to contribute. Thank you for your fast responses and for all the great work you and team do.",
"This PR isn't backward compatible. It breaks with pytorch-1.8:\r\n\r\n```\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/gptj/modeling_gptj.py\", line 63, in <module>\r\nE @torch.fx.wrap\r\nE AttributeError: module 'torch' has no attribute 'fx'\r\n```\r\n\r\nnot sure if you want to revert this or have an idea how to overcome this quickly. ",
"> This PR isn't backward compatible. It breaks with pytorch-1.8:\r\n> \r\n> ```\r\n> E File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/gptj/modeling_gptj.py\", line 63, in <module>\r\n> E @torch.fx.wrap\r\n> E AttributeError: module 'torch' has no attribute 'fx'\r\n> ```\r\n> \r\n> not sure if you want to revert this or have an idea how to overcome this quickly.\r\n\r\n@stas00 \r\n\r\nFYI, see #22291, although that PR and this PR is not directly related from the beginning when they are opened.",
"ok, the deepspeed CI is running pt-1.8 - how do we solve that then?",
"> ok, the deepspeed CI is running pt-1.8 - how do we solve that then?\r\n\r\nI just saw\r\n\r\nhttps://github.com/microsoft/DeepSpeed/pull/3082\r\n\r\nopened 2 hours ago. I am not sure what will go, but I will try to follow tomorrow morning.",
"oh, ok, I guess everything is fine then. thank you for the heads up, @ydshieh ",
"it still fails with pt-1.9.1\r\n\r\n1. you need `import torch.fx` (thanks @mrwyattii)\r\n\r\n2. it then fails with:\r\n\r\n```\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/gptj/modeling_gptj.py\", line 61, in create_sinusoidal_positions\r\nE return torch.concat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1)\r\nE AttributeError: module 'torch' has no attribute 'concat'\r\n```",
"Oops, I guess we should use `torch.cat()` instead",
"and it fails w/o `import torch.fx`\r\n\r\n```\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_clm.py\", line 412, in main\r\nE model = AutoModelForCausalLM.from_pretrained(\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py\", line 470, in from_pretrained\r\nE model_class = _get_model_class(config, cls._model_mapping)\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py\", line 360, in _get_model_class\r\nE supported_models = model_mapping[type(config)]\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py\", line 602, in __getitem__\r\nE return self._load_attr_from_module(model_type, model_name)\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py\", line 616, in _load_attr_from_module\r\nE return getattribute_from_module(self._modules[module_name], attr)\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py\", line 561, in getattribute_from_module\r\nE if hasattr(module, attr):\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/utils/import_utils.py\", line 1109, in __getattr__\r\nE module = self._get_module(self._class_to_module[name])\r\nE File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/utils/import_utils.py\", line 1121, in _get_module\r\nE raise RuntimeError(\r\nE RuntimeError: Failed to import transformers.models.gptj.modeling_gptj because of the following error (look up to see its traceback):\r\nE module 'torch' has no attribute 'fx'\r\n\r\n```\r\n\r\nso 2 fixes at least. thank you!",
"I confirm that it works with `torch.cat`\r\n\r\nperhaps use `torch.concat` but add an alias:\r\n\r\n```\r\n# bc for pt<1.10\r\nif not getattr(torch, \"concat\"):\r\n torch.concat = torch.cat\r\n``` \r\nstashed somewhere in utils?\r\n",
"`import torch.fx` is a must - even with pt-1.10 it won't work w/o it.",
"@njhill, are you on top of fixing this?\r\n\r\nThis is a bit urgent since Deepspeed CI uses our bleed edge to test deepspeed bleed edge on live CI. and currently their CI breaks because of this breakage."
] | 1,678
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
# What does this PR do?
Identical inputs to GPT-J and CodeGen models will currently generate different outputs if they are padded differently (for example in a batch of variable sequence lengths).
This PR reverts the recent change #21869 that removed GPT-J `position_ids`, then applies changes similar to those made for GPT-J XLA in #17986.
~One copy of the precomputed position embeddings is shared between all of the layers.~
Related issue: #21080
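The padding sensitivity can be illustrated with a pure-Python sketch of the usual attention-mask cumsum trick (this mirrors the idea behind the fix, not the PR's actual diff; the function name is illustrative):

```python
def position_ids_from_mask(attention_mask):
    """Derive per-token position ids from a 0/1 attention mask so that
    left padding does not shift the positions of the real tokens."""
    position_ids, next_pos = [], 0
    for m in attention_mask:
        if m:
            position_ids.append(next_pos)
            next_pos += 1
        else:
            position_ids.append(1)  # placeholder for padded slots; they are masked out anyway
    return position_ids

# Real tokens keep positions 0, 1, 2 whether or not the batch adds left padding,
# which is why identically prompted sequences generate identical outputs after the fix.
print(position_ids_from_mask([1, 1, 1]))
print(position_ids_from_mask([0, 0, 1, 1, 1]))
```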
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22069/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22069/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22069",
"html_url": "https://github.com/huggingface/transformers/pull/22069",
"diff_url": "https://github.com/huggingface/transformers/pull/22069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22069.patch",
"merged_at": 1679483695000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22068/events
|
https://github.com/huggingface/transformers/pull/22068
| 1,618,253,047
|
PR_kwDOCUB6oc5Lusqx
| 22,068
|
Fix small typo in flan-ul2.mdx
|
{
"login": "kevin51jiang",
"id": 33907581,
"node_id": "MDQ6VXNlcjMzOTA3NTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/33907581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevin51jiang",
"html_url": "https://github.com/kevin51jiang",
"followers_url": "https://api.github.com/users/kevin51jiang/followers",
"following_url": "https://api.github.com/users/kevin51jiang/following{/other_user}",
"gists_url": "https://api.github.com/users/kevin51jiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevin51jiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevin51jiang/subscriptions",
"organizations_url": "https://api.github.com/users/kevin51jiang/orgs",
"repos_url": "https://api.github.com/users/kevin51jiang/repos",
"events_url": "https://api.github.com/users/kevin51jiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevin51jiang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Force-merging this without the tests as I'm 99.9% sure everything will be fine, but for future PRs, note there is an issue with your CircleCI permissions and the tests won't run.\r\nYou will need to refresh your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)."
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Saw a small typo while reading. Not sure why the last line shows as changed; I used GitHub's web UI and just modified "Resources".
Thanks for making this library!
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22068/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22068/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22068",
"html_url": "https://github.com/huggingface/transformers/pull/22068",
"diff_url": "https://github.com/huggingface/transformers/pull/22068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22068.patch",
"merged_at": 1678452286000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22067/events
|
https://github.com/huggingface/transformers/pull/22067
| 1,618,242,501
|
PR_kwDOCUB6oc5Luqdf
| 22,067
|
Fixed docstring formatting
|
{
"login": "koullouros",
"id": 67971682,
"node_id": "MDQ6VXNlcjY3OTcxNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/67971682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/koullouros",
"html_url": "https://github.com/koullouros",
"followers_url": "https://api.github.com/users/koullouros/followers",
"following_url": "https://api.github.com/users/koullouros/following{/other_user}",
"gists_url": "https://api.github.com/users/koullouros/gists{/gist_id}",
"starred_url": "https://api.github.com/users/koullouros/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/koullouros/subscriptions",
"organizations_url": "https://api.github.com/users/koullouros/orgs",
"repos_url": "https://api.github.com/users/koullouros/repos",
"events_url": "https://api.github.com/users/koullouros/events{/privacy}",
"received_events_url": "https://api.github.com/users/koullouros/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22067). All of your documentation changes will be reflected on that endpoint."
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
# What does this PR do?
Fixes the docstring formatting for the Whisper model.
Fixes #22052
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Issue #22052
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22067/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22067",
"html_url": "https://github.com/huggingface/transformers/pull/22067",
"diff_url": "https://github.com/huggingface/transformers/pull/22067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22067.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22066/events
|
https://github.com/huggingface/transformers/issues/22066
| 1,618,098,131
|
I_kwDOCUB6oc5gcjfT
| 22,066
|
Add Canine Model Config to AutoModelForCausalLM
|
{
"login": "itsbvk",
"id": 57064909,
"node_id": "MDQ6VXNlcjU3MDY0OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/57064909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itsbvk",
"html_url": "https://github.com/itsbvk",
"followers_url": "https://api.github.com/users/itsbvk/followers",
"following_url": "https://api.github.com/users/itsbvk/following{/other_user}",
"gists_url": "https://api.github.com/users/itsbvk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itsbvk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itsbvk/subscriptions",
"organizations_url": "https://api.github.com/users/itsbvk/orgs",
"repos_url": "https://api.github.com/users/itsbvk/repos",
"events_url": "https://api.github.com/users/itsbvk/events{/privacy}",
"received_events_url": "https://api.github.com/users/itsbvk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nCANINE doesn't support causal attention. It can only be used as an encoder.",
"Thanks @NielsRogge for pointing that out. Is there then, any pre-trained language model similar to that of canine that processes the tokens at unicode character level. i.e. the tokenizer basically does \r\n```\r\ntokens = [ord(c) for c in string]\r\n```",
"You can leverage the decoder of [ByT5](https://huggingface.co/docs/transformers/model_doc/byt5), which is a byte-based model.",
"@NielsRogge I think ByT5 while it does have the tokenization the way I wanted, it still cannot be used by the VisualEncoderDecoder API of hugging face - using the snippet like shown below:\r\n```\r\nfrom transformers import ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel, ByT5Config # this does not exist\r\n# taken from https://huggingface.co/docs/transformers/v4.26.1/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig.example\r\nconfig_encoder = ViTConfig()\r\nconfig_decoder = ByT5Config() # this is what is desired.\r\nconfig = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)\r\nmodel = **VisionEncoderDecoderModel(config=config)**\r\n```\r\n\r\nTrying something like the following:\r\n\r\n```\r\nfrom transformers import VisionEncoderDecoderModel\r\nved = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(\r\n \"google/vit-base-patch16-224-in21k\", 'google/byt5-small'\r\n)\r\n```\r\nThrows up the following `ValueError`\r\n\r\n```\r\nValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForCausalLM.\r\nModel type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CodeGenConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, GitConfig, GPT2Config, GPT2Config, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, MarianConfig, MBartConfig, MegatronBertConfig, MvpConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig.\r\n```\r\nDo any of the above models 
listed have tokenization at Byte level or character level - so that it can be used by the VisualEncoderDecoderModel API provided by 🤗.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @khadiravana-belagavi this is because T5/ByT5 is an encoder-decoder model. You would only need the decoder to combine it with a vision encoder. The vision encoder-decoder framework doesn't work out-of-the-box with T5/ByT5 at the moment as this would require us to define a new class that includes only the decoder + a language modeling head on top. \r\n\r\nHence I'd recommend defining this class yourself and then provide it as decoder argument when instantiating a `VisionEncoderDecoderModel` class. The class could roughly look like this:\r\n``` \r\nfrom transformers.models.t5.modeling_t5 import T5PreTrainedModel, T5Stack\r\n\r\nclass T5DecoderOnlyForCausalLM(T5PreTrainedModel):\r\n\r\n def __init__(self, config):\r\n self.shared = nn.Embedding(config.vocab_size, config.d_model)\r\n self.decoder = T5Stack(config, self.shared)\r\n self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False)\r\n```\r\nThen you can instantiate the model as follows:\r\n```\r\nfrom transformers import VisionEncoderDecoderModel, ViTModel\r\n\r\nencoder = ViTModel.from_pretrained(\"google/vit-base-patch16-224\")\r\ndecoder = T5DecoderOnlyForCausalLM.from_pretrained(\"t5-base\")\r\n\r\nmodel = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)\r\n```\r\nOne would also need to check whether the weights of the decoder are properly instantiated, the draft above probably won't load the weights correctly.",
"Thanks @NielsRogge for the detailed response. However, I think this issue is still relevant, as although Canine is not a CausalLM, Bert is not as well. And the class `BertLMHeadModel` adds the necessary components for finetuning on CLM task. Or is there anything specific to Canine - because canine is also a pretrained on a similar MLM task.",
"@khadiravana-belagavi BERT can be adapted to be used as decoder (by simply using a causal attention mask rather than a bidirectional one). CANINE on the other hand cannot simply be adapted to work as decoder since it uses a different architecture composed of 3 Transformers.",
"Hi @NielsRogge, I have been intending to use ByT5 as decoder too and I am getting the same error. Thanks for providing with the method to so.\r\n`from transformers.models.t5.modeling_t5 import T5PreTrainedModel, T5Stack\r\n\r\nclass T5DecoderOnlyForCausalLM(T5PreTrainedModel):\r\n\r\n def __init__(self, config):\r\n self.shared = nn.Embedding(config.vocab_size, config.d_model)\r\n self.decoder = T5Stack(config, self.shared)\r\n self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False)`\r\n\r\n`from transformers import VisionEncoderDecoderModel, ViTModel\r\n\r\nencoder = ViTModel.from_pretrained(\"google/vit-base-patch16-224\")\r\ndecoder = T5DecoderOnlyForCausalLM.from_pretrained(\"t5-base\")\r\n\r\nmodel = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)`\r\n\r\nCan you provide with the detailed code or any reference so that I can accurately create the complete T5DecoderOnlyForCausalLM class and the weights of the decoder are properly instantiated."
] | 1,678
| 1,695
| 1,681
|
NONE
| null |
### Feature request
Kindly add a class such as https://github.com/huggingface/transformers/blob/a9bd5df16a46356463f2712dd8f6c109fa83d6f9/src/transformers/models/bert/modeling_bert.py#L1161
for the [Canine Model](https://huggingface.co/docs/transformers/model_doc/canine).
Basically, in the list of models available for CausalLM provided [here](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM), the [canine](https://huggingface.co/docs/transformers/model_doc/canine) model isn't listed. Kindly add it.
### Motivation
Currently unable to experiment with CanineConfig LM decoder using [this](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig.example) api.
Snippet of code used:
```
from transformers import ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel, CanineConfig
# taken from https://huggingface.co/docs/transformers/v4.26.1/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig.example
config_encoder = ViTConfig()
config_decoder = CanineConfig()
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = VisionEncoderDecoderModel(config=config)
```
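A defensive check before building the composite model can replace the opaque `ValueError` with a readable one. The mapping below is a hand-copied illustrative subset taken from the error message in the comments, NOT queried from transformers, and the helper name is hypothetical:

```python
# Illustrative subset of model types that AutoModelForCausalLM accepts as decoders;
# hand-copied for the sketch, not an API call into transformers.
CAUSAL_LM_MODEL_TYPES = {"bert", "gpt2", "gptj", "gpt_neo", "opt", "roberta", "xlnet"}

def check_decoder_supported(model_type):
    """Fail fast with a readable message instead of a deep ValueError."""
    if model_type not in CAUSAL_LM_MODEL_TYPES:
        raise ValueError(
            f"'{model_type}' has no causal-LM head; pick a decoder from "
            f"{sorted(CAUSAL_LM_MODEL_TYPES)}"
        )

check_decoder_supported("gpt2")    # fine
# check_decoder_supported("canine")  # would raise: no causal-LM head
```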
### Your contribution
Not yet, currently.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22066/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22065/events
|
https://github.com/huggingface/transformers/pull/22065
| 1,618,060,950
|
PR_kwDOCUB6oc5LuEGG
| 22,065
|
Fix imports of TF MobileViT
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"It's only the type hints that are wrong, not the actual init.",
"> It's only the type hints that are wrong, not the actual init.\r\n\r\nYeah, just figure it out after I posted. Deleted it but you are too fast to answer",
"Failures are related to Hub being down, so no blocker to merge."
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Small cleanup in the main init.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22065/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22065",
"html_url": "https://github.com/huggingface/transformers/pull/22065",
"diff_url": "https://github.com/huggingface/transformers/pull/22065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22065.patch",
"merged_at": 1678477595000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22064/events
|
https://github.com/huggingface/transformers/issues/22064
| 1,617,973,622
|
I_kwDOCUB6oc5gcFF2
| 22,064
|
BLIP2 hangs after loading shards, no errors
|
{
"login": "thely",
"id": 4604094,
"node_id": "MDQ6VXNlcjQ2MDQwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4604094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thely",
"html_url": "https://github.com/thely",
"followers_url": "https://api.github.com/users/thely/followers",
"following_url": "https://api.github.com/users/thely/following{/other_user}",
"gists_url": "https://api.github.com/users/thely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thely/subscriptions",
"organizations_url": "https://api.github.com/users/thely/orgs",
"repos_url": "https://api.github.com/users/thely/repos",
"events_url": "https://api.github.com/users/thely/events{/privacy}",
"received_events_url": "https://api.github.com/users/thely/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"hi @thely \r\nThanks for the issue, it might be indeed a CPU related issue but this is hard to tell , I'd give a try by loading a model with `low_cpu_mem_usage=True`:\r\n\r\n```python\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-opt-2.7b\", torch_dtype=torch.float16, cache_dir=cachedir, low_cpu_mem_usage=True)\r\n```\r\n\r\nI would also give it a try with `accelerate` + `8-bit` since it enables loading the model with less memory requirements:\r\nFirst:\r\n```bash\r\npip install accelerate bitsandbytes\r\n```\r\nThen:\r\n```python\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-opt-2.7b\", device_map=\"auto\", load_in_8bit=True)\r\n```",
"@younesbelkada I don't know what about your comment helped me, but it helped me realize the problem wasn't in transformers, or BLIP2.\r\n\r\nIn the initial run of this code on torch 1.10.0, *something* about the config (Pillow? torch? python?) was printing lines regularly as the code progressed. After the change to torch 1.12.0, which changed both the active Python version from 3.8.x to 3.9.x and the Pillow version from 8.x to 9.x, I wasn't shown any print statements until *all* the activity had completed – image loading, running through BLIP, output, etc. So I guess it wasn't hanging, I just didn't get to know that anything was happening until the very end. Not sure if it's something about Python 3.9.x scheduling print statements differently, but I'm leaving this here in case it helps someone else.\r\n\r\nFor my sanity, I fixed it by running through 100 folders at a time.",
"Awesome! Thank you for the update @thely !"
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
python: 3.9.13
torch: 1.12.0+cu113
transformers: 4.27.0.dev0
Note: I'm on an HPC, running everything through SLURM. I'm not privy to what kind of CPU I'm using.
CPU: Unknown
GPU: NVIDIA A100
GPU memory: 40GB
### Who can help?
@ArthurZucker @younesbelkada @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The original version of this code was borrowed from the zero-shot inference article on Medium, then expanded for a larger set of images.
* Inference on a single image with BLIP2 runs fine.
* Inferencing a single folder of 84 images also runs fine.
* My actual data set is a folder of about 2k folders with 20-84 images in each, and here's where the problems are happening. I have them separated into folders by company to make it easier to label them all; I'm using this output to feed into Stable Diffusion later.
```
> train2
> > company1
> > > image1.png
> > > image2.png
> > company2
> > > image1.png
> > > image2.png
> > > etc
```
What's happening is that the checkpoint shards for the model will load, and then hang, forever, on this line:
```
Loading checkpoint shards: 100%|██████████| 2/2 [00:25<00:00, 12.90s/it]
```
I'm not training or fine-tuning, just trying to run normal inference. It won't error out, either, nor will I get some kind of OOM error from SLURM. It just stays forever. Running `allocations` (which tracks how many hours I've used for jobs sent via SLURM to the HPC) also isn't incrementing time for these jobs at all, which makes me think there's some error I can't see. (Though if I check `squeue`, the time on the job itself is still ticking up, but that time isn't getting applied to my overall time limit somehow.)
I can't tell if this is because of some secret OOM error, because I'm working with about 7GB of image files. I attempted batch inference a few weeks ago, but it wasn't working at the time.
The single image version of the BLIP2 inference code *is* working correctly, though, and typically finishes before I can even `tail -f` the log file. I have both pieces of code below for reference.
Code that's not working first, inferencing a folder full of folders full of images:
```py
from transformers import AutoProcessor, Blip2ForConditionalGeneration
import torch
from PIL import Image
import glob
import json

print("finished imports")

# big list of brand names, printing keys to make sure anything works before the shards make everything hang
folder = ".../inputs/"
brands = {}
with open(folder + "full_brands.json") as jsonf:
    brands = json.load(jsonf)
keys = sorted(brands.keys())
print(keys[0:10])

cachedir = ".../hfcache"
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b", cache_dir=cachedir)
print("processor good")

try:
    model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, cache_dir=cachedir)
# neither the below error, nor the else statement will ever print. we hang here.
except Exception as err:
    print(err)
else:
    print("blip ready")
print("model loaded")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

retval = []

# I haven't reached this section of the code in about a day, but it's here for reference in case this is what's making things hang
for slug in keys:
    print(slug)
    bname = brands[slug]["name"]
    image_files = glob.glob(folder + "/train2/" + slug + "/*.png")
    images = []
    for x in range(len(image_files)):
        try:
            images.append(Image.open(image_files[x]).convert("RGBA"))
        except Exception:
            print("image non-functional")

    for i in range(len(images)):
        print(".", end="")
        image = images[i]
        prompt = "an image of " + bname + " with"
        inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16)
        generated_ids = model.generate(**inputs, max_new_tokens=20)
        generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
        desc = prompt + " " + generated_text
        retval.append({"file_name": image_files[i], "text": desc})
        print(desc)

with open(folder + "blip_output.json", "w") as jsonf:
    json.dump(retval, jsonf, indent=2)
```
Code that is working second, inference on a single image:
```py
import requests
from PIL import Image
from transformers import AutoProcessor, Blip2ForConditionalGeneration
import torch
path = ".../newyorker.jpg"
image = Image.open(path).convert('RGBA')
print(image)
cachedir = ".../hfcache"
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b", cache_dir=cachedir)
print("processor loaded")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, cache_dir=cachedir)
print("model loaded")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
print("cuda invoked")
inputs = processor(image, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
```
### Expected behavior
I'd think BLIP/transformers would either error out or continue to the rest of the code. I wish I knew what was going on.
As another point of reference, the top chunk of code *was* working yesterday on torch 1.10.0 and transformers 4.26.1, but between then and now, something about torch got updated such that torch 1.10.0 wasn't working with the A100 GPUs. (I was getting the "no binary exists for this device" error.) When I had to move up to torch 1.12.0, `Blip2ForConditionalGeneration` no longer existed, so I had to bump up to transformers 4.27.0.dev0, and here we are now.
But the smaller code *is* still working. So I don't know what the impact of all those images is on the file itself, but since the code never reaches the point where it *could* load the images, I don't understand how this is happening.
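Per the resolution in the comments (the apparent hang turned out to be buffered output plus processing everything at once), a hedged sketch of the two mitigations — flushed progress prints and chunked folder processing. The slugs below are placeholders, not the real brand folders:

```python
def chunks(items, size):
    """Yield successive fixed-size slices of a sequence."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Placeholder slugs standing in for the ~2k brand folders.
keys = [f"company{i}" for i in range(250)]

for n, batch in enumerate(chunks(keys, 100)):
    # flush=True makes progress visible immediately even when stdout is
    # block-buffered (e.g. under SLURM), so a slow run isn't mistaken for a hang.
    print(f"batch {n}: {len(batch)} folders", flush=True)
```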
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22064/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22063/events
|
https://github.com/huggingface/transformers/pull/22063
| 1,617,875,847
|
PR_kwDOCUB6oc5LtcDN
| 22,063
|
Add a new script to check model testers' config
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Request a review from @amyeroberts too - to celebrate her new core maintainer role 🔥 🚀 \r\n",
"> I think that's way too specific\r\n\r\nI agree that `100` is way to specific.\r\n\r\n> and will require lots of exceptions\r\n\r\nNot really a lot of exceptions. So far, there are `245` errors given by this new check.\r\n- `max_position_embeddings`: 78 places --> this is better to be changed to smaller values\r\n- `hidden_size` or `xxx_dim`: 41 places --> also better to be changed\r\n- `xxx_size` (other than `hidden_size`): 39 places --> I guess better to change\r\n- `xxx_token_id`: 19 places --> this could be skipped (they look weird, but it doesn't really matter)\r\n- **we can skip other places for now**\r\n\r\n> It would be more helpful to have something in a PR added a new model that would extract the time the tests took (as reported in the artifacts) and put it somewhere clearly visible, for instance the comment regarding the documentation.\r\n\r\nThe `test_torch` job of PR #20775 (Add BridgeTower model) took `37` minutes, while it took about `33` minutes on nightly run one day before. So yes, this might be a good way to look. However:\r\n- even if we show the timing, **without the timing of the previous (full) run**, no one really knows if we should look into the timing/speed issue, and we/contributor just would not pay attention\r\n - try to grab (automatically) **without the timing of the previous (full) run** seems to me not a super easy task\r\n- we will have to identity PRs that are new model addition PRs (use diff tool?) + we will have to show in PR comments\r\n - This, plus the necessity to grab the previous running time, seems to me requiring much more work than just simply fix things I mentioned in the first part above\r\n\r\nSo let me continue a bit and see what would happen!\r\n\r\n\r\n",
"I also don't think this check is super valuable as it will add another burden on the contributors for something that really only affects us (we're the ones to handle CI, memory, timeouts, etc.).\r\n\r\nI think aiming for tiny tiny models is great, but it's not the end of the world if a few slip through the cracks which we have to correct after.",
"> it will add another burden on the contributors\r\n\r\nYeah, that's a super valid point. I don't feel strong, so more than happy to close the PR.\r\n\r\nBut still give you a rough numbers: \r\n\r\n> if a few slip through the cracks which we have to correct after.\r\n\r\ncurrently there are `245` places this check identified, a few examples\r\n\r\n```bash\r\nconfig[\"max_speech_positions\"] = 4000 which is too large for testing!\r\nconfig[\"num_block_records\"] = 13353718 which is too large for testing!\r\nconfig[\"max_2d_position_embeddings\"] = 1024 which is too large for testing!\r\n```\r\n\r\nWithout the check, more such cases will accumulate (but most of them are not as extreme as `BridgeModelTest` . Also maybe we will only have to deal them after a long long long period of time.\r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"Well, although we are not going to add a check on PR CI, I think it might be handy if we add the script - just in case when we need to perform housekeeping. It's not complex this new script, but always better if it is already there when we need it.\r\n\r\nI also change the check to check just a few major attributes for now.\r\n\r\n@sgugger @LysandreJik Let me know if you are happy with this addition (without it being added to CI workflows)."
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Add a new check to verify that model testers produce tiny configs.
The objective is to ensure no test runs with large configuration values (as happened with `BridgeTowerModelTest(er)`), except for the integration or slow tests.
The check is not added to the PR/daily CI workflows: we don't want to add more burden for contributors.
### The effect
<img width="720" alt="Screenshot 2023-03-09 212533" src="https://user-images.githubusercontent.com/2521628/224147644-62fb5f9c-60e1-4801-b257-08d9679cb06b.png">
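The check described above can be sketched roughly as follows. Note this is a hypothetical illustration, not the actual script: the attribute names, the threshold, and the message format are assumptions (the message format mimics the examples quoted in the discussion).

```python
# Hypothetical sketch of a tiny-config check; names and threshold are assumed.
CHECKED_KEYS = ("hidden_size", "max_position_embeddings", "vocab_size")
THRESHOLD = 512  # assumed cutoff for a "tiny" testing config


def check_tiny_config(config):
    """Return error strings for config values that are too large for tests."""
    errors = []
    for key, value in config.items():
        if key in CHECKED_KEYS and isinstance(value, int) and value > THRESHOLD:
            errors.append(f'config["{key}"] = {value} which is too large for testing!')
    return errors


errs = check_tiny_config({"hidden_size": 4096, "num_layers": 2})
print(errs)
```

Running such a check over all model testers is what surfaces the 245 places mentioned in the review discussion.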
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22063/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22063",
"html_url": "https://github.com/huggingface/transformers/pull/22063",
"diff_url": "https://github.com/huggingface/transformers/pull/22063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22063.patch",
"merged_at": 1678731079000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22062/events
|
https://github.com/huggingface/transformers/pull/22062
| 1,617,873,655
|
PR_kwDOCUB6oc5LtbjO
| 22,062
|
Add a progress bar for the total download of shards
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm not observing any changes with this branch:\r\n\r\nI tried:\r\n```\r\npython -c 'import sys; from transformers import AutoModel; AutoModel.from_pretrained(sys.argv[1], revision=\"sharded\")' t5-11b\r\n```\r\n\r\nafter deleting the cache and not getting a new overall progress bar. \r\n\r\nAm I doing something wrong or have some wrong dependencies?",
"Woops, wrong check for the file being cached. Can you try again?",
"It works great! Thank you, Sylvain!\r\n\r\nAs you can see it moves the outer progress bar with the inner bar, so it doesn't matter how many shards there are as I was concerned with 72 bloom shards.\r\n\r\n\r\n\r\n\r\n"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
This PR adds a feature requested in #22047 and fixes a small bug I encountered while testing it.
**The feature**: a new progress bar is added when loading a sharded checkpoint that gives the overall progress.
**The bug**: when passing along `force_download=True`, the files were not re-downloaded if cached, because an early return in `cached_file` handed back the cached file before the flag was checked.
Fixes #22047
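The cache bug follows a common pattern; a minimal sketch (with a hypothetical in-memory cache, not the actual `cached_file` implementation) of an early return that must also consult `force_download`:

```python
# Illustrative sketch of the force_download bug; the cache here is a plain
# dict standing in for the on-disk Hub cache.
CACHE = {}


def download(path):
    CACHE[path] = f"fresh:{path}"
    return CACHE[path]


def cached_file(path, force_download=False):
    # Buggy version: `if path in CACHE: return CACHE[path]` ignored the flag.
    # Fixed version also checks force_download before the early return:
    if path in CACHE and not force_download:
        return CACHE[path]
    return download(path)


cached_file("model.bin")
CACHE["model.bin"] = "stale:model.bin"  # simulate an outdated cache entry
result = cached_file("model.bin", force_download=True)
```

With the extra check, `force_download=True` bypasses the cache and fetches a fresh copy, which is the behavior restored by this PR.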
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22062/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22062",
"html_url": "https://github.com/huggingface/transformers/pull/22062",
"diff_url": "https://github.com/huggingface/transformers/pull/22062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22062.patch",
"merged_at": 1678399083000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22061/events
|
https://github.com/huggingface/transformers/issues/22061
| 1,617,746,977
|
I_kwDOCUB6oc5gbNwh
| 22,061
|
Assistance Exporting git-large to ONNX
|
{
"login": "gracemcgrath",
"id": 46832828,
"node_id": "MDQ6VXNlcjQ2ODMyODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/46832828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gracemcgrath",
"html_url": "https://github.com/gracemcgrath",
"followers_url": "https://api.github.com/users/gracemcgrath/followers",
"following_url": "https://api.github.com/users/gracemcgrath/following{/other_user}",
"gists_url": "https://api.github.com/users/gracemcgrath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gracemcgrath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gracemcgrath/subscriptions",
"organizations_url": "https://api.github.com/users/gracemcgrath/orgs",
"repos_url": "https://api.github.com/users/gracemcgrath/repos",
"events_url": "https://api.github.com/users/gracemcgrath/events{/privacy}",
"received_events_url": "https://api.github.com/users/gracemcgrath/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Questions around conversion to ONNX should go in the [optimum repo](https://github.com/huggingface/optimum) as this is where the feature is actually implemented :-)",
"> Questions around conversion to ONNX should go in the [optimum repo](https://github.com/huggingface/optimum) as this is where the feature is actually implemented :-)\r\n\r\nThank you!! I will post this there."
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
Hello! I am looking to export an image captioning Hugging Face model to ONNX (specifically I was playing with the [git-large](https://huggingface.co/microsoft/git-large) model but if anyone knows of one that might be easier to deal with in terms of exporting that is great too)
I'm trying to follow [these](https://huggingface.co/docs/transformers/serialization#exporting-a-model-for-an-unsupported-architecture) instructions for exporting an unsupported architecture, and I am a bit stuck on figuring out what base class to inherit from and how to define the custom ONNX configuration, since I'm not sure what examples to look at (the model card says this is a transformer decoder model, but it looks to me like it has both an encoder and a decoder, so I am a bit confused).
I also found [this](https://github.com/huggingface/notebooks/blob/main/examples/onnx-export.ipynb) notebook but I am again not sure if it would work with this sort of model.
Any comments, advice, or suggestions would be so helpful -- I am feeling a bit stuck with how to proceed in deploying this model in the school capstone project I'm working on. In a worst-case scenario, can I use `from_pretrained` in my application?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22061/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22060/events
|
https://github.com/huggingface/transformers/pull/22060
| 1,617,667,590
|
PR_kwDOCUB6oc5LsuUC
| 22,060
|
Skip 3 tests for `WhisperEncoderModelTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Skip 3 tests for `WhisperEncoderModelTest`
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22060/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22060",
"html_url": "https://github.com/huggingface/transformers/pull/22060",
"diff_url": "https://github.com/huggingface/transformers/pull/22060.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22060.patch",
"merged_at": 1678385364000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22059/events
|
https://github.com/huggingface/transformers/issues/22059
| 1,617,602,654
|
I_kwDOCUB6oc5gaqhe
| 22,059
|
Adam Weight Decay Rate does not hear the opinion of tf.stop_gradient
|
{
"login": "arivero",
"id": 43174,
"node_id": "MDQ6VXNlcjQzMTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/43174?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arivero",
"html_url": "https://github.com/arivero",
"followers_url": "https://api.github.com/users/arivero/followers",
"following_url": "https://api.github.com/users/arivero/following{/other_user}",
"gists_url": "https://api.github.com/users/arivero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arivero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arivero/subscriptions",
"organizations_url": "https://api.github.com/users/arivero/orgs",
"repos_url": "https://api.github.com/users/arivero/repos",
"events_url": "https://api.github.com/users/arivero/events{/privacy}",
"received_events_url": "https://api.github.com/users/arivero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is a standard behaviour of all TF optimizers - `tf.stop_gradient()` stops gradient \"flowing\" back through that operation, but the optimizer is not aware of this control step, and so it simply sees weights with a gradient of 0 and applies weight decay to them as normal (in fact, it probably also applies updates to them from the residual momentum in the optimizer).\r\n\r\nIf you want to exclude weights from being updated entirely, you can set `layer.trainable=False`, which you're already doing. However, it sounds like you want to selectively mask different weights in each step, and you definitely can't set properties like `layer.trainable` in the `call()` function.\r\n\r\nIn those circumstances, probably the best solution is to override `train_step()` instead and apply the optimizer step to only unmasked weights at each stage (note that this might prevent XLA compilation!) Alternatively, you could just use a standard optimizer that doesn't have weight decay.\r\n\r\nSoft prompt tuning is definitely something we could investigate adding as a feature, though! What kind of interface do you think would work for it?",
"I see. Was not sure if it was a bug, just as you say, one needs to know what the standard behaviour is. \r\n\r\nShould I open a feature request for soft prompt? It would definitely be an interesting addition to the toolset, and there is a lot of tripwires that one can trigger if made in an ad-hoc way. This was not the only one :-)",
"cc @gante to that one - I believe he's been looking at soft prompting in generation!",
"Hey @arivero 👋 If I got it right, you would be interested in soft-prompting as in passing post-embedding values to the model (and not train on a few masked tokens). \r\n\r\nOur models support an `input_embeds` input, which is mutually exclusive with `input_ids` and is meant as post-embeddings `input_ids`. Would using this input instead solve your problem? I believe you would be able to handle all masking operations outside the model :)"
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@gante @Rocketknight1 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I use a gpt2 variant.
`model = TFGPT2LMHeadModel.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne", from_pt=True)`
where I freeze all the transformer weights except wte
`for i in range(36): model.transformer.h[i].trainable=False`
and then monkey-patch the wte layer to stop gradient propagation for only some tokens. For instance:
```
import tensorflow as tf
import transformers
from transformers.tf_utils import shape_list


def call(self, inputs: tf.Tensor, mode: str = "embedding") -> tf.Tensor:
    # stop the gradient for the masked rows of the embedding matrix
    w = tf.stop_gradient(self.maskEmbeddings * self.weight) + (1 - self.maskEmbeddings) * self.weight
    if mode == "embedding":
        return tf.gather(w, inputs)
    elif mode == "linear":
        first_dims = shape_list(inputs)[:-1]
        x = tf.reshape(inputs, [-1, self.hidden_size])
        logits = tf.matmul(x, w, transpose_b=True)
        return tf.reshape(logits, first_dims + [self.vocab_size])
    else:
        raise ValueError(f"mode {mode} is not valid.")


transformers.modeling_tf_utils.TFSharedEmbeddings.call = call
```
The key line here is `tf.stop_gradient(self.maskEmbeddings * self.weight)`.
Now I provide a mask matrix via `model.transformer.wte.maskEmbeddings = maskEmbeddings` and run the training as usual with the `AdamWeightDecay` optimizer. I check the changes with a checksum:
```
def printChecksum(model):
    frozenWeights = model.transformer.wte.maskEmbeddings * model.transformer.wte.weight
    check = tf.reduce_sum(frozenWeights, axis=0)
    print(check)


printChecksum(model)
```
When `weight_decay_rate` is different from 0.0, the optimizer applies decay to the masked weights as well.
### Expected behavior
I would expect the weights that have opted out of gradient updates via `tf.stop_gradient` to remain unaltered even if `weight_decay_rate` is not zero.
On the other hand, it is true that the weight decay is not part of the gradient, so it would have to be stopped by other means. Surely the right path is to implement support for soft prompt training at a higher level.
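A minimal numpy sketch of why this happens: with decoupled weight decay (as in `AdamWeightDecay`), the decay term is applied directly to the variable and never passes through the gradient, so `tf.stop_gradient` cannot block it. The numbers below are purely illustrative.

```python
import numpy as np

w = np.array([1.0, 1.0])
grad = np.array([0.0, 0.3])  # stop_gradient zeroes the "frozen" entry's gradient
lr, wd = 0.01, 0.1

# Decoupled weight decay (AdamWeightDecay-style): the decay term is applied
# directly to every variable, independently of its gradient
w = w - lr * grad - lr * wd * w

# The "frozen" weight still shrank, even though its gradient was zero
print(w[0])  # 0.999
```

This matches the observed checksum drift: zeroing the gradient is not enough, because the decay update never consults the gradient at all.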
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22059/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22058/events
|
https://github.com/huggingface/transformers/pull/22058
| 1,617,596,879
|
PR_kwDOCUB6oc5Lse8C
| 22,058
|
Update tiny model creation script
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Main changes:
- **Better error messages**, including the option to save the traceback to the report files
- Add `UNCONVERTIBLE_MODEL_ARCHITECTURES`, so models that cannot be converted to tiny versions are skipped (and we get cleaner reports)
- Add a `build_tiny_model_summary` method, which produces the entries we might want to add to `tests/utils/tiny_model_summary.json` (for pipeline testing purposes)
- (we might want to remove this file in the future)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22058/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22058",
"html_url": "https://github.com/huggingface/transformers/pull/22058",
"diff_url": "https://github.com/huggingface/transformers/pull/22058.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22058.patch",
"merged_at": 1678388034000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22057/events
|
https://github.com/huggingface/transformers/pull/22057
| 1,617,453,242
|
PR_kwDOCUB6oc5Lr_xS
| 22,057
|
rm $ symbol from code block from contributing.md
|
{
"login": "kamalkraj",
"id": 17096858,
"node_id": "MDQ6VXNlcjE3MDk2ODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamalkraj",
"html_url": "https://github.com/kamalkraj",
"followers_url": "https://api.github.com/users/kamalkraj/followers",
"following_url": "https://api.github.com/users/kamalkraj/following{/other_user}",
"gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions",
"organizations_url": "https://api.github.com/users/kamalkraj/orgs",
"repos_url": "https://api.github.com/users/kamalkraj/repos",
"events_url": "https://api.github.com/users/kamalkraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamalkraj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
Removed the $ symbol from the code block to make copy-pasting easier.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22057/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22057",
"html_url": "https://github.com/huggingface/transformers/pull/22057",
"diff_url": "https://github.com/huggingface/transformers/pull/22057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22057.patch",
"merged_at": 1678378187000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22056/events
|
https://github.com/huggingface/transformers/issues/22056
| 1,617,438,238
|
I_kwDOCUB6oc5gaCYe
| 22,056
|
Difference in the architecture of openai whisper and huggingface whisper
|
{
"login": "hannan72",
"id": 8229163,
"node_id": "MDQ6VXNlcjgyMjkxNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hannan72",
"html_url": "https://github.com/hannan72",
"followers_url": "https://api.github.com/users/hannan72/followers",
"following_url": "https://api.github.com/users/hannan72/following{/other_user}",
"gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hannan72/subscriptions",
"organizations_url": "https://api.github.com/users/hannan72/orgs",
"repos_url": "https://api.github.com/users/hannan72/repos",
"events_url": "https://api.github.com/users/hannan72/events{/privacy}",
"received_events_url": "https://api.github.com/users/hannan72/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @hannan72!\r\n\r\nWe register the activation function here (`activation_fn` in the state dict):\r\nhttps://github.com/huggingface/transformers/blob/fdf84096565b8d2e15de35ac0cd86818c4b12adb/src/transformers/models/whisper/modeling_whisper.py#L468\r\n\r\nAnd then apply it directly after the first feedforward layer (`fc1`):\r\nhttps://github.com/huggingface/transformers/blob/fdf84096565b8d2e15de35ac0cd86818c4b12adb/src/transformers/models/whisper/modeling_whisper.py#L556\r\n\r\nIf you check the config for Whisper, you'll see that this activation function defaults to GELU:\r\nhttps://github.com/huggingface/transformers/blob/fdf84096565b8d2e15de35ac0cd86818c4b12adb/src/transformers/models/whisper/configuration_whisper.py#L106\r\n\r\nSo the two are entirely equivalent 👍 OpenAI just register the GELU in a sequential block, we register it standalone. But both apply it in the same place.",
"Thank you a lot @sanchit-gandhi for your clear answer!"
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
I was comparing the whisper-medium which I got from OpenAI directly (from https://github.com/openai/whisper.git) with the HuggingFace whisper-medium.
In the decoder part of the model, both have 24 decoder blocks, but there is a difference in the block architecture between OpenAI's and Hugging Face's.
As shown below, in the OpenAI Whisper decoder there is a GELU activation function between the two linear layers (in the mlp block), but in the Hugging Face Whisper decoder there is no GELU activation function between the fc1 and fc2 blocks.

Given this difference, does the Hugging Face Whisper model behave the same as OpenAI's?
Or is this an actual functional difference between the two whisper-medium models?
@sanchit-gandhi
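For illustration, here is a minimal PyTorch sketch (not the actual Whisper code; the layer sizes are made up) showing that registering the GELU inside an `nn.Sequential` block and applying it functionally between the two linear layers produce identical outputs:

```python
# Minimal sketch (hypothetical layer sizes, not the real Whisper modules)
# comparing OpenAI's style (GELU registered inside an nn.Sequential MLP block)
# with Hugging Face's style (activation applied functionally between fc1 and fc2).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_model, d_ff = 8, 32
fc1 = nn.Linear(d_model, d_ff)
fc2 = nn.Linear(d_ff, d_model)

# OpenAI-style: the activation is a registered module inside a Sequential block
mlp_sequential = nn.Sequential(fc1, nn.GELU(), fc2)

x = torch.randn(2, d_model)
out_sequential = mlp_sequential(x)

# Hugging Face-style: the activation is applied directly between the two layers
out_functional = fc2(F.gelu(fc1(x)))

print(torch.allclose(out_sequential, out_functional))  # True
```

Both apply the GELU in the same place; only where the activation is registered differs.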
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22056/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22055/events
|
https://github.com/huggingface/transformers/pull/22055
| 1,617,351,816
|
PR_kwDOCUB6oc5Lrpp9
| 22,055
|
pt-to-tf model architecture override
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
MEMBER
| null |
This PR adds an extra arg to the `pt-to-tf` conversion script. We've seen a few uploaded models where the `config.json` doesn't specify the model class and the script autodetects the wrong one, which means some weights are not converted. This argument lets you override the autodetection and specify a model class to use for the conversion.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22055/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22055",
"html_url": "https://github.com/huggingface/transformers/pull/22055",
"diff_url": "https://github.com/huggingface/transformers/pull/22055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22055.patch",
"merged_at": 1678376190000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22054/events
|
https://github.com/huggingface/transformers/pull/22054
| 1,617,329,286
|
PR_kwDOCUB6oc5LrkwN
| 22,054
|
Show the number of `huggingface_hub` warnings in CI report
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Show the number of `huggingface_hub` warnings in CI report, as discussed in #22051
Will be shown like below
<img width="488" alt="Screenshot 2023-03-09 134910" src="https://user-images.githubusercontent.com/2521628/224050100-90c1477a-c33f-485e-85e1-ec648cbb4f91.png">
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22054/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22054/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22054",
"html_url": "https://github.com/huggingface/transformers/pull/22054",
"diff_url": "https://github.com/huggingface/transformers/pull/22054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22054.patch",
"merged_at": 1678372745000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22053/events
|
https://github.com/huggingface/transformers/issues/22053
| 1,616,947,617
|
I_kwDOCUB6oc5gYKmh
| 22,053
|
WhisperTimeStampLogitsProcessor error while using Whisper pipelines. Was WhisperTimeStampLogitsProcessor used?
|
{
"login": "melihogutcen",
"id": 43522440,
"node_id": "MDQ6VXNlcjQzNTIyNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/43522440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/melihogutcen",
"html_url": "https://github.com/melihogutcen",
"followers_url": "https://api.github.com/users/melihogutcen/followers",
"following_url": "https://api.github.com/users/melihogutcen/following{/other_user}",
"gists_url": "https://api.github.com/users/melihogutcen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/melihogutcen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/melihogutcen/subscriptions",
"organizations_url": "https://api.github.com/users/melihogutcen/orgs",
"repos_url": "https://api.github.com/users/melihogutcen/repos",
"events_url": "https://api.github.com/users/melihogutcen/events{/privacy}",
"received_events_url": "https://api.github.com/users/melihogutcen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Narsil as this might follow the latest update of `return_timestamps`",
"Do you have the faulty sample too? I cannot reproduce with a dummy file.\r\n\r\n@ArthurZucker it does look like the last token is indeed not a timestamp, but it could possibly be linked to batching?",
"I'm using this audio https://github.com/frankiedrake/demo/blob/master/whisper_test.wav to test with your script. ",
"You can use this full script for testing. I uploaded an English sound to GitHub. By using this, you can try it too. \r\n\r\n```\r\nfrom six.moves.urllib.request import urlopen\r\nimport io\r\nimport numpy as np\r\nimport soundfile as sf\r\nfrom transformers import pipeline\r\n\r\nsound_link = \"https://github.com/melihogutcen/sound_data/blob/main/accidents_resampled.wav?raw=true\"\r\ndata, sr = sf.read(io.BytesIO(urlopen(sound_link).read()))\r\n\r\nsound_arr_first_ch1 = np.asarray(data, dtype=np.float64)\r\naudio_in_memory_ch1 = {\"raw\": sound_arr_first_ch1,\r\n \"sampling_rate\": 16000}\r\n\r\nMODEL_NAME = \"openai/whisper-large-v2\"\r\n\r\npipe = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model=MODEL_NAME,\r\n device='cuda:0')\r\n\r\nresults_pipe_ch1 = pipe(audio_in_memory_ch1, return_timestamps=True, chunk_length_s=30,\r\n stride_length_s=[6, 0], batch_size=32,\r\n generate_kwargs = {\"language\":\"<|en|>\",\r\n \"task\": \"transcribe\"})\r\nprint(results_pipe_ch1[\"text\"])\r\nprint(results_pipe_ch1)\r\n\r\n```\r\n\r\nError as below.\r\n\r\n```\r\n warnings.warn(\r\nTraceback (most recent call last):\r\n File \"/SpeechToText/whisper_trials.py\", line 21, in <module>\r\n results_pipe_ch1 = pipe(audio_in_memory_ch1, return_timestamps=True, chunk_length_s=30,\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 272, in __call__\r\n return super().__call__(inputs, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1101, in __call__\r\n return next(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py\", line 125, in __next__\r\n processed = self.infer(item, **self.params)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 527, in postprocess\r\n text, optional = self.tokenizer._decode_asr(\r\n File 
\"/opt/conda/lib/python3.10/site-packages/transformers/models/whisper/tokenization_whisper_fast.py\", line 480, in _decode_asr\r\n return _decode_asr(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/whisper/tokenization_whisper.py\", line 881, in _decode_asr\r\n raise ValueError(\r\nValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?\r\n```",
"Thanks, I have been able to reproduce, definitely linked to batching, as the thing works with `batch_size=1`.\r\n\r\nWorking on a fix.",
"Ok, the issue is that the model uses `50256` for padding, or silence. \r\n\r\n@ArthurZucker should we make this a special token? (This would mean it would be ignored in the state machine, which is OK since this token is `''`.)\r\n\r\nThe other solution would be to decode the `previous_tokens` before failing and checking that the decoding is the nil string, but that seems like a workaround for the fact that token 50256 is special and means silence (or pad I guess)",
"This is the issue: https://huggingface.co/openai/whisper-large-v2/blob/main/generation_config.json#L124\r\n\r\n@melihogutcen A fix is coming.\r\n",
"Proposed changes: \r\n\r\nhttps://huggingface.co/openai/whisper-base/discussions/12\r\nhttps://huggingface.co/openai/whisper-large/discussions/29\r\nhttps://huggingface.co/openai/whisper-medium/discussions/12\r\nhttps://huggingface.co/openai/whisper-large-v2/discussions/30\r\nhttps://huggingface.co/openai/whisper-small/discussions/19\r\nhttps://huggingface.co/openai/whisper-tiny/discussions/9",
"I fixed my problem by updating `generation_config.json`. Thanks!",
"Oops! I have tried different sounds with the new config. And rarely, I got this error again on some sounds. \r\n```\r\nTraceback (most recent call last):\r\n File \"/SpeechToText/whisper_trials.py\", line 63, in <module>\r\n results_pipe_ch1 = pipe(resampled16k_data_ch1, return_timestamps=True, chunk_length_s=30,\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 272, in __call__\r\n return super().__call__(inputs, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1101, in __call__\r\n return next(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py\", line 125, in __next__\r\n processed = self.infer(item, **self.params)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 527, in postprocess\r\n text, optional = self.tokenizer._decode_asr(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/whisper/tokenization_whisper_fast.py\", line 480, in _decode_asr\r\n return _decode_asr(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/whisper/tokenization_whisper.py\", line 881, in _decode_asr\r\n raise ValueError(\r\nValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?\r\n```",
"Thanks, any chance of seeing the files?\r\n\r\nOr if you could print `previous_tokens` just before this error, that would be nice.\r\n\r\nThis error occurs when the state machine still has some dangling tokens and no timestamp token at the end, meaning we have no ending timestamp. This shouldn't happen given how WhisperTimeStampLogitsProcessor is supposed to work. The previous error was that it would use a padding_token_id which wasn't a special_token, so it would be considered as text (which it isn't)",
"Sorry, I couldn't share these files due to privacy, but I can send the `previous_tokens`. I added print function here. https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/tokenization_whisper.py#:~:text=current_tokens%20%3D%20%5B%5D-,if%20previous_tokens%3A,-if%20return_timestamps%3A\r\nIs it correct?\r\n```\r\nPrevious tokens: [[16729, 44999, 39196, 259, 13]]\r\nThere was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?\r\n```",
"I suspect the logits processor @Narsil, but it is strange that it didn’t come up before",
"@melihogutcen This is Turkish, on `whisper-large-v2`, correct? I'll try to run a batch on some dataset to try and trigger it elsewhere. Still using the same script as above, correct?\r\n\r\nWe need to reproduce to understand what's going on. It could be the WhisperLogitsProcessor, but also a bug somewhere else.",
"Yes, it is Turkish and I used `whisper-large-v2`. I used the same script as above; I just used the \"<|tr|>\" language and changed `generation_config.json` as you said.",
"Could it be possible that this happens when all the segments in a batch are silence? I have seen that the error occurs when the audio has a section that is mainly silence (I tested with 10 minutes of silence). With the original Whisper, what I get is hallucination and repeated words.",
"I'm getting this error as well, but only on a fine-tuned model. I will try my program with huggingface openai/whisper-medium and it will work fine, and then I will change just the model over to a model of whisper medium trained on the common_voice_11_0 dataset, and any audio file I try to pass through gets this error.\r\n\r\n2023-03-15 15:06:11 Error occurred while processing File1.wav. Exception: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?\r\nTraceback (most recent call last):\r\n File \"/home/user/basictest.py\", line 64, in transcribe_audio\r\n out = pipeline(audio)\r\n File \"/home/user/anaconda3/lib/python3.9/site-packages/speechbox/diarize.py\", line 120, in __call__\r\n asr_out = self.asr_pipeline(\r\n File \"/home/user/anaconda3/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 272, in __call__\r\n return super().__call__(inputs, **kwargs)\r\n File \"/home/user/anaconda3/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 1101, in __call__\r\n return next(\r\n File \"/home/user/anaconda3/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py\", line 125, in __next__\r\n processed = self.infer(item, **self.params)\r\n File \"/home/user/anaconda3/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 527, in postprocess\r\n text, optional = self.tokenizer._decode_asr(\r\n File \"/home/user/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/tokenization_whisper_fast.py\", line 480, in _decode_asr\r\n return _decode_asr(\r\n File \"/home/user/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/tokenization_whisper.py\", line 881, in _decode_asr\r\n raise ValueError(\r\nValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?",
"@alextomana, did you try comparing the `generation_config` as mentioned above? \r\nAbout the silence or what not, not really sure",
"Seeing the same with a fine-tuned model.\r\n\r\n```python\r\nimport requests\r\nimport transformers\r\nfrom transformers import GenerationConfig\r\n\r\npipe = transformers.pipeline(\r\n \"automatic-speech-recognition\",\r\n model=\"vasista22/whisper-hindi-large-v2\",\r\n device=\"cuda:0\",\r\n)\r\npipe.model.generation_config = GenerationConfig.from_pretrained(\"openai/whisper-large-v2\")\r\n\r\naudio = requests.get(\r\n \"https://storage.googleapis.com/dara-c1b52.appspot.com/daras_ai/media/e00ba954-c980-11ed-8700-8e93953183bb/6.ogg\"\r\n).content\r\n\r\nforced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(task=\"transcribe\", language=\"hindi\")\r\npipe(\r\n audio,\r\n return_timestamps=True,\r\n generate_kwargs=dict(\r\n forced_decoder_ids=forced_decoder_ids,\r\n ),\r\n chunk_length_s=30,\r\n stride_length_s=[6, 0],\r\n batch_size=32,\r\n)\r\n```\r\n\r\n```console\r\n/root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (448) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.\r\n warnings.warn(\r\n╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮\r\n│ <stdin>:1 in <module> │\r\n│ │\r\n│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/pipelines/automatic_spee │\r\n│ ch_recognition.py:272 in __call__ │\r\n│ │\r\n│ 269 │ │ │ │ │ │ \"there\", \"timestamps\": (1.0, 1.5)}]`. The original full text can │\r\n│ 270 │ │ │ │ │ │ `\"\".join(chunk[\"text\"] for chunk in output[\"chunks\"])`. 
│\r\n│ 271 │ │ \"\"\" │\r\n│ ❱ 272 │ │ return super().__call__(inputs, **kwargs) │\r\n│ 273 │ │\r\n│ 274 │ def _sanitize_parameters( │\r\n│ 275 │ │ self, │\r\n│ │\r\n│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/pipelines/base.py:1101 │\r\n│ in __call__ │\r\n│ │\r\n│ 1098 │ │ elif is_iterable: │\r\n│ 1099 │ │ │ return self.iterate(inputs, preprocess_params, forward_params, postprocess_p │\r\n│ 1100 │ │ elif self.framework == \"pt\" and isinstance(self, ChunkPipeline): │\r\n│ ❱ 1101 │ │ │ return next( │\r\n│ 1102 │ │ │ │ iter( │\r\n│ 1103 │ │ │ │ │ self.get_iterator( │\r\n│ 1104 │ │ │ │ │ │ [inputs], num_workers, batch_size, preprocess_params, forward_pa │\r\n│ │\r\n│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py:12 │\r\n│ 5 in __next__ │\r\n│ │\r\n│ 122 │ │ │\r\n│ 123 │ │ # We're out of items within a batch │\r\n│ 124 │ │ item = next(self.iterator) │\r\n│ ❱ 125 │ │ processed = self.infer(item, **self.params) │\r\n│ 126 │ │ # We now have a batch of \"inferred things\". 
│\r\n│ 127 │ │ if self.loader_batch_size is not None: │\r\n│ 128 │ │ │ # Try to infer the size of the batch │\r\n│ │\r\n│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/pipelines/automatic_spee │\r\n│ ch_recognition.py:527 in postprocess │\r\n│ │\r\n│ 524 │ │ │ │ │ stride_right /= sampling_rate │\r\n│ 525 │ │ │ │ │ output[\"stride\"] = chunk_len, stride_left, stride_right │\r\n│ 526 │ │ │ │\r\n│ ❱ 527 │ │ │ text, optional = self.tokenizer._decode_asr( │\r\n│ 528 │ │ │ │ model_outputs, │\r\n│ 529 │ │ │ │ return_timestamps=return_timestamps, │\r\n│ 530 │ │ │ │ return_language=return_language, │\r\n│ │\r\n│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/models/whisper/tokenizat │\r\n│ ion_whisper_fast.py:480 in _decode_asr │\r\n│ │\r\n│ 477 │ │ return forced_decoder_ids │\r\n│ 478 │ │\r\n│ 479 │ def _decode_asr(self, model_outputs, *, return_timestamps, return_language, time_pre │\r\n│ ❱ 480 │ │ return _decode_asr( │\r\n│ 481 │ │ │ self, │\r\n│ 482 │ │ │ model_outputs, │\r\n│ 483 │ │ │ return_timestamps=return_timestamps, │\r\n│ │\r\n│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/models/whisper/tokenizat │\r\n│ ion_whisper.py:881 in _decode_asr │\r\n│ │\r\n│ 878 │ │ if return_timestamps: │\r\n│ 879 │ │ │ # Last token should always be timestamps, so there shouldn't be │\r\n│ 880 │ │ │ # leftover │\r\n│ ❱ 881 │ │ │ raise ValueError( │\r\n│ 882 │ │ │ │ \"There was an error while processing timestamps, we haven't found a time │\r\n│ 883 │ │ │ │ \" WhisperTimeStampLogitsProcessor used?\" │\r\n│ 884 │ │ │ ) │\r\n╰──────────────────────────────────────────────────────────────────────────────────────────────────╯\r\nValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?\r\n```",
"Running into the same issue:\r\n\r\n```\r\nimport torch\r\nimport gdown\r\nfrom transformers import pipeline, AutomaticSpeechRecognitionPipeline, Pipeline, GenerationConfig, \\\r\n WhisperTokenizer, WhisperModel, WhisperConfig, WhisperForConditionalGeneration, WhisperTokenizerFast, \\\r\n WhisperProcessor\r\n\r\n\r\nurl = 'https://drive.google.com/uc?id=1IcnHiL5gdGs8zr-NwuSQm_hsAZugz4mq'\r\naudio_path = 'audio.wav'\r\ngdown.download(url, audio_path, quiet=False)\r\n\r\n\r\nmodel_name = \"openai/whisper-small\"\r\ntask = 'transcribe'\r\nlanguage = 'spanish'\r\npredict_timestamps = True\r\nchunk_length = 30\r\nmax_length = 100\r\nbatch_size = 1\r\ndevice = 'cuda:0' if torch.cuda.is_available() else 'cpu'\r\n# -----------------------------------------------------------------------\r\n\r\nconfig = WhisperConfig.from_pretrained(model_name)\r\nmodel = WhisperForConditionalGeneration.from_pretrained(model_name, config=config)\r\n\r\ntokenizer = WhisperTokenizer.from_pretrained(model_name)\r\n# tokenizer.set_prefix_tokens(language=language, task=task, predict_timestamps=predict_timestamps)\r\nprocessor = WhisperProcessor.from_pretrained(model_name)\r\n\r\npipe = pipeline(\r\n task='automatic-speech-recognition',\r\n model=model,\r\n chunk_length_s=chunk_length,\r\n batch_size=batch_size,\r\n tokenizer=tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n device=device\r\n)\r\n\r\nforced_decoder_ids = tokenizer.get_decoder_prompt_ids(language=language, task=task, no_timestamps=not predict_timestamps)\r\nprint(forced_decoder_ids)\r\ngenerate_kwargs = {'max_length': max_length, \"forced_decoder_ids\": forced_decoder_ids}\r\n\r\n\r\nprint('audio_path: ', audio_path)\r\nresult = pipe(audio_path, return_timestamps=predict_timestamps, generate_kwargs=generate_kwargs)\r\nprint(result)\r\n```\r\n\r\nwith error\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/spanagiotidi/notebook_dir/whisper_tests/test6.py\", line 47, in <module>\r\n print(result)\r\n 
File \"/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 272, in __call__\r\n return super().__call__(inputs, **kwargs)\r\n File \"/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 1101, in __call__\r\n return next(\r\n File \"/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py\", line 125, in __next__\r\n processed = self.infer(item, **self.params)\r\n File \"/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 527, in postprocess\r\n text, optional = self.tokenizer._decode_asr(\r\n File \"/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/tokenization_whisper.py\", line 708, in _decode_asr\r\n return _decode_asr(\r\n File \"/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/tokenization_whisper.py\", line 881, in _decode_asr\r\n raise ValueError(\r\nValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?\r\n```\r\n",
"cc @Narsil maybe an edge case that was not handled (and that was previously ignored); let's be more permissive on the last timestamp + will check with the provided example why we are not getting a last timestamp. \r\nMight be something relating to the length of the `forced_decoder_ids` that can affect the `WhisperTimeStampLogitsProcessor`. Something to look out for",
"@devxpy I have reproduced with your example. It seems this model never outputs timestamps. \r\n\r\nI am guessing it was finetuned without timestamps, and so the error is kind of normal.\r\nHowever, it led me to reduce the hard error to a soft error. The results are still nonsensical (check out the test).\r\n\r\nI spent some time trying to find a better fix by fixing the logits processor itself, but to no avail. There's just no way to fix models that refuse to output timestamp tokens. To be noted is that whisper models are never even forced to output increasing timestamp tokens, so there's already a lot of room there. Soft error is better.\r\n\r\n",
"https://github.com/huggingface/transformers/pull/22475/files",
"I received this error when transcribing audio with `openai/whisper-large-v2`. For me, the cause was 10 seconds of silence at the end of the file. Maybe this can be added as a potential solution to the error/warning, or maybe this can be detected and silently ignored.",
"Thanks for this comment! @narsil, I think it makes sense",
"@Narsil @devxpy @ArthurZucker I also did finetuning without timestamps, and now I have an issue where timestamps are not appearing. Is there a good way to finetune and include timestamps? Do I need to add 1500 special tokens for each timestamp in the tokenizer? I made sure that the tokenizer doesn't have timestamps. #20225\r\n\r\n",
"Hey! For finetuning with timestamps, you should either use the latest tokenizer (which by default should add 1500 special tokens, not more) or use the previous one, which also supported them, but not for encoding. Pinging @sanchit-gandhi as he has been working on distil whisper, might have a training script to add timestamps. Also this kind of question would be better for the [forum](https://discuss.huggingface.co/)",
"Hey @upskyy - in my experience, fine-tuning with LoRA / QLoRA is a fantastic way to prevent this 'catastrophic forgetting' effect where Whisper forgets how to predict timestamps after fine-tuning. For this, you can check-out the following repo: https://github.com/Vaibhavs10/fast-whisper-finetuning\r\n\r\nAnd @ArthurZucker - cool that the latest tokenizer has the 1500 special tokens already added! This should make our lives a lot easier for encoding with timestamps, since the tokenizer is now able to map the timestamp strings to tokens. \r\n\r\nAll we really need to do then is have a small amount of data in our train set that has timestamps in the Whisper format, e.g.\r\n```\r\n\"<|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and<|6.24|><|6.24|> can discover in it but little of rocky Ithaca.<|9.44|>\"\r\n```\r\nGenerally, you only need between 1-5% of your data to be timestamped to ensure you retain Whisper's timestamp prediction abilities. The easiest way of getting this data is to use the pre-trained Whisper model to re-annotate 1% of your training data with timestamps. 
You can then merge this data into your full training corpus to train on both non-timestamped (99%) and timestamped (1%) data.\r\n\r\nWhat we then want to do is enable/disable timestamps when we encode the labels, depending on whether the labels have timestamps or not:\r\n\r\n```python\r\ndef prepare_dataset(batch):\r\n # load and resample audio data from 48 to 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n\r\n # set tokenizer prefix tokens depending on whether we have timestamps or not\r\n predict_timestamps = batch[\"predict_timestamps\"] # boolean that tells us whether our labels have timestamps or not (add this column to your dataset to indicate)\r\n tokenizer.set_prefix_tokens(language=language, task=\"transcribe\", predict_timestamps= predict_timestamps)\r\n\r\n # encode target text to label ids \r\n batch[\"labels\"] = tokenizer(batch[\"sentence\"]).input_ids\r\n return batch\r\n```",
"@ArthurZucker @sanchit-gandhi Thank you so much for the detailed explanation. I'm trying to download a new tokenizer, but it seems like it was updated 5 months ago. Can I get it like this? [[link]](https://huggingface.co/openai/whisper-medium/tree/main)\r\nWhat is the latest tokenizer you are talking about?\r\nCurrently, my tokenizer is splitting one by one like this.\r\n\r\n```python\r\nfrom transformers import WhisperProcessor\r\n\r\n\r\nprocessor = WhisperProcessor.from_pretrained(\"openai/whisper-tiny\")\r\ntokens = processor.tokenizer(\"<|0.00|>Hello!<|2.34|>\").input_ids\r\nprint(tokens)\r\n# [50258, 50363, 27, 91, 15, 13, 628, 91, 29, 15947, 0, 27, 91, 17, 13, 12249, 91, 29, 50257]\r\n\r\ntext = processor.decode([27, 91, 15, 13, 628, 91, 29])\r\nprint(text)\r\n# <|0.00|>\r\n```",
"@ArthurZucker could you give @upskyy a hand with downloading the latest version of the tokenizer please! 🙌"
] | 1,678
| 1,687
| 1,680
|
NONE
| null |
### System Info
Hello,
When I tried this notebook, https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor?usp=sharing#scrollTo=Ca4YYdtATxzo, I encountered the following error: `There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?` I run into this error especially with audio longer than 30 seconds; for audio shorter than 30 seconds, timestamps are returned correctly.
How can I fix it?
Specs:
`transformers==4.27.0.dev0`
```
from transformers import pipeline
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cuda:0',
generate_kwargs = {"language":"<|tr|>","task": "transcribe"})
results = pipe(speech_file, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0], batch_size=32, generate_kwargs = {"language":"<|tr|>","task": "transcribe"})
```
### Who can help?
@ArthurZucker @sanchit-gandhi @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cuda:0',
generate_kwargs = {"language":"<|tr|>","task": "transcribe"})
results = pipe(speech_file, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0], batch_size=32, generate_kwargs = {"language":"<|tr|>","task": "transcribe"})
```
### Expected behavior
```
results = {'text':'Some Turkish results.',
'chunks':[
{'text': ' Some Turkish results.',
'timestamp': (0.0,4.4)},
{'text': ' Some Turkish results.',
'timestamp': (4.4,28.32)},
{'text': ' Some Turkish results.',
'timestamp': (28.32,45.6)}]
}
```
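The expected `chunks` structure above can be post-processed into flat `(start, end, text)` tuples for downstream use; a minimal sketch (the `merge_chunk_timestamps` helper is hypothetical, not part of `transformers`):

```python
def merge_chunk_timestamps(chunks):
    """Flatten pipeline 'chunks' entries into (start, end, text) tuples.

    Assumes each entry looks like {'text': ..., 'timestamp': (start, end)},
    matching the expected output shape shown above.
    """
    return [(c["timestamp"][0], c["timestamp"][1], c["text"].strip()) for c in chunks]


chunks = [
    {"text": " Some Turkish results.", "timestamp": (0.0, 4.4)},
    {"text": " Some Turkish results.", "timestamp": (4.4, 28.32)},
]
print(merge_chunk_timestamps(chunks))
```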
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22053/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22052/events
|
https://github.com/huggingface/transformers/issues/22052
| 1,616,917,718
|
I_kwDOCUB6oc5gYDTW
| 22,052
|
Add a newline here in the docstring
|
{
"login": "stefanvasilev",
"id": 31345149,
"node_id": "MDQ6VXNlcjMxMzQ1MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/31345149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefanvasilev",
"html_url": "https://github.com/stefanvasilev",
"followers_url": "https://api.github.com/users/stefanvasilev/followers",
"following_url": "https://api.github.com/users/stefanvasilev/following{/other_user}",
"gists_url": "https://api.github.com/users/stefanvasilev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefanvasilev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefanvasilev/subscriptions",
"organizations_url": "https://api.github.com/users/stefanvasilev/orgs",
"repos_url": "https://api.github.com/users/stefanvasilev/repos",
"events_url": "https://api.github.com/users/stefanvasilev/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefanvasilev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Seems like it indeed! Do you want to suggest a PR with the change?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
There should be a newline to separate `past_key_values` from `inputs_embeds`.
https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/whisper/modeling_whisper.py#L827
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22052/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22051/events
|
https://github.com/huggingface/transformers/pull/22051
| 1,616,797,048
|
PR_kwDOCUB6oc5LpuaY
| 22,051
|
Remove set_access_token usage + fail tests if FutureWarning
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
`set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`.
This PR removes it from the tests (it was not used in `transformers` source code itself). In the future, use `set_git_credential` if needed. It is a git-credential-agnostic helper, i.e. you can store your git token in `git-credential-cache`, `git-credential-store`, `osxkeychain`, etc. The legacy `set_access_token` was only able to set in `git-credential-store` no matter the user preference.
(for context, I found out about this while working on https://github.com/huggingface/huggingface_hub/pull/1381)
---
In addition to this, I have added
```
filterwarnings =
error::FutureWarning:huggingface_hub*
```
to the `setup.cfg` config file to fail on future warnings from `huggingface_hub`. In `hfh`'s CI we trigger on FutureWarning from any package but it's less robust (any package update leads can lead to a failure). No obligation to keep it like that (I can remove it if you prefer) but I think it's a good idea in order to track future FutureWarnings.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22051/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22051",
"html_url": "https://github.com/huggingface/transformers/pull/22051",
"diff_url": "https://github.com/huggingface/transformers/pull/22051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22051.patch",
"merged_at": 1678371829000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22050/events
|
https://github.com/huggingface/transformers/issues/22050
| 1,616,761,103
|
I_kwDOCUB6oc5gXdEP
| 22,050
|
run_speech_recognition_seq2seq to fine-tune whisper tiny model stop in dataset map
|
{
"login": "xyx361100238",
"id": 19569322,
"node_id": "MDQ6VXNlcjE5NTY5MzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xyx361100238",
"html_url": "https://github.com/xyx361100238",
"followers_url": "https://api.github.com/users/xyx361100238/followers",
"following_url": "https://api.github.com/users/xyx361100238/following{/other_user}",
"gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions",
"organizations_url": "https://api.github.com/users/xyx361100238/orgs",
"repos_url": "https://api.github.com/users/xyx361100238/repos",
"events_url": "https://api.github.com/users/xyx361100238/events{/privacy}",
"received_events_url": "https://api.github.com/users/xyx361100238/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @xyx361100238 - could you try with a lower value of `preprocessing_num_workers`? Maybe first by reducing it by a factor of 2:\r\n```diff\r\n-\t--preprocessing_num_workers=\"16\" \\\r\n+\t--preprocessing_num_workers=\"8\" \\\r\n```\r\nThis is usually the cause of datasets map hanging.",
"THX @sanchit-gandhi , I tried preprocess num 4 / 8 / 16 but still hanging the datasets map.\r\nThis time I reinstall my huggingface-transformers, it works! I can finish the train use official examples.\r\n",
"Hey @xyx361100238! That's super strange since the `transformers` library shouldn't interact with `datasets`'s map method 🧐 glad you found a fix by reinstalling `transformers`, it might have bumped a package version that is a `datasets` dependency that unblocked map for you. Closing as complete!",
"@sanchit-gandhi ,still not work today,hanging in map\r\n`python run_speech_recognition_seq2seq.py --model_name_or_path=\"openai/whisper-tiny\" --dataset_name=\"mozilla-foundation/common_voice_11_0\" --dataset_config_name=\"zh-CN\" --language=\"chinese\" --train_split_name=\"train+validation\" --eval_split_name=\"test\" --max_steps=\"5000\" --output_dir=\"./whisper-small-zh\" --per_device_train_batch_size=\"16\" --gradient_accumulation_steps=\"2\" --per_device_eval_batch_size=\"16\" --logging_steps=\"25\" --learning_rate=\"1e-5\" --warmup_steps=\"500\" --evaluation_strategy=\"steps\" --eval_steps=\"1000\" --save_strategy=\"steps\" --save_steps=\"1000\" --generation_max_length=\"225\" --preprocessing_num_workers=\"16\" --length_column_name=\"input_length\" --max_duration_in_seconds=\"30\" --text_column_name=\"sentence\" --freeze_feature_encoder=\"False\" --gradient_checkpointing --group_by_length --fp16 --overwrite_output_dir --do_train --do_eval --predict_with_generate \r\n`\r\n\r\n`preprocess train dataset (num_proc=16): 0%| | 0/39637 [00:00<?, ? examples/s] `\r\n",
"Hey @xyx361100238, I think this is `datasets` library issue that would be more apt there: https://github.com/huggingface/datasets\r\n\r\nYou can create a dummy reproducible codesnippet for this issue with something like:\r\n```python\r\nfrom datasets import Audio, load_dataset\r\n\r\nraw_dataset = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"zh-CN\")\r\n\r\nraw_dataset = raw_dataset.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n\r\ndef preprocess_dataset(batch):\r\n audio = batch[\"audio\"]\r\n return batch\r\n\r\nraw_dataset = raw_dataset.map(preprocess_dataset, num_proc=16)\r\n```\r\n\r\nFeel free to check if that hangs -> you can add the minimum amount of code that reproduces your issue and then post the codesnippet on the datasets repo."
] | 1,678
| 1,679
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.4.0-144-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
According to the official doc in [sequence-to-sequence](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#sequence-to-sequence), running the same command (with the tiny model) always stops at step 7:
preprocess train dataset (num_proc=16): 0%| | 0/6540 [00:00<?, ? examples/s]
but if I verify it in Python according to [fine-tune-whisper](https://huggingface.co/blog/fine-tune-whisper), it works well:
preprocess train dataset (num_proc=16): 7%|███████████▌ | 197/2894 [00:11<02:19, 19.34 examples/s]
### Expected behavior
The training should complete when run via the official example.
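Since map hangs of this kind are often tied to the worker count, a conservative cap can be computed before calling `datasets`'s `map`; a small sketch (the `safe_num_proc` helper is illustrative, not a library function):

```python
import os


def safe_num_proc(requested):
    """Cap a requested num_proc at half the available CPUs, but never below 1."""
    cpus = os.cpu_count() or 1
    return max(1, min(requested, cpus // 2))


# e.g. pass the result as num_proc to dataset.map(...) instead of a fixed 16
print(safe_num_proc(16))
```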
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22050/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22049/events
|
https://github.com/huggingface/transformers/issues/22049
| 1,616,648,065
|
I_kwDOCUB6oc5gXBeB
| 22,049
|
Tracing mismatch during conversion of Whisper model to ONNX using torch.onnx.export
|
{
"login": "hannan72",
"id": 8229163,
"node_id": "MDQ6VXNlcjgyMjkxNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hannan72",
"html_url": "https://github.com/hannan72",
"followers_url": "https://api.github.com/users/hannan72/followers",
"following_url": "https://api.github.com/users/hannan72/following{/other_user}",
"gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hannan72/subscriptions",
"organizations_url": "https://api.github.com/users/hannan72/orgs",
"repos_url": "https://api.github.com/users/hannan72/repos",
"events_url": "https://api.github.com/users/hannan72/events{/privacy}",
"received_events_url": "https://api.github.com/users/hannan72/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @hannan72! I recommend that you use Optimum for exporting Whisper to the ONNX format (it will basically be a wrapper around `torch.onnx.export` but it is tested and Whisper is supported). You can find more information in the doc: https://huggingface.co/docs/optimum/exporters/onnx/overview\r\nIf you encounter any issue, feel free to open an issue in the Optimum repo.",
"> Hi @hannan72! I recommend that you use Optimum for exporting Whisper to the ONNX format (it will basically be a wrapper around `torch.onnx.export` but it is tested and Whisper is supported). You can find more information in the doc: https://huggingface.co/docs/optimum/exporters/onnx/overview If you encounter any issue, feel free to open an issue in the Optimum repo.\r\n\r\nI have used the Optimum but I get such a Warning and the resulted ONNX model deployed by Optimum ORT is about 50% slower that pytorch model deployment",
"Yes I see you opened this issue in Optimum: https://github.com/huggingface/optimum/issues/827\r\nI think the best is to wait for @fxmarty to take a look at it.\r\n\r\nRegarding these warnings, I don't think they are the reason why it is slow. They just mean that the expression in the if statements will not be evaluated at runtime, so the model may fail with different batch sizes.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
I'm trying to convert the Whisper model to ONNX. When exporting the encoder of the Whisper model to ONNX using `torch.onnx.export`:
```
import torch
from transformers import WhisperForConditionalGeneration

# Assumption: the original snippet did not show how `model` was loaded;
# any Whisper checkpoint works here
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

mel = torch.zeros((1, 80, 3000))
encoder = model.get_encoder().to("cpu")
audio_features = encoder(mel)
torch.onnx.export(
encoder,
mel,
"whisper_encoder.onnx",
input_names=["mel"],
output_names=["output_features"]
)
```
It raises a TracerWarning as follows:
```
/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py:207: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py:246: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
```
Afterwards, the ONNX file is generated, but the resulting model at runtime (using Optimum) is slow (about 50% slower than the PyTorch run)! I guess the slowness of the ONNX model is due to the TracerWarning.
Any idea?
I'm using transformers==4.26.0, optimum==1.6.1, onnx==1.10.0 and torch==1.12.0+cu116.
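One common cause of shape-specialized traces is exporting without declaring which dimensions may vary at runtime; `torch.onnx.export` accepts a `dynamic_axes` mapping for this. A sketch of what such a mapping could look like for the encoder above (the symbolic axis names are illustrative):

```python
# Maps input/output tensor names to {dim_index: symbolic_name}.
# Marking dim 0 as "batch" keeps the exported graph valid for batch sizes
# other than the one used during tracing.
dynamic_axes = {
    "mel": {0: "batch"},
    "output_features": {0: "batch"},
}

# It would then be passed alongside the other torch.onnx.export arguments:
# torch.onnx.export(encoder, mel, "whisper_encoder.onnx",
#                   input_names=["mel"], output_names=["output_features"],
#                   dynamic_axes=dynamic_axes)
print(dynamic_axes["mel"])
```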
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22049/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22048/events
|
https://github.com/huggingface/transformers/issues/22048
| 1,616,643,413
|
I_kwDOCUB6oc5gXAVV
| 22,048
|
Cannot import name 'deepspeed_reinit' from 'transformers.deepspeed'
|
{
"login": "rubenCrayon",
"id": 121863605,
"node_id": "U_kgDOB0N9tQ",
"avatar_url": "https://avatars.githubusercontent.com/u/121863605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rubenCrayon",
"html_url": "https://github.com/rubenCrayon",
"followers_url": "https://api.github.com/users/rubenCrayon/followers",
"following_url": "https://api.github.com/users/rubenCrayon/following{/other_user}",
"gists_url": "https://api.github.com/users/rubenCrayon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rubenCrayon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rubenCrayon/subscriptions",
"organizations_url": "https://api.github.com/users/rubenCrayon/orgs",
"repos_url": "https://api.github.com/users/rubenCrayon/repos",
"events_url": "https://api.github.com/users/rubenCrayon/events{/privacy}",
"received_events_url": "https://api.github.com/users/rubenCrayon/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @rubenCrayon!\r\n`deepspeed_reinit` was removed a few versions ago, you should use a more recent version of Optimum. Which may requires to change your script a bit, in that case I recommend that you open an issue in Optimum: https://github.com/huggingface/optimum/issues",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
Bug found when following the optimization steps provided in **https://huggingface.co/blog/optimum-inference.**
It seems like transformers/deepspeed.py does not contain the method '**deepspeed_reinit**' so it's not possible to import it when loading ORTModel objects.
Thanks in advance for your incredible work. @stas00, @pacman100
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Install optimum[onnxruntime]==1.2.0
Run: from optimum.onnxruntime import ORTModelForQuestionAnswering or import optimum.onnxruntime
### Expected behavior
The package should import the ORTModels without any issue, enabling the optimization of the ONNX models using DeepSpeed
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22048/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22047/events
|
https://github.com/huggingface/transformers/issues/22047
| 1,616,607,875
|
I_kwDOCUB6oc5gW3qD
| 22,047
|
add progress bar to the sharded model download status
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Nested tqdm bars in the console are unreadable though, so not sure how to fix that.",
"I didn't know that. I remember it definitely worked in the past. Unless you're referring to something I'm not aware of when you say it's unreadable. Is it because the outside tqdm line will be so far from line 72 that it won't be seen as it'd scroll past visible area? \r\n\r\nIf that's the case then perhaps this would work:\r\n\r\n1. switch to show and erase individual shard download progress as soon as it completed\r\n2. only keep the overall progress bar \r\n\r\nso that there are only 2 lines dedicated to the progress updates.\r\n\r\nBut if none of this is doable nicely, perhaps at least using `desc` to number each shard's tqdm as in \"43/72\" would at least give some indication. Though it won't really help that much since one can't tell how much time it took to download the previous x entries.\r\n\r\n",
"Note that the description should already contain the name of the file, which ends with 0043-of-0072 normally.\r\n\r\nI can have a look at what adding an overall progress bar would look like, but I don't have any control on the per-file progress bar, as it's issued by huggingface_hub. I could deactivate it entirely (so solution 2.) but I don't have control over it's closing.",
"> Note that the description should already contain the name of the file, which ends with 0043-of-0072 normally.\r\n\r\nnot for me:\r\n\r\n\r\n\r\n> I can have a look at what adding an overall progress bar would look like, but I don't have any control on the per-file progress bar, as it's issued by huggingface_hub. I could deactivate it entirely (so solution 2.) but I don't have control over it's closing.\r\n\r\nUnderstood! Thank you for looking, Sylvain!",
"Oh maybe you have an older version of huggingface_hub?",
"oh, I didn't know - you're correct - updating to 0.13 did add the filenames - that's much better. Thank you for that, Sylvain.\r\n",
"Can you try the PR mentioned above? I got confused and nested progress bars do appear nicely in the console. It's in notebooks that the result is messy."
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### Feature request
`from_pretrained('bigscience/bloom')` is taking forever the first time until it's cached (~350GB) - I thought that perhaps with 72 shards it'd be awesome to have an overall progress bar (in addition to the each shard download progress bar) to know where things stand and how many hours the coffee break should last.
Thank you!
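A two-line display (one overall bar plus the current shard's bar) only needs an overall counter on top of the per-file progress; a minimal sketch of what the outer line could show (the helper name is hypothetical):

```python
def format_overall_progress(done, total):
    """Render an overall shard counter like 'Downloading shards: 43/72 (59%)'."""
    pct = 100 * done // total
    return f"Downloading shards: {done}/{total} ({pct}%)"


print(format_overall_progress(43, 72))  # Downloading shards: 43/72 (59%)
```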
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22047/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22046
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22046/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22046/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22046/events
|
https://github.com/huggingface/transformers/pull/22046
| 1,616,439,678
|
PR_kwDOCUB6oc5Lofuo
| 22,046
|
Can't install tf2 on M1 Chip by default
|
{
"login": "shaun-scale",
"id": 58447835,
"node_id": "MDQ6VXNlcjU4NDQ3ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/58447835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaun-scale",
"html_url": "https://github.com/shaun-scale",
"followers_url": "https://api.github.com/users/shaun-scale/followers",
"following_url": "https://api.github.com/users/shaun-scale/following{/other_user}",
"gists_url": "https://api.github.com/users/shaun-scale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaun-scale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaun-scale/subscriptions",
"organizations_url": "https://api.github.com/users/shaun-scale/orgs",
"repos_url": "https://api.github.com/users/shaun-scale/repos",
"events_url": "https://api.github.com/users/shaun-scale/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaun-scale/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This actually got worse... if you open a new Laptop, you also have to do the following\r\n```\r\nbrew install cmake\r\nbrew install pkg-config\r\nbrew install sentencepiece\r\npip install sentencepiece\r\n```\r\n\r\nand then I had to also install Rust next...\r\n```\r\ncurl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\r\nsource \"$HOME/.cargo/env\"\r\n```\r\n\r\nAfter that, it finally works\r\n```\r\npip install 'transformers[tf-cpu]'\r\n```\r\n\r\nNot sure how thorough we want to be in the docs of getting people fully up to speed vs. making certain assumptions. The `sentencepiece` part was fairly brutal to work through",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Trying to
```
pip install 'transformers[tf-cpu]'
```
will give you a confusing error like below
```
Collecting sentencepiece==0.1.91
Using cached sentencepiece-0.1.91.tar.gz (500 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
Package sentencepiece was not found in the pkg-config search path.
Perhaps you should add the directory containing `sentencepiece.pc'
to the PKG_CONFIG_PATH environment variable
No package 'sentencepiece' found
Failed to find sentencepiece pkgconfig
[end of output]
```
The answer is to install `cmake` and `pkg-config` based on the reply here:
https://github.com/google/sentencepiece/issues/378#issuecomment-969896519
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22046/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22046",
"html_url": "https://github.com/huggingface/transformers/pull/22046",
"diff_url": "https://github.com/huggingface/transformers/pull/22046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22046.patch",
"merged_at": 1678365898000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22045/events
|
https://github.com/huggingface/transformers/pull/22045
| 1,616,405,822
|
PR_kwDOCUB6oc5LoYSw
| 22,045
|
Docs Improvement - In ZSH, not using ' ' around pip install fails, fix it
|
{
"login": "shaun-scale",
"id": 58447835,
"node_id": "MDQ6VXNlcjU4NDQ3ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/58447835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaun-scale",
"html_url": "https://github.com/shaun-scale",
"followers_url": "https://api.github.com/users/shaun-scale/followers",
"following_url": "https://api.github.com/users/shaun-scale/following{/other_user}",
"gists_url": "https://api.github.com/users/shaun-scale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaun-scale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaun-scale/subscriptions",
"organizations_url": "https://api.github.com/users/shaun-scale/orgs",
"repos_url": "https://api.github.com/users/shaun-scale/repos",
"events_url": "https://api.github.com/users/shaun-scale/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaun-scale/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Running
```
pip install transformers[torch]
```
in the default ZSH terminal will fail with the error `zsh: no matches found: transformers[torch]`
The solution is to wrap the installation path in ' ' like
```
pip install 'transformers[torch]'
```
Relevant StackOverflow: https://stackoverflow.com/questions/30539798/zsh-no-matches-found-requestssecurity
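The failure comes from zsh treating `[torch]` as a character-class glob rather than a literal string; Python's `fnmatch` module uses the same bracket semantics, which makes the behavior easy to see:

```python
import fnmatch

# "[torch]" matches exactly one character from the set {t, o, r, c, h},
# so the literal package spec never matches its own pattern...
print(fnmatch.fnmatch("transformers[torch]", "transformers[torch]"))  # False
# ...while a name ending in any single character from the set does:
print(fnmatch.fnmatch("transformerst", "transformers[torch]"))  # True
```

Quoting the argument stops the shell from applying glob expansion at all, which is why `pip install 'transformers[torch]'` works.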
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22045/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22045",
"html_url": "https://github.com/huggingface/transformers/pull/22045",
"diff_url": "https://github.com/huggingface/transformers/pull/22045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22045.patch",
"merged_at": 1678365830000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22044/events
|
https://github.com/huggingface/transformers/pull/22044
| 1,616,319,411
|
PR_kwDOCUB6oc5LoFjH
| 22,044
|
[deepspeed] offload + non-cpuadam optimizer exception doc
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
part 2 of https://github.com/huggingface/transformers/pull/22043, but we can't merge it until `deepspeed==0.8.3` is released.
This PR documents the new feature and bumps the minimum deepspeed version.
**XXX: DO NOT MERGE UNTIL `deepspeed==0.8.3` is released.**
I'm keeping it as a DRAFT so that I don't mistakenly merge it too soon. But we can pre-approve.
cc: @jeffra
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22044/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22044/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22044",
"html_url": "https://github.com/huggingface/transformers/pull/22044",
"diff_url": "https://github.com/huggingface/transformers/pull/22044.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22044.patch",
"merged_at": 1679443205000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22043/events
|
https://github.com/huggingface/transformers/pull/22043
| 1,616,223,978
|
PR_kwDOCUB6oc5LnxFg
| 22,043
|
[deepspeed] offload + non-cpuadam optimizer exception
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
adapting to https://github.com/microsoft/DeepSpeed/pull/2971 - our deepspeed tests will fail without the new flag once that DeepSpeed PR is merged.
Will add the new config `zero_force_ds_cpu_optimizer` to the integration docs and require `deepspeed>=0.8.3`, but can't do that here without breaking DeepSpeed's CI. Will do it after the new release in https://github.com/huggingface/transformers/pull/22044
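For illustration, a minimal, hedged sketch of where the key would sit in a `ds_config` dict (the key name `zero_force_ds_cpu_optimizer` comes from this PR; the surrounding ZeRO-3 offload fields are an illustrative assumption, not a tested recipe):
```python
# Hypothetical ds_config fragment, as one would pass to the HF Trainer via
# TrainingArguments(deepspeed=ds_config). Setting the flag to False is meant
# to allow a non-DeepSpeed (non-CPUAdam) optimizer together with CPU offload
# instead of raising an exception.
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
    },
    # New top-level flag described in this PR (assumed placement):
    "zero_force_ds_cpu_optimizer": False,
}
```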
@jeffra
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22043/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22043/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22043",
"html_url": "https://github.com/huggingface/transformers/pull/22043",
"diff_url": "https://github.com/huggingface/transformers/pull/22043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22043.patch",
"merged_at": 1678378377000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22042
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22042/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22042/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22042/events
|
https://github.com/huggingface/transformers/pull/22042
| 1,616,180,092
|
PR_kwDOCUB6oc5LnnyJ
| 22,042
|
testing tokengt
|
{
"login": "Raman-Kumar",
"id": 32980600,
"node_id": "MDQ6VXNlcjMyOTgwNjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/32980600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raman-Kumar",
"html_url": "https://github.com/Raman-Kumar",
"followers_url": "https://api.github.com/users/Raman-Kumar/followers",
"following_url": "https://api.github.com/users/Raman-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Raman-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raman-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raman-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Raman-Kumar/orgs",
"repos_url": "https://api.github.com/users/Raman-Kumar/repos",
"events_url": "https://api.github.com/users/Raman-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raman-Kumar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22042). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22042/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22042",
"html_url": "https://github.com/huggingface/transformers/pull/22042",
"diff_url": "https://github.com/huggingface/transformers/pull/22042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22042.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22041
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22041/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22041/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22041/events
|
https://github.com/huggingface/transformers/pull/22041
| 1,616,063,838
|
PR_kwDOCUB6oc5LnNnr
| 22,041
|
Tokengt branch
|
{
"login": "Raman-Kumar",
"id": 32980600,
"node_id": "MDQ6VXNlcjMyOTgwNjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/32980600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raman-Kumar",
"html_url": "https://github.com/Raman-Kumar",
"followers_url": "https://api.github.com/users/Raman-Kumar/followers",
"following_url": "https://api.github.com/users/Raman-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Raman-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raman-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raman-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Raman-Kumar/orgs",
"repos_url": "https://api.github.com/users/Raman-Kumar/repos",
"events_url": "https://api.github.com/users/Raman-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raman-Kumar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22041). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22041/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22041",
"html_url": "https://github.com/huggingface/transformers/pull/22041",
"diff_url": "https://github.com/huggingface/transformers/pull/22041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22041.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22040
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22040/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22040/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22040/events
|
https://github.com/huggingface/transformers/pull/22040
| 1,615,923,155
|
PR_kwDOCUB6oc5LmvDD
| 22,040
|
Return analysis for hyperparameter_search with Ray backend
|
{
"login": "anruijian",
"id": 115125339,
"node_id": "U_kgDOBtysWw",
"avatar_url": "https://avatars.githubusercontent.com/u/115125339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anruijian",
"html_url": "https://github.com/anruijian",
"followers_url": "https://api.github.com/users/anruijian/followers",
"following_url": "https://api.github.com/users/anruijian/following{/other_user}",
"gists_url": "https://api.github.com/users/anruijian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anruijian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anruijian/subscriptions",
"organizations_url": "https://api.github.com/users/anruijian/orgs",
"repos_url": "https://api.github.com/users/anruijian/repos",
"events_url": "https://api.github.com/users/anruijian/events{/privacy}",
"received_events_url": "https://api.github.com/users/anruijian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #22037
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Issue #22037
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22040/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22040",
"html_url": "https://github.com/huggingface/transformers/pull/22040",
"diff_url": "https://github.com/huggingface/transformers/pull/22040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22040.patch",
"merged_at": 1678373058000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22039
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22039/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22039/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22039/events
|
https://github.com/huggingface/transformers/pull/22039
| 1,615,894,710
|
PR_kwDOCUB6oc5Lmo4S
| 22,039
|
Mark all `BridgeTower` tests slow for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Mark all `BridgeTower` tests slow for now.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22039/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22039",
"html_url": "https://github.com/huggingface/transformers/pull/22039",
"diff_url": "https://github.com/huggingface/transformers/pull/22039.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22039.patch",
"merged_at": 1678308510000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22038
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22038/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22038/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22038/events
|
https://github.com/huggingface/transformers/issues/22038
| 1,615,832,586
|
I_kwDOCUB6oc5gT6YK
| 22,038
|
Pytorch MBart Model - Trace on CPU and run inference on GPU.
|
{
"login": "gnovack",
"id": 50467879,
"node_id": "MDQ6VXNlcjUwNDY3ODc5",
"avatar_url": "https://avatars.githubusercontent.com/u/50467879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gnovack",
"html_url": "https://github.com/gnovack",
"followers_url": "https://api.github.com/users/gnovack/followers",
"following_url": "https://api.github.com/users/gnovack/following{/other_user}",
"gists_url": "https://api.github.com/users/gnovack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gnovack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gnovack/subscriptions",
"organizations_url": "https://api.github.com/users/gnovack/orgs",
"repos_url": "https://api.github.com/users/gnovack/repos",
"events_url": "https://api.github.com/users/gnovack/events{/privacy}",
"received_events_url": "https://api.github.com/users/gnovack/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @ArthurZucker and @younesbelkada ",
"EDIT: in order to actually solve this, we would need evidence of a lot of potential usage. \r\nThe reason is that after fixing the positional ids with a `registered buffer`, we need to modify the causal attention mask, which also has to be a buffer, otherwise it does not work. \r\nThis is a lot of refactoring on a lot of models (even if we just fix this one, it is still a bit too much): we would have to implement the same logic as in GPT2 and GPTNeo. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,684
| 1,684
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.157-139.675.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.15
- Huggingface_hub version: 0.13.0
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Load MBart model and trace it on CPU with `torch.jit.trace()`
```python
import torch
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50", torchscript=True)
traced_model = torch.jit.trace(model, [inputs.input_ids, inputs.attention_mask])
torch.jit.save(traced_model, "mbart-traced.pt")
```
2. Load traced model and place it on GPU using `torch.jit.load()`
```python
loaded_model_gpu = torch.jit.load("mbart-traced.pt", map_location=torch.device('cuda'))
```
3. Run inference on GPU
```python
loaded_model_gpu(inputs.input_ids.to('cuda'), inputs.attention_mask.to('cuda'))
```
The following error is raised while running inference:
```python
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/transformers/models/mbart/modeling_mbart/___torch_mangle_1394.py", line 15, in forward
lm_head = self.lm_head
model = self.model
_0 = (model).forward(input_ids, attention_mask, )
~~~~~~~~~~~~~~ <--- HERE
_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, _25, _26, _27, _28, _29, _30, _31, _32, _33, _34, _35, _36, _37, _38, _39, _40, _41, _42, _43, _44, _45, _46, _47, _48, _49, _50, = _0
_51 = torch.add((lm_head).forward(_1, ), final_logits_bias)
File "code/__torch__/transformers/models/mbart/modeling_mbart/___torch_mangle_1392.py", line 31, in forward
_7 = torch.slice(prev_output_tokens0, 0, 0, 9223372036854775807)
_8 = torch.fill_(torch.select(_7, 1, 0), decoder_start_tokens)
_9 = (encoder).forward(embed_tokens, weight, input_ids, attention_mask, )
~~~~~~~~~~~~~~~~ <--- HERE
_10 = (decoder).forward(weight, prev_output_tokens0, attention_mask, _9, )
_11, _12, _13, _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, _25, _26, _27, _28, _29, _30, _31, _32, _33, _34, _35, _36, _37, _38, _39, _40, _41, _42, _43, _44, _45, _46, _47, _48, _49, _50, _51, _52, _53, _54, _55, _56, _57, _58, _59, = _10
File "code/__torch__/transformers/models/mbart/modeling_mbart/___torch_mangle_1181.py", line 47, in forward
_13 = (argument_1).forward(weight, input, )
inputs_embeds = torch.mul(_13, CONSTANTS.c1)
_14 = (embed_positions).forward(input_ids, )
~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
input0 = torch.add(inputs_embeds, _14)
_15 = (layernorm_embedding).forward(input0, )
File "code/__torch__/transformers/models/mbart/modeling_mbart/___torch_mangle_1045.py", line 17, in forward
positions = torch.expand(_2, [_0, -1])
input = torch.add(positions, CONSTANTS.c3)
return torch.embedding(weight, input)
~~~~~~~~~~~~~~~ <--- HERE
Traceback of TorchScript, original code (most recent call last):
...
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
```
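The failure stems from `torch.jit.trace` freezing the device that was active at trace time into the graph as a constant. A minimal sketch of the same pattern, using a hypothetical stand-in module (plain torch, not the actual MBart code):

```python
import torch

class LearnedPositions(torch.nn.Module):
    """Hypothetical stand-in for MBartLearnedPositionalEmbedding."""
    def forward(self, x):
        # the device passed to torch.arange here is recorded as a
        # prim::Constant in the traced graph, not re-derived at run time
        positions = torch.arange(x.shape[1], device=x.device)
        return x + positions

traced = torch.jit.trace(LearnedPositions(), torch.zeros(1, 4))
print(traced.graph)  # contains a Device constant, e.g. prim::Constant[value="cpu"]
```

Because of this, tracing on CPU and then loading the model on GPU leaves `cpu` baked into the graph; tracing directly on the target device sidesteps the mismatch.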
Also, by running `dump_to_str`, I am able to see that the device is set to `cpu` within `MBartLearnedPositionalEmbedding`:
```python
>>> loaded_model_gpu._c.dump_to_str(True, False, False)
module __torch__.transformers.models.mbart.modeling_mbart.___torch_mangle_4565.MBartLearnedPositionalEmbedding {
parameters {
weight = ...
}
attributes {
weight = ...
training = False
_is_full_backward_hook = None
}
methods {
method forward {
graph(%self.1 : __torch__.transformers.models.mbart.modeling_mbart.___torch_mangle_4565.MBartLearnedPositionalEmbedding,
%input_ids.1 : Tensor):
%34 : Tensor = prim::Constant[value={2}]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:133:0
%25 : bool = prim::Constant[value=0]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%52 : Device = prim::Constant[value="cpu"]()
%22 : NoneType = prim::Constant() # :0:0
%16 : Tensor = prim::Constant[value={0}]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:130:0
%5 : int = prim::Constant[value=0]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:128:0
%12 : int = prim::Constant[value=1]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:128:0
%21 : int = prim::Constant[value=4]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%29 : int = prim::Constant[value=-1]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%weight.1 : Tensor = prim::GetAttr[name="weight"](%self.1)
%6 : int = aten::size(%input_ids.1, %5) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:128:0
%bsz.1 : Tensor = prim::NumToTensor(%6) # :0:0
%10 : int = aten::Int(%bsz.1) # :0:0
%13 : int = aten::size(%input_ids.1, %12) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:128:0
%seq_len.1 : Tensor = prim::NumToTensor(%13) # :0:0
%18 : Tensor = aten::add(%seq_len.1, %16, %12) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:130:0
%19 : Scalar = aten::ScalarImplicit(%18) # :0:0
%26 : Tensor = aten::arange(%5, %19, %21, %22, %52, %25) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%30 : int[] = prim::ListConstruct(%10, %29)
%positions.1 : Tensor = aten::expand(%26, %30, %25) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%input.1 : Tensor = aten::add(%positions.1, %34, %12) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:133:0
%42 : Tensor = aten::embedding(%weight.1, %input.1, %29, %25, %25) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/functional.py:2210:0
return (%42)
}
}
submodules {
}
}
```
### Expected behavior
I expected to be able to run inference successfully on GPU.
I have come across some similar issues related to other types of models:
- https://github.com/huggingface/transformers/issues/5664
- https://github.com/pytorch/pytorch/issues/50971
And some PRs to address some similar issues:
- https://github.com/huggingface/transformers/pull/11252
- https://github.com/huggingface/transformers/pull/12290
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22038/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22037
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22037/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22037/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22037/events
|
https://github.com/huggingface/transformers/issues/22037
| 1,615,801,760
|
I_kwDOCUB6oc5gTy2g
| 22,037
|
Trainer hyperparameter_search only returns the best trial config
|
{
"login": "anruijian",
"id": 115125339,
"node_id": "U_kgDOBtysWw",
"avatar_url": "https://avatars.githubusercontent.com/u/115125339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anruijian",
"html_url": "https://github.com/anruijian",
"followers_url": "https://api.github.com/users/anruijian/followers",
"following_url": "https://api.github.com/users/anruijian/following{/other_user}",
"gists_url": "https://api.github.com/users/anruijian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anruijian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anruijian/subscriptions",
"organizations_url": "https://api.github.com/users/anruijian/orgs",
"repos_url": "https://api.github.com/users/anruijian/repos",
"events_url": "https://api.github.com/users/anruijian/events{/privacy}",
"received_events_url": "https://api.github.com/users/anruijian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"We'd be happy to look at a PR!",
"@sgugger I have a concern regarding the different backends we use (ray, optuna, sigopt, wandb) and their varying return objects. I wonder if we should consider modifying all backends to return a more comprehensive object, such as the `analysis` object used in ray, to ensure consistency across all the backends. \r\n\r\nWhile I am familiar with the ray tune backend, I am unsure about how to proceed with the other backends. I checked the code briefly to find the object that acts as `analysis` for ray:\r\n\r\n1. [study](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations.py#L196) for optuna\r\n2. [entire list of experiments](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations.py#L433) for sigopt\r\n3. [dictionary](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations.py#L433) for wandb (need to modify the dictionary to record results of all experiments instead of the best one.)\r\n\r\nLet me know if I understand it correctly.",
"I think it's okay if the object is backend specific.",
"PR #22040 submitted."
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### Feature request
Allow the `hyperparameter_search` method of `Trainer` to return the entire `ExperimentAnalysis` object instead of a single `best_run`.
### Motivation
The `hyperparameter_search` method of the `Trainer` currently only returns the best configuration `best_run`, instead of the more comprehensive `ExperimentAnalysis` object `analysis`. However, I believe that `analysis` would be more valuable than just the single best run configuration, since it offers additional attributes and methods that provide more useful information about the tuning run (see [doc](https://docs.ray.io/en/releases-1.11.0/tune/api_docs/analysis.html#analysis-tune-analysis)). Therefore, I suggest modifying the `hyperparameter_search` method to return the `ExperimentAnalysis` object so that users can do more analysis.
```python
analysis = ray.tune.run(
dynamic_modules_import_trainable,
config=trainer.hp_space(None),
num_samples=n_trials,
**kwargs,
)
best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3], scope=trainer.args.ray_scope)
best_run = BestRun(best_trial.trial_id, best_trial.last_result["objective"], best_trial.config)
if _tb_writer is not None:
trainer.add_callback(_tb_writer)
return best_run
```
### Your contribution
I can submit a PR for this feature.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22037/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22037/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22036
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22036/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22036/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22036/events
|
https://github.com/huggingface/transformers/pull/22036
| 1,615,770,698
|
PR_kwDOCUB6oc5LmO8O
| 22,036
|
[21737][T5]: Fix gradient checkpoint bug
|
{
"login": "nipunjindal",
"id": 6430864,
"node_id": "MDQ6VXNlcjY0MzA4NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6430864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nipunjindal",
"html_url": "https://github.com/nipunjindal",
"followers_url": "https://api.github.com/users/nipunjindal/followers",
"following_url": "https://api.github.com/users/nipunjindal/following{/other_user}",
"gists_url": "https://api.github.com/users/nipunjindal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nipunjindal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nipunjindal/subscriptions",
"organizations_url": "https://api.github.com/users/nipunjindal/orgs",
"repos_url": "https://api.github.com/users/nipunjindal/repos",
"events_url": "https://api.github.com/users/nipunjindal/events{/privacy}",
"received_events_url": "https://api.github.com/users/nipunjindal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you @gante!"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Part of https://github.com/huggingface/transformers/issues/21737
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22036/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22036",
"html_url": "https://github.com/huggingface/transformers/pull/22036",
"diff_url": "https://github.com/huggingface/transformers/pull/22036.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22036.patch",
"merged_at": 1678364264000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22035
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22035/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22035/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22035/events
|
https://github.com/huggingface/transformers/pull/22035
| 1,615,560,336
|
PR_kwDOCUB6oc5LliB_
| 22,035
|
Avoid `text_config_dict` and `vision_config_dict` being saved for CLIP-like models
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"> Thanks! This may need to be copied over some CLIP-like models that also have some backward-compatibility code with the config dicts.\r\n\r\nSure, in the plan already. Quoted in the PR description\r\n\r\n> I will apply the same change to other CLIP-like models if the idea/approach is accepted.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
**Avoid `text_config_dict` and `vision_config_dict` being saved for CLIP-like models**, so there is less confusion.
Currently, configuration classes for CLIP-like models will save both `text_config` and `text_config_dict` if `text_config_dict` is provided (as `kwargs`), and similarly for `vision_config` and `vision_config_dict`. Many configuration files on the Hub have all these keys, and they look really confusing; see for example [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16/blob/main/config.json) or [laion/CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/blob/main/config.json#L115)
This issue dates back to before PR #19954. That PR tried to avoid the usage of `text_config_dict` and `vision_config_dict` (while keeping backward compatibility), but it didn't prevent `text_config_dict` and `vision_config_dict` from being saved.
This PR:
- avoid `text_config_dict` and `vision_config_dict` being saved,
- make sure all the values provided in `text_config_dict` and `vision_config_dict` are used to update `text_config` and `vision_config` (preserving backward compatibility), and only `text_config` and `vision_config` are saved
- **therefore, we can load an existing configuration, save it, and upload it again --> making it clean + less confusing, without breaking anything**
I will apply the same change to other CLIP-like models if the idea/approach is accepted.
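A hypothetical, stripped-down sketch of the merge behavior described above (the function name and keys are illustrative, not the actual implementation):

```python
def merge_clip_subconfig(kwargs, key="text_config"):
    """Fold a legacy `<key>_dict` into `<key>` so only `<key>` is serialized.

    Values from the legacy dict take precedence, preserving backward
    compatibility, while the legacy key itself is dropped from kwargs.
    """
    config = dict(kwargs.pop(key, None) or {})
    legacy = kwargs.pop(f"{key}_dict", None)
    if legacy is not None:
        config.update(legacy)
    kwargs[key] = config
    return kwargs

cfg = merge_clip_subconfig({"text_config": {"hidden_size": 512},
                            "text_config_dict": {"hidden_size": 768, "num_layers": 12}})
print(cfg)  # {'text_config': {'hidden_size': 768, 'num_layers': 12}}
```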
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22035/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22035",
"html_url": "https://github.com/huggingface/transformers/pull/22035",
"diff_url": "https://github.com/huggingface/transformers/pull/22035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22035.patch",
"merged_at": 1678303650000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22034
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22034/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22034/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22034/events
|
https://github.com/huggingface/transformers/pull/22034
| 1,615,552,741
|
PR_kwDOCUB6oc5LlgbI
| 22,034
|
Bug fix: token classification pipeline while passing offset_mapping
|
{
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"the current failing tests look unrelated to me \r\n",
"LGTM @sgugger \r\n\r\nI'm confused by the failure, I'm guessing it's CircleCI running the wrong runner, but I don't remember the fix."
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Bug fix: add a check so an `AttributeError` no longer prevents using slow tokenizers with `offset_mapping`
On token-classification pipelines, it threw an error (`AttributeError: None`) when using a slow tokenizer & passing `offset_mapping`.
It is intended to work so that (if you want) you can calculate offsets yourself while using a slow (or custom) tokenizer; otherwise the "start" & "end" values returned from the pipeline are `None`
For example 'google/canine-c' (pretend it is finetuned)
```python
from transformers import pipeline

token_classifier = pipeline(
    "token-classification", model='google/canine-c',
    aggregation_strategy="simple", ignore_labels=[],
)
text = "My name is Wolfgang"  # any example input
offset_mapping = [(0, 0)] + [(i, i + 1) for i, t in enumerate(text)] + [(0, 0)]  # canine is an easy enough tokenizer to calculate offsets ourselves, accounting for [CLS]/[SEP].
ents = token_classifier(text)
print(ents)  # without offset_mapping, "start" & "end" are None
ents = token_classifier(text, offset_mapping=offset_mapping)
print(ents)  # should return entities with "start" & "end" index values.
```
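For character-level tokenizers like CANINE, the offsets from the snippet above can be computed with a small helper (illustrative only; the `(0, 0)` sentinels stand in for the `[CLS]`/`[SEP]` positions):

```python
def char_offset_mapping(text):
    # one (start, end) pair per character, wrapped in (0, 0) sentinels
    # for the special [CLS] and [SEP] tokens
    return [(0, 0)] + [(i, i + 1) for i in range(len(text))] + [(0, 0)]

print(char_offset_mapping("ab"))  # [(0, 0), (0, 1), (1, 2), (0, 0)]
```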
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
- pipelines: @Narsil
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22034/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22034",
"html_url": "https://github.com/huggingface/transformers/pull/22034",
"diff_url": "https://github.com/huggingface/transformers/pull/22034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22034.patch",
"merged_at": 1678310507000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22033
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22033/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22033/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22033/events
|
https://github.com/huggingface/transformers/pull/22033
| 1,615,514,271
|
PR_kwDOCUB6oc5LlYIV
| 22,033
|
Edit the docstring of `image_processing_donut` to match code
|
{
"login": "vermouthmjl",
"id": 3142085,
"node_id": "MDQ6VXNlcjMxNDIwODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vermouthmjl",
"html_url": "https://github.com/vermouthmjl",
"followers_url": "https://api.github.com/users/vermouthmjl/followers",
"following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}",
"gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions",
"organizations_url": "https://api.github.com/users/vermouthmjl/orgs",
"repos_url": "https://api.github.com/users/vermouthmjl/repos",
"events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}",
"received_events_url": "https://api.github.com/users/vermouthmjl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@vermouthmjl If running `make style` didn't resolve the `check_code_quality` checks, make sure the most recent formatting libraries are installed using `pip install -e .[quality]`"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
It changes the list of arguments in the docstring of the `DonutImageProcessor` class, since the current docstring does not match the list of parameters in the code.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
Donut: @amyeroberts @alaradirik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22033/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22033",
"html_url": "https://github.com/huggingface/transformers/pull/22033",
"diff_url": "https://github.com/huggingface/transformers/pull/22033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22033.patch",
"merged_at": 1678383344000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22032
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22032/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22032/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22032/events
|
https://github.com/huggingface/transformers/pull/22032
| 1,615,494,099
|
PR_kwDOCUB6oc5LlTxs
| 22,032
|
handle numpy inputs in whole word mask data collator
|
{
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@Rocketknight1, can you have a look too plz? You have been working with these :) Then we can tag Sylvain, after you approve too"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Adds support for numpy arrays as inputs to `DataCollatorForWholeWordMask`. I added tests for all variants (np, pt, tf), but only tf had the bug, which is now fixed.
Fixes https://github.com/huggingface/transformers/issues/22009
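A minimal illustration (the helper name is assumed, not the collator's actual code) of the kind of input normalization needed so numpy rows behave like list inputs:

```python
import numpy as np

def row_to_ids(row):
    """Return a plain list of token ids from a numpy array or any sequence."""
    if isinstance(row, np.ndarray):
        return row.tolist()
    return list(row)

print(row_to_ids(np.array([101, 2023, 102])))  # [101, 2023, 102]
```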
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22032/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22032",
"html_url": "https://github.com/huggingface/transformers/pull/22032",
"diff_url": "https://github.com/huggingface/transformers/pull/22032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22032.patch",
"merged_at": 1678463430000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22031
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22031/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22031/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22031/events
|
https://github.com/huggingface/transformers/pull/22031
| 1,615,470,381
|
PR_kwDOCUB6oc5LlO2D
| 22,031
|
Add tokenize_kwargs parameter definition in the FeatureExtractionPipeline
|
{
"login": "anruijian",
"id": 115125339,
"node_id": "U_kgDOBtysWw",
"avatar_url": "https://avatars.githubusercontent.com/u/115125339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anruijian",
"html_url": "https://github.com/anruijian",
"followers_url": "https://api.github.com/users/anruijian/followers",
"following_url": "https://api.github.com/users/anruijian/following{/other_user}",
"gists_url": "https://api.github.com/users/anruijian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anruijian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anruijian/subscriptions",
"organizations_url": "https://api.github.com/users/anruijian/orgs",
"repos_url": "https://api.github.com/users/anruijian/repos",
"events_url": "https://api.github.com/users/anruijian/events{/privacy}",
"received_events_url": "https://api.github.com/users/anruijian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks great to me.\r\n\r\nTo fix quality you can do `pip install -e .[quality] && make fixup` \r\n\r\n@sgugger for final review.",
"The quality is failing due to the branch being too old, not something in this PR. Merging.\r\nThanks for your contribution!"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21971
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@Narsil
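The `tokenize_kwargs` merging this PR documents can be sketched as follows. This is a simplified, assumed shape of the pipeline's parameter sanitization, not the library's exact code; the function name is illustrative:

```python
def sanitize_parameters(truncation=None, tokenize_kwargs=None):
    # Sketch: the pipeline forwards `tokenize_kwargs` to the tokenizer, and an
    # explicit `truncation` argument is folded into that dict. The same option
    # must not be supplied both ways.
    tokenize_kwargs = dict(tokenize_kwargs or {})
    if truncation is not None:
        if "truncation" in tokenize_kwargs:
            raise ValueError("truncation parameter defined twice")
        tokenize_kwargs["truncation"] = truncation
    return {"tokenize_kwargs": tokenize_kwargs}
```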
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22031/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22031",
"html_url": "https://github.com/huggingface/transformers/pull/22031",
"diff_url": "https://github.com/huggingface/transformers/pull/22031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22031.patch",
"merged_at": 1678293812000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22030
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22030/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22030/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22030/events
|
https://github.com/huggingface/transformers/pull/22030
| 1,615,464,066
|
PR_kwDOCUB6oc5LlNd-
| 22,030
|
Thomas/llama
|
{
"login": "yonikremer",
"id": 76044840,
"node_id": "MDQ6VXNlcjc2MDQ0ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/76044840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonikremer",
"html_url": "https://github.com/yonikremer",
"followers_url": "https://api.github.com/users/yonikremer/followers",
"following_url": "https://api.github.com/users/yonikremer/following{/other_user}",
"gists_url": "https://api.github.com/users/yonikremer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonikremer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonikremer/subscriptions",
"organizations_url": "https://api.github.com/users/yonikremer/orgs",
"repos_url": "https://api.github.com/users/yonikremer/repos",
"events_url": "https://api.github.com/users/yonikremer/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonikremer/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi! Thank you for opening a PR. I see that you've forked my branch. I think it's perfectly fine if you want to keep on working on my branch for your experiments. However I think the more promising branch is this one https://github.com/huggingface/transformers/pull/21955 (as in the most likely to be merged). I'm closing this PR as this has low probability of being merged on `main`."
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22030/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22030",
"html_url": "https://github.com/huggingface/transformers/pull/22030",
"diff_url": "https://github.com/huggingface/transformers/pull/22030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22030.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22028
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22028/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22028/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22028/events
|
https://github.com/huggingface/transformers/pull/22028
| 1,615,319,842
|
PR_kwDOCUB6oc5LkuT4
| 22,028
|
Fix test for torchneuroncore in Trainer
|
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
The test was always passing since the function is not None...
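One plausible reading of this bug class can be sketched as follows (the stub name is illustrative, not the actual Trainer code): checking the function object instead of its return value always evaluates truthy.

```python
def is_torch_neuroncore_available():
    # Illustrative stub: a real environment check would live here.
    return False

# Buggy check: tests the function object itself, which is never None/falsy.
broken = bool(is_torch_neuroncore_available)
# Fixed check: tests the function's actual return value.
fixed = bool(is_torch_neuroncore_available())
```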
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22028/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22028",
"html_url": "https://github.com/huggingface/transformers/pull/22028",
"diff_url": "https://github.com/huggingface/transformers/pull/22028.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22028.patch",
"merged_at": 1678284763000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22027
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22027/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22027/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22027/events
|
https://github.com/huggingface/transformers/issues/22027
| 1,615,290,456
|
I_kwDOCUB6oc5gR2BY
| 22,027
|
KeyError: 'eval_metric-name' in trainer.py, line 2339
|
{
"login": "pminervini",
"id": 227357,
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pminervini",
"html_url": "https://github.com/pminervini",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pminervini/subscriptions",
"organizations_url": "https://api.github.com/users/pminervini/orgs",
"repos_url": "https://api.github.com/users/pminervini/repos",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"received_events_url": "https://api.github.com/users/pminervini/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Without seeing the code you run and how it was launched, there is very little we can do to help you.",
"@sgugger no worries it's just to be able to reference to an issue when/if I submit a pull request"
] | 1,678
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
### System Info
Latest `transformers` version from the `main` branch, running on Ubuntu
```python
File "code-cli.py", line 347, in <module>
main(sys.argv[1:])
File "code-cli.py", line 281, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 1631, in train
return inner_training_loop(
File "[..]/python3.10/site-packages/transformers/trainer.py", line 1975, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 2236, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 2339, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_metric-name'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Seq2seq training on NQ
### Expected behavior
```python
File "code-cli.py", line 347, in <module>
main(sys.argv[1:])
File "code-cli.py", line 281, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 1631, in train
return inner_training_loop(
File "[..]/python3.10/site-packages/transformers/trainer.py", line 1975, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 2236, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 2339, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_metric-name'
```
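The failure above typically means `metric_for_best_model` names a metric that `compute_metrics` never returned. A minimal sketch of the lookup (assumed simplification of how `Trainer` builds the key, with an illustrative metrics dict):

```python
def metric_key(metric_for_best_model):
    # Trainer prefixes user metrics with "eval_" unless already prefixed,
    # then looks that key up in the evaluation metrics dict.
    if not metric_for_best_model.startswith("eval_"):
        return "eval_" + metric_for_best_model
    return metric_for_best_model

metrics = {"eval_loss": 0.5, "eval_exact_match": 71.2}  # illustrative eval output
key = metric_key("exact_match")
assert key in metrics  # a KeyError here means compute_metrics did not return this name
```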
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22027/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22026
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22026/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22026/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22026/events
|
https://github.com/huggingface/transformers/pull/22026
| 1,615,272,712
|
PR_kwDOCUB6oc5LkkNK
| 22,026
|
[`bnb`] Fix bnb error message
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/22018
This PR introduces a clearer error message for users who want to explore how to dispatch a model between CPU & GPU when loading a model in 8-bit
cc @sgugger
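The check behind the clearer error message can be sketched like this. This is assumed logic for illustration, not the exact library code; the function name and message text are hypothetical:

```python
def validate_8bit_device_map(device_map, enable_fp32_cpu_offload=False):
    # Sketch: 8-bit quantized weights cannot be dispatched to CPU or disk
    # unless fp32 CPU offload is explicitly enabled by the user.
    offloaded = [name for name, dev in device_map.items() if dev in ("cpu", "disk")]
    if offloaded and not enable_fp32_cpu_offload:
        raise ValueError(
            "Some modules are dispatched on the CPU or the disk: "
            + ", ".join(offloaded)
            + ". Set `llm_int8_enable_fp32_cpu_offload=True` to allow this."
        )

validate_8bit_device_map({"lm_head": 0})  # all modules on GPU 0: fine
```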
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22026/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22026",
"html_url": "https://github.com/huggingface/transformers/pull/22026",
"diff_url": "https://github.com/huggingface/transformers/pull/22026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22026.patch",
"merged_at": 1678283505000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22025
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22025/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22025/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22025/events
|
https://github.com/huggingface/transformers/pull/22025
| 1,615,224,199
|
PR_kwDOCUB6oc5LkZuP
| 22,025
|
Update ALIGN docs
|
{
"login": "alaradirik",
"id": 8944735,
"node_id": "MDQ6VXNlcjg5NDQ3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8944735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alaradirik",
"html_url": "https://github.com/alaradirik",
"followers_url": "https://api.github.com/users/alaradirik/followers",
"following_url": "https://api.github.com/users/alaradirik/following{/other_user}",
"gists_url": "https://api.github.com/users/alaradirik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alaradirik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alaradirik/subscriptions",
"organizations_url": "https://api.github.com/users/alaradirik/orgs",
"repos_url": "https://api.github.com/users/alaradirik/repos",
"events_url": "https://api.github.com/users/alaradirik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alaradirik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Improves ALIGN docs, fixes typos.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22025/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22025",
"html_url": "https://github.com/huggingface/transformers/pull/22025",
"diff_url": "https://github.com/huggingface/transformers/pull/22025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22025.patch",
"merged_at": 1678360337000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22024
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22024/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22024/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22024/events
|
https://github.com/huggingface/transformers/pull/22024
| 1,615,167,681
|
PR_kwDOCUB6oc5LkNf1
| 22,024
|
[WIP]`NLLB-MoE` Adds the moe model
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,679
| 1,679
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #21300
To-Dos:
- [x] Conversion script and original weights available [here](https://huggingface.co/ArthurZ/fairseq-nllb-moe)
- [x] Converted checkpoints and configuration file available:
- [moe-128](https://huggingface.co/ArthurZ/nllb-moe-128) experts
- [x] Make the common tests go green
- [x] Implement top-2 gating mechanism
- [x] Add integration tests for:
- [x] the routers
- [x] the logits
- [x] the generation using greedy search
- [x] Cleanup the PR
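The top-2 gating mentioned in the to-dos can be sketched in a few lines of NumPy. This is a minimal, assumed simplification for illustration (no capacity limits or auxiliary losses), not the model's actual router code:

```python
import numpy as np

def top2_gate(logits):
    # logits: (num_tokens, num_experts) router scores for each token.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)  # softmax over experts
    # Pick the two highest-probability experts per token, best first.
    top2 = np.argsort(probs, axis=-1)[:, -2:][:, ::-1]
    weights = np.take_along_axis(probs, top2, axis=-1)
    weights /= weights.sum(axis=-1, keepdims=True)  # renormalize over the pair
    return top2, weights
```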
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22024/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22024/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22024",
"html_url": "https://github.com/huggingface/transformers/pull/22024",
"diff_url": "https://github.com/huggingface/transformers/pull/22024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22024.patch",
"merged_at": 1679938920000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22023
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22023/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22023/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22023/events
|
https://github.com/huggingface/transformers/pull/22023
| 1,615,067,228
|
PR_kwDOCUB6oc5Lj30M
| 22,023
|
Update `AudioClassificationPipelineTests::test_small_model_pt` for PT 2.0.0
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
(Not tiny, but not too large either.) The expected values differ across torch/CUDA versions.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22023/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22023",
"html_url": "https://github.com/huggingface/transformers/pull/22023",
"diff_url": "https://github.com/huggingface/transformers/pull/22023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22023.patch",
"merged_at": 1678280208000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22022
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22022/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22022/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22022/events
|
https://github.com/huggingface/transformers/pull/22022
| 1,615,052,872
|
PR_kwDOCUB6oc5Lj01I
| 22,022
|
VideoMAE doctest - use valid dummy pixel values
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
This PR updates the raw input video frames to the expected data type.
The pixel values passed into the image processor in the tests took values sampled from a standard normal distribution. For an image (or frame) this represents pixels that have already been rescaled to [0, 1] and normalized, i.e. an input that has already been passed through the image processor.
After merging #21969, resizing the image throws an error as this image cannot be converted to a PIL.Image.Image without possibly unexpected behaviour from numpy and overflow issues.
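The distinction between valid raw frames and already-processed ones can be sketched as follows (illustrative shapes and variable names; not the test's exact code):

```python
import numpy as np

# Valid "raw" dummy video frames: uint8 in [0, 255], channels-last, safely
# convertible to PIL images for resizing.
num_frames, height, width = 8, 32, 32
raw_frames = np.random.randint(0, 256, size=(num_frames, height, width, 3), dtype=np.uint8)

# What the test previously used: standard-normal floats, i.e. frames that look
# already rescaled/normalized; converting these to PIL risks overflow and
# unexpected numpy casting behaviour.
already_processed = np.random.randn(num_frames, 3, height, width)
```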
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22022/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22022",
"html_url": "https://github.com/huggingface/transformers/pull/22022",
"diff_url": "https://github.com/huggingface/transformers/pull/22022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22022.patch",
"merged_at": 1678276483000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22021
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22021/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22021/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22021/events
|
https://github.com/huggingface/transformers/pull/22021
| 1,614,894,852
|
PR_kwDOCUB6oc5LjS4Q
| 22,021
|
[DO NOT MERGE] Test v0.13.0.rc0
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"CI is green. I'm closing this :)"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
DO NOT MERGE.
Only to test the CI with huggingface_hub 0.13 release.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22021/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22021",
"html_url": "https://github.com/huggingface/transformers/pull/22021",
"diff_url": "https://github.com/huggingface/transformers/pull/22021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22021.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22020
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22020/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22020/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22020/events
|
https://github.com/huggingface/transformers/pull/22020
| 1,614,860,457
|
PR_kwDOCUB6oc5LjLqm
| 22,020
|
Add missing optional argument summary_proj_to_labels to XLNetConfig
|
{
"login": "akashsara",
"id": 10437649,
"node_id": "MDQ6VXNlcjEwNDM3NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/10437649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akashsara",
"html_url": "https://github.com/akashsara",
"followers_url": "https://api.github.com/users/akashsara/followers",
"following_url": "https://api.github.com/users/akashsara/following{/other_user}",
"gists_url": "https://api.github.com/users/akashsara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akashsara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akashsara/subscriptions",
"organizations_url": "https://api.github.com/users/akashsara/orgs",
"repos_url": "https://api.github.com/users/akashsara/repos",
"events_url": "https://api.github.com/users/akashsara/events{/privacy}",
"received_events_url": "https://api.github.com/users/akashsara/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22020). All of your documentation changes will be reflected on that endpoint.",
"Yes. I wanted to get some clarity on that before working on fixing the tests. From a logical perspective, I would believe that the argument should exist since at the end of the day it decides only one thing - \"Whether the projection outputs should have config.num_labels or config.hidden_size classes.\". As it stands, it will only use `config.hidden_size` classes since the argument doesn't currently exist. Do you know someone who might be able to clarify this?\r\n\r\nI believe the tests are failing since this change causes the output of the summary layer to be different. Setting `summary_proj_to_labels=False` causes all tests to pass locally. I'll look into the tests sometime this weekend assuming that we want to include this change.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
# What does this PR do?
`XLNetConfig` (`src\transformers\models\xlnet\configuration_xlnet.py`) lists an argument `summary_proj_to_labels` as optional and with a default value of `True`. However, this is not actually included in the arguments and is not set anywhere. Initializing an XLNet model thus results in no such parameter existing. Also fixes a very minor typo (`boo` -> `bool`).
For reference: the same argument is also listed in `XLMConfig` (`src\transformers\models\xlm\configuration_xlm.py`), and there it is actually used.
From personal experience, this argument is used in `SequenceSummary` (`src\transformers\modeling_utils.py`). When using default arguments, there is an inconsistency between the models where one would have `hidden_size` -> `hidden_size` layers while the other would have `hidden_size` -> `num_labels` layers.
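A simplified sketch (not the actual `transformers` source) of how `SequenceSummary` decides the projection's output size, showing why a missing `summary_proj_to_labels` attribute silently falls back to `hidden_size`:

```python
# Hedged sketch of the out_features decision in SequenceSummary.
def summary_proj_out_features(config) -> int:
    """Return the out_features the summary projection layer would get."""
    if getattr(config, "summary_proj_to_labels", False) and config.num_labels > 0:
        return config.num_labels
    return config.hidden_size

class Cfg:
    """Minimal stand-in for a model config object."""
    def __init__(self, hidden_size, num_labels, **kw):
        self.hidden_size = hidden_size
        self.num_labels = num_labels
        for k, v in kw.items():
            setattr(self, k, v)

# With the attribute present (XLM-style), the projection maps to num_labels:
print(summary_proj_out_features(Cfg(768, 2, summary_proj_to_labels=True)))  # 2
# Without it (XLNet before this change), it falls back to hidden_size:
print(summary_proj_out_features(Cfg(768, 2)))  # 768
```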
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc: @sgugger
Still a draft right now. Need to make changes to the tests to adapt to this. Please let me know if this is intended functionality though.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22020/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22020",
"html_url": "https://github.com/huggingface/transformers/pull/22020",
"diff_url": "https://github.com/huggingface/transformers/pull/22020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22020.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/22019
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22019/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22019/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22019/events
|
https://github.com/huggingface/transformers/pull/22019
| 1,614,843,226
|
PR_kwDOCUB6oc5LjH-b
| 22,019
|
fixes the gradient checkpointing of whisper
|
{
"login": "soma2000-lang",
"id": 56045049,
"node_id": "MDQ6VXNlcjU2MDQ1MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/56045049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soma2000-lang",
"html_url": "https://github.com/soma2000-lang",
"followers_url": "https://api.github.com/users/soma2000-lang/followers",
"following_url": "https://api.github.com/users/soma2000-lang/following{/other_user}",
"gists_url": "https://api.github.com/users/soma2000-lang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soma2000-lang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soma2000-lang/subscriptions",
"organizations_url": "https://api.github.com/users/soma2000-lang/orgs",
"repos_url": "https://api.github.com/users/soma2000-lang/repos",
"events_url": "https://api.github.com/users/soma2000-lang/events{/privacy}",
"received_events_url": "https://api.github.com/users/soma2000-lang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
Fixes the gradient checkpointing of Whisper.
@gante
#21737
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22019/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22019",
"html_url": "https://github.com/huggingface/transformers/pull/22019",
"diff_url": "https://github.com/huggingface/transformers/pull/22019.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22019.patch",
"merged_at": 1678303298000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22018
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22018/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22018/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22018/events
|
https://github.com/huggingface/transformers/issues/22018
| 1,614,838,684
|
I_kwDOCUB6oc5gQHuc
| 22,018
|
PretrainedModel.from_pretrained does not work with load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True and device_map='auto'
|
{
"login": "sgsdxzy",
"id": 1655353,
"node_id": "MDQ6VXNlcjE2NTUzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1655353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgsdxzy",
"html_url": "https://github.com/sgsdxzy",
"followers_url": "https://api.github.com/users/sgsdxzy/followers",
"following_url": "https://api.github.com/users/sgsdxzy/following{/other_user}",
"gists_url": "https://api.github.com/users/sgsdxzy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgsdxzy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgsdxzy/subscriptions",
"organizations_url": "https://api.github.com/users/sgsdxzy/orgs",
"repos_url": "https://api.github.com/users/sgsdxzy/repos",
"events_url": "https://api.github.com/users/sgsdxzy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgsdxzy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"All the weights offloaded to the CPU won't be in int8 though, so the model is not loaded in 8 bits as requested. This is why we throw an error and choose not to support this use case (cc @younesbelkada ).",
"Yes, as explained by @sgugger we don't support `device_map=auto` + `llm_int8_enable_fp32_cpu_offload` you need to pass a custom device map as explained in https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu \r\nThe main motivation behind that is that we want to avoid unexpected behavior for users that are new to this feature and prefer to support this only for advanced use cases where users know exactly what they are doing. \r\nI agree though the warning message is slightly misleading and we can phrase it differently ",
"@sgsdxzy \r\nAs #22026 has been merged it closed the issue, feel free to re-open the issue if you think that there is something that needs to be fixed\r\nThanks!"
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Nas
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True)
AutoModelForCausalLM.from_pretrained(path, device_map='auto', quantization_config=quantization_config)
```
If the model does not fit into VRAM, it reports:
```
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you have set a value for `max_memory` you should increase that. To have
an idea of the modules that are set on the CPU or RAM you can print model.hf_device_map.
```
auto-created device map example:
```
{'model.decoder.embed_tokens': 0, 'model.decoder.layers.0': 0, 'model.decoder.layers.1': 0, 'model.decoder.layers.2': 0, 'model.decoder.layers.3': 0, 'model.decoder.layers.4': 0, 'model.decoder.layers.5': 0, 'model.decoder.layers.6': 0, 'model.decoder.layers.7': 0, 'model.decoder.layers.8': 0, 'model.decoder.layers.9': 0, 'model.decoder.layers.10': 0, 'model.decoder.layers.11': 0, 'model.decoder.layers.12': 0, 'model.decoder.layers.13': 0, 'model.decoder.layers.14': 0, 'model.decoder.layers.15': 0, 'model.decoder.layers.16': 0, 'model.decoder.layers.17': 0, 'model.decoder.layers.18': 0, 'model.decoder.layers.19': 0, 'model.decoder.layers.20': 0, 'model.decoder.layers.21': 0, 'model.decoder.layers.22': 0, 'model.decoder.layers.23': 0, 'model.decoder.layers.24': 0, 'model.decoder.layers.25': 'cpu', 'model.decoder.layers.26': 'cpu', 'model.decoder.layers.27': 'cpu', 'model.decoder.layers.28': 'cpu', 'model.decoder.layers.29': 'cpu', 'model.decoder.layers.30': 'cpu', 'model.decoder.layers.31': 'cpu', 'model.decoder.layers.32': 'cpu', 'model.decoder.layers.33': 'cpu', 'model.decoder.layers.34': 'cpu', 'model.decoder.layers.35': 'cpu', 'model.decoder.layers.36': 'cpu', 'model.decoder.layers.37': 'cpu', 'model.decoder.layers.38': 'cpu', 'model.decoder.layers.39': 'cpu', 'model.decoder.norm': 'cpu', 'lm_head': 'cpu'}
```
### Expected behavior
It should auto-create the device_map, quantize what's in VRAM to int8, and keep what's on cpu/RAM as float32.
In fact, if the `device_map` is passed manually it runs correctly. The problem is that `PreTrainedModel.from_pretrained` expands `device_map='auto'` into the actual mapping after populating `modules_to_not_convert`, so modules automatically offloaded to RAM are missing from the list. If I edit modeling_utils.py and expand `device_map='auto'` before `replace_8bit_linear`, it works correctly.
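A sketch of the workaround the maintainers point to: build an explicit device_map up front so the CPU-offloaded modules are known before int8 conversion. The module names and layer counts below mirror the auto-created map shown above and are assumptions, not a definitive recipe:

```python
# Hypothetical helper: explicitly place the first n_gpu_layers decoder
# layers on GPU 0 and offload the rest (plus norm and lm_head) to CPU.
def build_device_map(n_layers: int, n_gpu_layers: int) -> dict:
    device_map = {"model.decoder.embed_tokens": 0}
    for i in range(n_layers):
        device_map[f"model.decoder.layers.{i}"] = 0 if i < n_gpu_layers else "cpu"
    device_map["model.decoder.norm"] = "cpu"
    device_map["lm_head"] = "cpu"
    return device_map

device_map = build_device_map(n_layers=40, n_gpu_layers=25)

# Then pass it instead of 'auto' (sketch, assuming the snippet above):
# model = AutoModelForCausalLM.from_pretrained(
#     path, device_map=device_map, quantization_config=quantization_config
# )
```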
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22018/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22017
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22017/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22017/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22017/events
|
https://github.com/huggingface/transformers/issues/22017
| 1,614,764,225
|
I_kwDOCUB6oc5gP1jB
| 22,017
|
Weight mismatch when using deepspeed zero-stage 3 and pretrained codegen model
|
{
"login": "KaiLv69",
"id": 39761308,
"node_id": "MDQ6VXNlcjM5NzYxMzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/39761308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaiLv69",
"html_url": "https://github.com/KaiLv69",
"followers_url": "https://api.github.com/users/KaiLv69/followers",
"following_url": "https://api.github.com/users/KaiLv69/following{/other_user}",
"gists_url": "https://api.github.com/users/KaiLv69/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaiLv69/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaiLv69/subscriptions",
"organizations_url": "https://api.github.com/users/KaiLv69/orgs",
"repos_url": "https://api.github.com/users/KaiLv69/repos",
"events_url": "https://api.github.com/users/KaiLv69/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaiLv69/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey, there is something wrong indeed : \r\n> ignore_mismatched_sizes=True # if False, it would run in error \r\nThe error in witch you run should indicate how to fix the problems (most probably malformed configuration file)",
"Thanks for you quick reply.\r\nActually, I followed the error message to set it to True. When I set 'ignore_mismatched_sizes' to False, it prints as followings:\r\n<img width=\"1440\" alt=\"image\" src=\"https://user-images.githubusercontent.com/39761308/223704955-699b9db5-906c-4d1a-af62-61f93189d968.png\">\r\n",
"Ah sorry, you were right in ignoring the missmatches! Yes, there is a special argument to initialise your model using deepspeed in transformers but it does not support the deepspeed stage 3: \r\n```\r\n * `low_cpu_mem_usage` algorithm:\r\n\r\n This is an experimental function that loads the model using ~1x model size CPU memory\r\n\r\n Here is how it works:\r\n\r\n 1. save which state_dict keys we have\r\n 2. drop state_dict before the model is created, since the latter takes 1x model size CPU memory\r\n 3. after the model has been instantiated switch to the meta device all params/buffers that\r\n are going to be replaced from the loaded state_dict\r\n 4. load state_dict 2nd time\r\n 5. replace the params/buffers from the state_dict\r\n\r\n Currently, it can't handle deepspeed ZeRO stage 3 and ignores loading errors\r\n```\r\nThe documentation mentions this. \r\nSo this is expeced, but @stas00 is the deep speed boss so pinging him for help, but this is more a feature request than a bug IMO",
"For Non HF-Trainer integration please see:\r\nhttps://huggingface.co/docs/transformers/main/main_classes/deepspeed#nontrainer-deepspeed-integration\r\n\r\n`zero.Init` is already done for you inside the modeling code - you just need to set `dschf = HfDeepSpeedConfig(args.deepspeed_config)` and keep it alive before you call `from_pretrained` - that's it.\r\n\r\nI fixed your program to work:\r\n\r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoConfig\r\nfrom transformers.models.codegen.modeling_codegen import CodeGenMLP\r\nimport argparse\r\nimport torch\r\nimport time, datetime\r\nimport deepspeed\r\nfrom deepspeed.accelerator import get_accelerator\r\nfrom torch.utils.data import Dataset\r\nfrom transformers.activations import ClippedGELUActivation, LinearActivation\r\nfrom lion_pytorch import Lion\r\nSEQ_LEN = 300\r\nVOCAB_SIZE = 10000\r\nDATA_SIZE = 100\r\n\r\nclass FakeDataset(Dataset):\r\n    def __init__(self, length, seq_len, vocab_size):\r\n        self.length = length\r\n        self.seq_len = seq_len\r\n        self.vocab_size = vocab_size\r\n\r\n    def __len__(self):\r\n        return self.length\r\n\r\n    def __getitem__(self, index):\r\n        input_ids = torch.randint(0, self.vocab_size, (self.seq_len, ))\r\n        attention_mask = torch.ones_like(input_ids)\r\n        return input_ids, attention_mask\r\n\r\n\r\ndef train(args):\r\n    from transformers.deepspeed import HfDeepSpeedConfig\r\n    dschf = HfDeepSpeedConfig(args.deepspeed_config) # keep this object alive\r\n\r\n    model = AutoModelForCausalLM.from_pretrained(\"Salesforce/codegen-350M-mono\")\r\n\r\n    optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)\r\n\r\n    print(f\"[{datetime.datetime.today()}] Loading dataset.\")\r\n    dataset = FakeDataset(DATA_SIZE, SEQ_LEN, VOCAB_SIZE)\r\n\r\n    print(f\"[{datetime.datetime.today()}] Initializing DeepSpeed Engine.\")\r\n    model_engine, optimizer, trainloader, _ = deepspeed.initialize(\r\n        args=args,\r\n        model=model,\r\n        optimizer=optimizer,\r\n        model_parameters=model.parameters(),\r\n        training_data=dataset)\r\n\r\n    model.train()\r\n    for i, data in enumerate(trainloader):\r\n        model_engine.zero_grad()\r\n        optimizer.zero_grad()\r\n        input_ids, attn_mask = data[0].cuda(), data[1].cuda()\r\n        output = model_engine(input_ids=input_ids,\r\n                              attention_mask=attn_mask,\r\n                              labels=input_ids)\r\n\r\n        model_engine.backward(output['loss'])\r\n\r\n        model_engine.step()\r\n\r\n        # 2 pytorch allocator cache flushes since last step. this happens when\r\n        # there is high memory pressure and is detrimental to performance. if\r\n        # this is happening frequently consider adjusting settings to reduce\r\n        # memory consumption. If you are unable to make the cache flushes go\r\n        # away consider adding get_accelerator().empty_cache() calls in your\r\n        # training loop to ensure that all ranks flush their caches at the\r\n        # same time\r\n        get_accelerator().empty_cache()\r\n\r\nif __name__ == \"__main__\":\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument('--local_rank', type=int, default=-1)\r\n    parser.add_argument('--deepspeed_config', type=str)\r\n    args = parser.parse_args()\r\n    train(args)\r\n```",
"BTW, when you use deepspeed offload w/ LION it will be slow. \r\n\r\nYou want deepspeed's Adam instead or turn off offload. You shouldn't need it with 8 gpus and this small model. Unless you were just using it for a repro case, still 8 gpus is a lot of sharding.\r\n\r\nThe Deepspeed team are working on flagging this incompatibility here https://github.com/microsoft/DeepSpeed/pull/2971\r\n\r\nMake sure to enabled gradient checkpointing - which will save you a ton of gpu memory at a small cost of slowdown. (unrelated to deepspeed)",
"Thanks very much. The problem have been solved."
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.15.0-189-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help?
@stas @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. My code for load model.
```python
from transformers import AutoModelForCausalLM, AutoConfig
from transformers.models.codegen.modeling_codegen import CodeGenMLP
import argparse
import torch
import time, datetime
import deepspeed
from deepspeed.accelerator import get_accelerator
from torch.utils.data import Dataset
from transformers.activations import ClippedGELUActivation, LinearActivation
from lion_pytorch import Lion
SEQ_LEN = 300
VOCAB_SIZE = 10000
DATA_SIZE = 100
class FakeDataset(Dataset):
def __init__(self, length, seq_len, vocab_size):
self.length = length
self.seq_len = seq_len
self.vocab_size = vocab_size
def __len__(self):
return self.length
def __getitem__(self, index):
input_ids = torch.randint(0, self.vocab_size, (self.seq_len, ))
attention_mask = torch.ones_like(input_ids)
return input_ids, attention_mask
def train():
with deepspeed.zero.Init():
model = AutoModelForCausalLM.from_pretrained(
"Salesforce/codegen-350M-mono",
ignore_mismatched_sizes=True # if False, it would run in error
)
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)
print(f"[{datetime.datetime.today()}] Loading dataset.")
dataset = FakeDataset(DATA_SIZE, SEQ_LEN, VOCAB_SIZE)
print(f"[{datetime.datetime.today()}] Initializing DeepSpeed Engine.")
model_engine, optimizer, trainloader, _ = deepspeed.initialize(
args=args,
model=model,
optimizer=optimizer,
model_parameters=model.parameters(),
training_data=dataset)
model.train()
for i, data in enumerate(trainloader):
model_engine.zero_grad()
optimizer.zero_grad()
input_ids, attn_mask = data[0].cuda(), data[1].cuda()
output = model_engine(input_ids=input_ids,
attention_mask=attn_mask,
labels=input_ids)
model_engine.backward(output['loss'])
model_engine.step()
# 2 pytorch allocator cache flushes since last step. this happens when
# there is high memory pressure and is detrimental to performance. if
# this is happening frequently consider adjusting settings to reduce
# memory consumption. If you are unable to make the cache flushes go
# away consider adding get_accelerator().empty_cache() calls in your
# training loop to ensure that all ranks flush their caches at the
# same time
get_accelerator().empty_cache()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1)
parser = deepspeed.add_config_arguments(parser)
args = parser.parse_args()
train()
```
2. Deepspeed config
```json
{
"gradient_accumulation_steps": 1,
"train_micro_batch_size_per_gpu": 1,
"steps_per_print": 1,
"wall_clock_breakdown": true,
"fp16": {
"enabled": true,
"auto_cast": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.001,
"betas": [
0.8,
0.999
],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"zero_allow_untested_optimizer": true,
"zero_optimization": {
"stage": 3,
"contiguous_gradients": true,
"overlap_comm": true,
"reduce_scatter": true,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
}
}
}
```
4. Bash script to run the training:
```bash
deepspeed --include localhost:0,1,2,3,4,5,6,7 train.py --deepspeed_config 350m.json
```
5. Relevant output snippet, showing the weird behaviour: the model isn't being properly initialized with the pretrained weights.
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/39761308/223641149-1f25d27f-2069-43c3-bddc-0d6cad143be5.png">
### Expected behavior
Model being properly initialized with the pretrained weights when using DeepSpeed ZERO Stage-3. It seems that the model parameters are randomly initialized so far.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22017/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22016
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22016/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22016/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22016/events
|
https://github.com/huggingface/transformers/issues/22016
| 1,614,631,548
|
I_kwDOCUB6oc5gPVJ8
| 22,016
|
`clean_up_tokenization` too many false positives
|
{
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! Thanks for pointing this out! I do agree with you on this one, I am also guessing that if you have `he said that 'revenge' was over` the same problem will occur. \r\nCurrently this is what is being used :\r\n```python \r\n out_string = (\r\n out_string.replace(\" .\", \".\")\r\n .replace(\" ?\", \"?\")\r\n .replace(\" !\", \"!\")\r\n .replace(\" ,\", \",\")\r\n .replace(\" ' \", \"'\")\r\n .replace(\" n't\", \"n't\")\r\n .replace(\" 'm\", \"'m\")\r\n .replace(\" 's\", \"'s\")\r\n .replace(\" 've\", \"'ve\")\r\n .replace(\" 're\", \"'re\")\r\n )\r\n```\r\nSo a lot of patterns are going to get swallowed up by this. \r\ncc @Narsil is it not too breaking to switch to the `re` pattern? The same thing happens in `wordpiece.rs`",
"> holy grail of original == decode(encode(original))\r\n\r\nBloom tokenizer achieves this if you're looking for it. To the exception that there's a very old default: https://github.com/huggingface/transformers/pull/20846\r\n\r\n@ArthurZucker \r\nI feel really bad about making changes to such old things. It's been in use for so long I don't feel it's a bug anymore but a feature. Allowing users to disengage from the cleanup (and maybe make it a default for newly created tokenizers) is OK, but modifying existing behavior, I don't feel good about (in theory I like it, but I'm fairly confident it will blow up as soon as released, and if it blows up a little bit later, then we'll be in a worse position even since you have 2 different behavior unable to find a good compromise).\r\n\r\nMy take is that the replace is bad, but the cleanup itself is bad and should just be not used anymore (and for BC we should just modify future behavior, not the current one).\r\n\r\n",
"Yes this method seems like a good candidate for the great deprecation, and we can see if we want to officially support something better.",
"I appreciate the reluctance to 'move fast and break things' - nice to see :)\r\n\r\nAs a user finding his way around the Hugging Face packages, it did strike me as odd that there was extra magic in the `transformers` tokenizer that wasn't in the underlying `tokenizers` tokenizer. It certainly makes troubleshooting difficult, so my humble vote would go toward deprecating.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
The method `PreTrainedTokenizerBase.clean_up_tokenization` attempts to fix some quote marks, but breaks quite a lot of the time.
I'm testing various tokenization techniques searching for the holy grail of `original == decode(encode(original))`
Looping through docs in OpenWebText, here's some of the results:

The fix is pretty easy: instead of doing `text.replace(" 's", "'s")`, do `re.sub(r" 's\b", "'s", text)`.
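A minimal before/after sketch of that fix (the sample sentence is made up for illustration):

```python
import re

decoded = "he said it 's true that 'spite' drove him"

# Current behaviour: a plain replace also eats the space before quoted words.
naive = decoded.replace(" 's", "'s")
print(naive)  # he said it's true that'spite' drove him

# With a word boundary, only the genuine contraction is collapsed;
# " 's" followed by more word characters (a quoted word) is left alone.
fixed = re.sub(r" 's\b", "'s", decoded)
print(fixed)  # he said it's true that 'spite' drove him
```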
I note that this has already been logged, and the AUTO CLOSED here: https://github.com/huggingface/transformers/issues/6164
Please let me know if you would like to hear my thoughts about auto closing bugs :)
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For any tokenizer `tok`, note the output of:
```py
tok.decode(tok("asking why 'my people' wanted").input_ids)
```
### Expected behavior
Output should be "asking why 'my people' wanted", not "asking why'my people' wanted"
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22016/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22014
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22014/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22014/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22014/events
|
https://github.com/huggingface/transformers/issues/22014
| 1,614,598,292
|
I_kwDOCUB6oc5gPNCU
| 22,014
|
Support `padding_side` in `Blip2Processor`
|
{
"login": "jemmyshin",
"id": 16580382,
"node_id": "MDQ6VXNlcjE2NTgwMzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/16580382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jemmyshin",
"html_url": "https://github.com/jemmyshin",
"followers_url": "https://api.github.com/users/jemmyshin/followers",
"following_url": "https://api.github.com/users/jemmyshin/following{/other_user}",
"gists_url": "https://api.github.com/users/jemmyshin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jemmyshin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jemmyshin/subscriptions",
"organizations_url": "https://api.github.com/users/jemmyshin/orgs",
"repos_url": "https://api.github.com/users/jemmyshin/repos",
"events_url": "https://api.github.com/users/jemmyshin/events{/privacy}",
"received_events_url": "https://api.github.com/users/jemmyshin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi,\r\n\r\nYou can achieve that by simply updating the `padding_side` attribute of the processor's tokenizer:\r\n```\r\nprocessor.tokenizer.padding_side = \"left\"\r\n```\r\nNote that Blip2Processor is just a wrapper around both the image processor and the tokenizer.",
"Thanks a lot!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### Feature request
Support different `padding_side` for `Blip2Processor`.
### Motivation
When using the BLIP-2 LLMs, I noticed that the required padding side differs between a decoder-only model (opt-2.7b for example) and an encoder-decoder model (flan-t5-xl for example). So I assumed the padding would differ between `Salesforce/blip2-opt-2.7b` and `Salesforce/blip2-flan-t5-xl`, but I actually got the same padding results; the default is `padding_side="right"`.
Code example:
```
prompt = ["hello world", "Question: how many cats are there? Answer:"]
processor_1 = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
processor_2 = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
inputs_1 = processor_1(text=prompt, return_tensors="pt", padding=True)
inputs_2 = processor_2(text=prompt, return_tensors="pt", padding=True)
```
Output (the same for inputs_1 and inputs_2):
```
{'input_ids': tensor([[ 2, 42891, 232, 1, 1, 1, 1, 1, 1, 1,
1],
[ 2, 45641, 35, 141, 171, 10017, 32, 89, 116, 31652,
35]]), 'attention_mask': tensor([[1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
I believe that right padding for flan-t5 will give the wrong outputs when calling `generate`, correct me if I am wrong: (transformers/generation/utils.py)
<img width="940" alt="image" src="https://user-images.githubusercontent.com/16580382/223613908-3ca57575-5536-4296-a449-e49ad3b4fa90.png">
Expected outputs (when setting `padding_side`=left):
```
{'input_ids': tensor([[ 1, 1, 1, 1, 1, 1, 1,
1, 2, 42891, 232],
[ 2, 45641, 35, 141, 171, 10017, 32, 89, 116, 31652,
35]]), 'attention_mask': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
### Your contribution
I found that this `padding_side` feature exists in `AutoTokenizer`; would it be possible to expose it in `Blip2Processor`?
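For illustration, here is a hedged, library-free sketch of what left vs. right padding does to raw id lists (`pad` is a made-up helper; in transformers the switch is `processor.tokenizer.padding_side = "left"`, as noted in the replies):

```python
def pad(sequences, side="right", pad_id=0):
    """Pad lists of token ids to equal length on the chosen side."""
    width = max(len(s) for s in sequences)
    out = []
    for seq in sequences:
        padding = [pad_id] * (width - len(seq))
        out.append(seq + padding if side == "right" else padding + seq)
    return out

prompts = [[2, 42891, 232], [2, 45641, 35, 141, 171]]
print(pad(prompts, side="right"))  # default behavior observed in the issue
print(pad(prompts, side="left"))   # what a decoder-only LM wants for generate
```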
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22014/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22013
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22013/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22013/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22013/events
|
https://github.com/huggingface/transformers/issues/22013
| 1,614,596,821
|
I_kwDOCUB6oc5gPMrV
| 22,013
|
Dataset load problem when using own data to run run_mim.py
|
{
"login": "chenbingxiayu",
"id": 23647595,
"node_id": "MDQ6VXNlcjIzNjQ3NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23647595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenbingxiayu",
"html_url": "https://github.com/chenbingxiayu",
"followers_url": "https://api.github.com/users/chenbingxiayu/followers",
"following_url": "https://api.github.com/users/chenbingxiayu/following{/other_user}",
"gists_url": "https://api.github.com/users/chenbingxiayu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenbingxiayu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenbingxiayu/subscriptions",
"organizations_url": "https://api.github.com/users/chenbingxiayu/orgs",
"repos_url": "https://api.github.com/users/chenbingxiayu/repos",
"events_url": "https://api.github.com/users/chenbingxiayu/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenbingxiayu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Would you like to open a PR with your fix?",
"> Would you like to open a PR with your fix?\r\nI am not sure whether my solution is correct, I hope your organization could check it. If the solution is ok, it is my honor to open a new PR.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
transformers 4.27.0
python 3.8.16
Ubuntu 20.04
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I directly use the script run_mim.py (examples/pytorch/image-pretraining) to fine-tune the ViT model on my own data, I get the error FileNotFoundError: Unable to find 'my dataset absolute path' at /. However, when I run the script run_image_classification_no_trainer.py (examples/pytorch/image-classification) to fine-tune the ViT model on the same data with the same path, everything is fine.
### Expected behavior
I compared the implementations of run_mim.py and run_image_classification_no_trainer.py. It seems the former has a problem when setting train_dir.
In run_image_classification_no_trainer.py, the implementation is data_files["train"] = os.path.join(args.train_dir, "**") (see https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L266)
In run_mim.py, the implementation is data_files["train"] = self.train_dir (see https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mim.py#L109)
The latter is missing the "**" glob suffix in the file path used for loading the dataset.
I changed the code in my local file, and the script run_mim.py runs well.
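As a sketch, the one-line difference boils down to appending a recursive glob so the dataset loader receives file patterns rather than a bare directory path (the directory below is hypothetical):

```python
import os

train_dir = "/data/my_images/train"  # hypothetical local image folder

# what run_mim.py currently builds, which the loader can fail to resolve:
data_files_broken = {"train": train_dir}

# what run_image_classification_no_trainer.py builds instead:
data_files_fixed = {"train": os.path.join(train_dir, "**")}
print(data_files_fixed["train"])
```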
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22013/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22012
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22012/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22012/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22012/events
|
https://github.com/huggingface/transformers/pull/22012
| 1,614,549,272
|
PR_kwDOCUB6oc5LiKgV
| 22,012
|
update: bertology paper
|
{
"login": "QiushiSun",
"id": 54871790,
"node_id": "MDQ6VXNlcjU0ODcxNzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/54871790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QiushiSun",
"html_url": "https://github.com/QiushiSun",
"followers_url": "https://api.github.com/users/QiushiSun/followers",
"following_url": "https://api.github.com/users/QiushiSun/following{/other_user}",
"gists_url": "https://api.github.com/users/QiushiSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QiushiSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QiushiSun/subscriptions",
"organizations_url": "https://api.github.com/users/QiushiSun/orgs",
"repos_url": "https://api.github.com/users/QiushiSun/repos",
"events_url": "https://api.github.com/users/QiushiSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/QiushiSun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Add additional reference papers for the documentation of BERTology.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22012/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22012",
"html_url": "https://github.com/huggingface/transformers/pull/22012",
"diff_url": "https://github.com/huggingface/transformers/pull/22012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22012.patch",
"merged_at": 1678280070000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22011
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22011/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22011/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22011/events
|
https://github.com/huggingface/transformers/issues/22011
| 1,614,503,880
|
I_kwDOCUB6oc5gO1_I
| 22,011
|
Blip2ForConditionalGeneration.from_pretrained is limited by 100% CPU usability (on one single core)
|
{
"login": "Marcophono2",
"id": 22599855,
"node_id": "MDQ6VXNlcjIyNTk5ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22599855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Marcophono2",
"html_url": "https://github.com/Marcophono2",
"followers_url": "https://api.github.com/users/Marcophono2/followers",
"following_url": "https://api.github.com/users/Marcophono2/following{/other_user}",
"gists_url": "https://api.github.com/users/Marcophono2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Marcophono2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Marcophono2/subscriptions",
"organizations_url": "https://api.github.com/users/Marcophono2/orgs",
"repos_url": "https://api.github.com/users/Marcophono2/repos",
"events_url": "https://api.github.com/users/Marcophono2/events{/privacy}",
"received_events_url": "https://api.github.com/users/Marcophono2/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @younesbelkada ",
"Hello @Marcophono2 \r\nThanks for the issue, can you try: \r\n```python\r\nimport torch\r\nimport requests\r\nfrom PIL import Image\r\nfrom transformers import Blip2Processor, Blip2ForConditionalGeneration\r\nimport torch\r\n\r\ndevice = \"cuda\"\r\n\r\nprocessor = Blip2Processor.from_pretrained(\"Salesforce/blip2-flan-t5-xxl\")\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-flan-t5-xxl\", load_in_8bit=True, device_map=\"auto\")\r\n\r\nprint(model.hf_device_map)\r\n\r\nfor i in range(1, 923):\r\n raw_image = Image.open('UIDimgs/' + str(i) + '.jpg').convert('RGB')\r\n\r\n inputs = processor(raw_image, return_tensors=\"pt\").to(device, torch.float16)\r\n\r\n out = model.generate(**inputs, max_length=64, min_length=20)\r\n print(i,': ',processor.decode(out[0], skip_special_tokens=True))\r\n```\r\nAnd let me know what you get for `print(model.hf_device_map)`?",
"Thank you, @younesbelkada !The result I get is\r\n\r\n`{'': 0}`",
"This is a bit strange @Marcophono2 , \r\n\r\n`{'': 0}` indicates that the entire model is on the GPU device. Can you confirm with us the GPU VRAM of your gpu?\r\nAlso I would replace: \r\n```python\r\nwith torch.device(\"cuda\"):\r\n model = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-flan-t5-xxl\", load_in_8bit=True, device_map=\"auto\")\r\n```\r\nWith:\r\n```python\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-flan-t5-xxl\", load_in_8bit=True, device_map=\"auto\")\r\n```\r\nAlso make sure to use the latest `accelerate` and `bitsandbytes` versions:\r\n```bash\r\npip install --upgrade accelerate bitsandbytes\r\n```",
"Yes, that is correct, @younesbelkada , the entire model is in the VRAM (RTX 4090). There is not much space left but it's matching. ;)\r\nBefore I tried without \r\n\r\n`with torch.device(\"cuda\"):`\r\n\r\nI updated accelerate from 0.16 to 0.17 (bitsandbytes was up to date) but no difference. Meanwhile I am not sure anymore if this 100% cpu usage is really a \"limit\". When I analyse how the load is split up then I can see that sometimes 2 cores are working. One with 40%, the other with 61% (as an example). Then it would be just an accident. But what would then be the bottleneck that my GPU usability is never > 32%?",
"It seems that the model loading in 8 bit is the reason for the 100% cpu (one core/thread) limitation. I replaced the code now with\r\n\r\n`model3 = Blip2ForConditionalGeneration.from_pretrained(\"Salesforce/blip2-opt-2.7b\", torch_dtype=torch.float16).to(\"cuda\")`\r\n\r\nand the cpu can use up to 200% which the gpu usage is at 60%. Still not perfect but double performance. But I do not want to use the 2.7 model. :-) I want to use the blip2-flan-t5-xxl model which is too large for my VRAM as long as I do not use the 8 bit version. Has anyone an idea how I can activate also the other cpu cores when using 8 bit?",
"Sorry @ArthurZucker , but as you seem to be very near at the core, may be you have an idea for this issue I posted last week, too?",
"Hey, I think setting `devic_map = \"auto\"` should help balancing the load when using the `flan-t5-xxl` model to both CPU and GPU. This should allow you to run on both. You need `accelerate` library for this to work! Would that fix your issue? ",
"Nope, @ArthurZucker . I already have device_map = \"auto\" included in my code. Or do you mean to implement it anywhere else too? Also accelerate is installed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.19.0-31-generic-x86_64-with-glibc2.36
- Python version: 3.10.6
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 2.0.0.dev20230209+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run this code on a computer with a strong GPU and a strong CPU:
```
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
import torch
device = "cuda"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")
with torch.device("cuda"):
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")
for i in range(1, 923):
raw_image = Image.open('UIDimgs/' + str(i) + '.jpg').convert('RGB')
inputs = processor(raw_image, return_tensors="pt").to(device, torch.float16)
out = model.generate(**inputs, max_length=64, min_length=20)
print(i,': ',processor.decode(out[0], skip_special_tokens=True))
```
### Expected behavior
Hello!
When running the above code, the utilization of my RTX 4090 is only around 30%, while my CPU is pinned at 100% the whole time. Unfortunately, Python only uses a single core of my AMD 5900X (12+12 cores).
Can anyone see an error in my code? How can I get the code to use more than a single CPU core?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22011/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22010
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22010/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22010/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22010/events
|
https://github.com/huggingface/transformers/issues/22010
| 1,614,418,418
|
I_kwDOCUB6oc5gOhHy
| 22,010
|
Can't import Blip2Processor
|
{
"login": "tanayvarshney",
"id": 11531975,
"node_id": "MDQ6VXNlcjExNTMxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/11531975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanayvarshney",
"html_url": "https://github.com/tanayvarshney",
"followers_url": "https://api.github.com/users/tanayvarshney/followers",
"following_url": "https://api.github.com/users/tanayvarshney/following{/other_user}",
"gists_url": "https://api.github.com/users/tanayvarshney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanayvarshney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanayvarshney/subscriptions",
"organizations_url": "https://api.github.com/users/tanayvarshney/orgs",
"repos_url": "https://api.github.com/users/tanayvarshney/repos",
"events_url": "https://api.github.com/users/tanayvarshney/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanayvarshney/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You need to install transformers from source to get access to BLIP-2:\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"@sgugger thank you! Just one minor feedback since you are the docs PIC, if a note modal or just a highlighted note could be added informing if a module hasn't been added to the stable release would be helpful. (If it is already there, I might have missed it, apologies.)",
"Note that it's not in the [stable documentation](https://huggingface.co/docs/transformers/index) (which is what is viewed by default) only the [main documentation](https://huggingface.co/docs/transformers/main/en/index). Did you get on the page via a search engine maybe and did not realize you were not on the documentation of the latest release?",
"> Note that it's not in the [stable documentation](https://huggingface.co/docs/transformers/index) (which is what is viewed by default) only the [main documentation](https://huggingface.co/docs/transformers/main/en/index). Did you get on the page via a search engine maybe and did not realize you were not on the documentation of the latest release?\r\n\r\nI am not the author but this is exactly what happened to me - I did not see at all there's a dropdown for versions so I assumed BLIP2 is just available.",
"🤓 ",
"Please restart your notebook, and re-install using below command.\r\n\r\n`pip install git+https://github.com/huggingface/transformers`\r\n"
] | 1,678
| 1,693
| 1,678
|
NONE
| null |
### System Info
I was trying to follow [this tutorial](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2Model.forward.example) and ran into the following issue:
```
ImportError: cannot import name 'Blip2Processor' from 'transformers' (/usr/local/lib/python3.8/dist-packages/transformers/__init__.py)
```
version: '4.26.1'
@sgugger @ArthurZucker @amyeroberts (there is no PIC for multimodal models so tagging both PICs)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just run the following example: https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2Model.forward.example
### Expected behavior
Should import the preprocessor
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22010/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22009
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22009/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22009/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22009/events
|
https://github.com/huggingface/transformers/issues/22009
| 1,614,418,161
|
I_kwDOCUB6oc5gOhDx
| 22,009
|
DataCollatorForWholeWordMask does not handle numpy inputs when return_tensors="tf"
|
{
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @dwyatte 👋 \r\n\r\nAt a first glance, a missing `tf.cast` seems to be indeed the problem. Would you be interested in opening a PR with the fix? 🤗 ",
"@gante sure thing, here you go: https://github.com/huggingface/transformers/pull/22032. Requested you for review"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante @Rocketknight1
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import numpy as np
from transformers import AutoTokenizer, DataCollatorForWholeWordMask
features = [{"input_ids": np.array(list(range(10)))}, {"input_ids": np.array(list(range(10)))}]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
data_collator = DataCollatorForWholeWordMask(tokenizer, return_tensors="tf")
batch = data_collator(features)
```
```
InvalidArgumentError Traceback (most recent call last)
Cell In[1], line 9
6 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
7 data_collator = DataCollatorForWholeWordMask(tokenizer, return_tensors="tf")
----> 9 batch = data_collator(features)
File ~/venv/lib/python3.9/site-packages/transformers/data/data_collator.py:43, in DataCollatorMixin.__call__(self, features, return_tensors)
41 return_tensors = self.return_tensors
42 if return_tensors == "tf":
---> 43 return self.tf_call(features)
44 elif return_tensors == "pt":
45 return self.torch_call(features)
File ~/venv/lib/python3.9/site-packages/transformers/data/data_collator.py:912, in DataCollatorForWholeWordMask.tf_call(self, examples)
910 mask_labels.append(self._whole_word_mask(ref_tokens))
911 batch_mask = _tf_collate_batch(mask_labels, self.tokenizer, pad_to_multiple_of=self.pad_to_multiple_of)
--> 912 inputs, labels = self.tf_mask_tokens(batch_input, batch_mask)
913 return {"input_ids": inputs, "labels": labels}
File ~/venv/lib/python3.9/site-packages/transformers/data/data_collator.py:1067, in DataCollatorForWholeWordMask.tf_mask_tokens(self, inputs, mask_labels)
1065 indices_random = self.tf_bernoulli(input_shape, 0.1) & masked_indices & ~indices_replaced
1066 random_words = tf.random.uniform(input_shape, maxval=len(self.tokenizer), dtype=tf.int64)
-> 1067 inputs = tf.where(indices_random, random_words, inputs)
1069 # The rest of the time (10% of the time) we keep the masked input tokens unchanged
1070 return inputs, labels
File ~/venv/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File ~/venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py:7215, in raise_from_not_ok_status(e, name)
7213 def raise_from_not_ok_status(e, name):
7214 e.message += (" name: " + name if name is not None else "")
-> 7215 raise core._status_to_exception(e) from None
InvalidArgumentError: cannot compute SelectV2 as input #2(zero-based) was expected to be a int64 tensor but is a int32 tensor [Op:SelectV2]
```
### Expected behavior
No exception.
This is a pretty simple bug. It seems we just need to cast the inputs to tf.int64 [here](https://github.com/huggingface/transformers/blob/b338414e614a30af5f940269484ef15bf716d078/src/transformers/data/data_collator.py#L910), which we already do in `DataCollatorForLanguageModeling` but not in `DataCollatorForWholeWordMask`.
This cast is necessary to use the data collator with https://github.com/huggingface/datasets `datasets.Dataset.to_tf_dataset`, since that method implicitly formats data as `numpy`, causing it to arrive at the data collator as int32.
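A minimal stand-in for the missing cast (numpy is used here to keep the sketch dependency-light; in the collator itself the equivalent one-liner would be a `tf.cast(batch_input, tf.int64)` before the `tf.where` call):

```python
import numpy as np

# to_tf_dataset hands the collator numpy-formatted batches, typically int32:
batch_input = np.array([[101, 2054, 2003, 102]], dtype=np.int32)

# tf.where refuses to mix an int32 tensor with the int64 random_words tensor,
# so the batch needs an explicit upcast first (numpy equivalent shown):
batch_input = batch_input.astype(np.int64)
print(batch_input.dtype)  # int64
```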
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22009/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22008
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22008/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22008/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22008/events
|
https://github.com/huggingface/transformers/issues/22008
| 1,614,345,640
|
I_kwDOCUB6oc5gOPWo
| 22,008
|
DataCollatorForSpanPreTraining
|
{
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Transformers is a library of models, not data collators. You can adapt the code of this data collator to PyTorch in your code, but we won't have it in the main library (the same way the Flax one is just in an example script).",
"No worries, I thought maybe it would fit nicely within the same class as [DataCollatorForLanguageModeling](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L609) but totally understand wanting to keep the scope contained!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
### Feature request
It seems there's already an existing script for [T5 pretraining](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_t5_mlm_flax.py) with a DataCollator, but it's only available in Flax. Would we be able to add a data collator for the span pretraining task described in the T5 paper?
### Motivation
Currently it's not super easy to run T5 pretraining in PyTorch with Transformers.
### Your contribution
I can help with the PR!
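For reference, the core idea (mask roughly 15% of tokens in contiguous spans, then replace each span with a sentinel) can be sketched framework-agnostically. This is a simplified sketch, not the exact span-partitioning logic of the Flax script, and `span_corruption_mask` is a hypothetical helper:

```python
import numpy as np

def span_corruption_mask(length, noise_density=0.15, mean_span_len=3, seed=0):
    """Mark roughly noise_density of positions as noise, in contiguous spans."""
    rng = np.random.default_rng(seed)
    target_noise = max(1, round(length * noise_density))
    mask = np.zeros(length, dtype=bool)
    while mask.sum() < target_noise:
        span_len = max(1, int(rng.poisson(mean_span_len)))
        start = int(rng.integers(0, length))
        mask[start:start + span_len] = True
    return mask

mask = span_corruption_mask(100)
# Each masked span would then be collapsed into one sentinel token
# (<extra_id_0>, <extra_id_1>, ...) on the encoder side, with the targets
# holding each sentinel followed by the original span tokens.
```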
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22008/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22007
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22007/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22007/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22007/events
|
https://github.com/huggingface/transformers/pull/22007
| 1,614,237,610
|
PR_kwDOCUB6oc5LhIL8
| 22,007
|
Fix case when using --gradient_accumulation_steps with DDP disabled.
|
{
"login": "sangeethabal",
"id": 83724701,
"node_id": "MDQ6VXNlcjgzNzI0NzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83724701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sangeethabal",
"html_url": "https://github.com/sangeethabal",
"followers_url": "https://api.github.com/users/sangeethabal/followers",
"following_url": "https://api.github.com/users/sangeethabal/following{/other_user}",
"gists_url": "https://api.github.com/users/sangeethabal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sangeethabal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sangeethabal/subscriptions",
"organizations_url": "https://api.github.com/users/sangeethabal/orgs",
"repos_url": "https://api.github.com/users/sangeethabal/repos",
"events_url": "https://api.github.com/users/sangeethabal/events{/privacy}",
"received_events_url": "https://api.github.com/users/sangeethabal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I think it would be easier to just adapt [this property](https://github.com/huggingface/transformers/blob/dfe9a3197364c7f0e2169d7c16c357c9c1311cb9/src/transformers/training_args.py#L1802) to add the torch_neuroncore_available there.\r\n\r\n@sgugger I have made the required changes to this PR itself. Please take a look. TIA"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
When the --gradient_accumulation_steps option is used with DDP disabled, the Trainer calls ```model.no_sync```, which only exists on DDP-wrapped models. This PR fixes the ```model.no_sync``` issue.
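The fix can be sketched as guarding the `no_sync` call. This is a hedged sketch of the pattern rather than the actual diff, and `maybe_no_sync` is a hypothetical helper:

```python
import contextlib

def maybe_no_sync(model):
    """Use model.no_sync() when the model is DDP-wrapped, else a no-op context."""
    if hasattr(model, "no_sync"):
        return model.no_sync()
    return contextlib.nullcontext()

class PlainModel:  # stand-in for a model not wrapped in DistributedDataParallel
    pass

with maybe_no_sync(PlainModel()):
    pass  # gradient-accumulation step runs; no DDP gradient-sync suppression needed
```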
Fixes https://github.com/aws-neuron/aws-neuron-sdk/issues/635
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22007/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22007",
"html_url": "https://github.com/huggingface/transformers/pull/22007",
"diff_url": "https://github.com/huggingface/transformers/pull/22007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22007.patch",
"merged_at": 1678390318000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22006
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22006/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22006/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22006/events
|
https://github.com/huggingface/transformers/pull/22006
| 1,614,099,982
|
PR_kwDOCUB6oc5Lgp6B
| 22,006
|
Update tiny model creation script and some others files
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
The original goal is to update the tiny model creation script so we can create tiny models for some (newly added) model classes. It turns out some other files needed to be updated too. See my own review comments.
Note: this PR doesn't imply we are able to create tiny models for all the model classes involved in this PR. Some model classes require more work (`speecht5` and `tvlt`, for example), but let me do that in separate PR(s).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22006/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/22006",
"html_url": "https://github.com/huggingface/transformers/pull/22006",
"diff_url": "https://github.com/huggingface/transformers/pull/22006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/22006.patch",
"merged_at": 1678224674000
}
|
https://api.github.com/repos/huggingface/transformers/issues/22005
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22005/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22005/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22005/events
|
https://github.com/huggingface/transformers/issues/22005
| 1,614,048,026
|
I_kwDOCUB6oc5gNGsa
| 22,005
|
Bug in t5x to PyTorch weights conversion script
|
{
"login": "rinapch",
"id": 61157346,
"node_id": "MDQ6VXNlcjYxMTU3MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/61157346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rinapch",
"html_url": "https://github.com/rinapch",
"followers_url": "https://api.github.com/users/rinapch/followers",
"following_url": "https://api.github.com/users/rinapch/following{/other_user}",
"gists_url": "https://api.github.com/users/rinapch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rinapch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rinapch/subscriptions",
"organizations_url": "https://api.github.com/users/rinapch/orgs",
"repos_url": "https://api.github.com/users/rinapch/repos",
"events_url": "https://api.github.com/users/rinapch/events{/privacy}",
"received_events_url": "https://api.github.com/users/rinapch/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker and @younesbelkada ",
"hello @rinapch \r\nThanks for the issue, \r\nwe used the same script to convert `flan-ul2` and did not face into any issue. Can you share with use the `t5x` version you used?",
"This is related to an update of jax and jax.numpy. `torch.FloatTensor(weights[\"token_embedder\"][\"embedding\"])` does not work anymore as it was reported. Will have a look at the broader impact this has on our codebase. Thanks for reporting!",
"Hey @younesbelkada! As far as I know, t5x do not really release versions (their `version.py` still states \"0.0.0\" - https://github.com/google-research/t5x/blob/main/t5x/version.py). I used a clone of their repo to build t5x module, and I cloned it on monday, so the code is up to date ",
"hi @rinapch \r\ncan you try:\r\n```bash\r\npip install git+https://github.com/google-research/t5x@45c1a9d02321afeadb43f496de83c52421f52d66\r\n```\r\nthis is the version of `t5x` that worked fine on my setup",
"Repeated the steps with this version and I get the following error:\r\n> File \"convert_t5x_checkpoint_to_pytorch.py\", line 36, in <module>\r\n from t5x import checkpoints\r\n File \"/root/.cache/pypoetry/virtualenvs/chatbot-JrwxGvoq-py3.8/lib/python3.8/site-packages/t5x/__init__.py\", line 17, in <module>\r\n import t5x.adafactor\r\n File \"/root/.cache/pypoetry/virtualenvs/chatbot-JrwxGvoq-py3.8/lib/python3.8/site-packages/t5x/adafactor.py\", line 63, in <module>\r\n from t5x import utils\r\n File \"/root/.cache/pypoetry/virtualenvs/chatbot-JrwxGvoq-py3.8/lib/python3.8/site-packages/t5x/utils.py\", line 46, in <module>\r\n from t5x import checkpoints\r\n File \"/root/.cache/pypoetry/virtualenvs/chatbot-JrwxGvoq-py3.8/lib/python3.8/site-packages/t5x/checkpoints.py\", line 160, in <module>\r\n orbax.checkpoint.utils.register_ts_spec_for_serialization()\r\nAttributeError: module 'orbax.checkpoint.utils' has no attribute 'register_ts_spec_for_serialization'",
"@rinapch \r\nCan you try with: `orbax @ git+https://github.com/google/orbax@4ca7a3b61081e91323c89cf09f8c1a53c06cccda` ?\r\n\r\n```bash\r\npip install git+https://github.com/google/orbax@4ca7a3b61081e91323c89cf09f8c1a53c06cccda\r\n```",
"This worked, yep!",
"Awesome, feel free to close the issue, so the fix was to:\r\n```bash\r\npip install git+https://github.com/google-research/t5x@45c1a9d02321afeadb43f496de83c52421f52d66\r\npip install git+https://github.com/google/orbax@4ca7a3b61081e91323c89cf09f8c1a53c06cccda\r\n```"
] | 1,678
| 1,679
| 1,679
|
NONE
| null |
### System Info
transformers version: 4.26.1
Platform: Ubuntu 20.04.5 LTS (Focal Fossa)
Python version: 3.8
Huggingface_hub version: 0.12.1
PyTorch version (GPU?): 1.13.1 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): 0.6.6 (GPU)
Jax version: 0.4.5
JaxLib version: 0.4.4
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is the official example for script `transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py`
1. `gsutil -m cp -r gs://t5-data/pretrained_models/t5x/t5_1_1_small $HOME/`
2. `python3 convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path=$HOME/t5_1_1_small --config_file=config.json --pytorch_dump_path=$HOME/t5_1_1_small_pt`
Where `config.json` is a config for `t5-small `(https://huggingface.co/t5-small/blob/main/config.json)
When running this, I get an error:
> Traceback (most recent call last):
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 231, in <module>
convert_t5x_checkpoint_to_pytorch(
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 200, in convert_t5x_checkpoint_to_pytorch
load_t5x_weights_in_t5(model, config, t5x_checkpoint_path, is_encoder_only)
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 181, in load_t5x_weights_in_t5
state_dict = make_state_dict(converted, is_encoder_only)
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 160, in make_state_dict
state_dict = collections.OrderedDict([(k, torch.from_numpy(v.copy())) for (k, v) in converted_params.items()])
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 160, in <listcomp>
state_dict = collections.OrderedDict([(k, torch.from_numpy(v.copy())) for (k, v) in converted_params.items()])
TypeError: expected np.ndarray (got Array)
This can be fixed easily by importing numpy and changing line 160 to:
`state_dict = collections.OrderedDict([(k, torch.from_numpy(np.array(v.copy()))) for (k, v) in converted_params.items()])`
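The failure mode and fix can be illustrated without jax installed. `FakeJaxArray` below is a hypothetical stand-in for `jax.Array`, which supports the array protocol but is not an `np.ndarray` (so `torch.from_numpy` rejects it):

```python
import numpy as np

class FakeJaxArray:
    """Hypothetical stand-in for jax.Array: array-like, but not np.ndarray."""
    def __init__(self, data):
        self._data = np.array(data, dtype=np.float32)
    def __array__(self, dtype=None, copy=None):
        return np.asarray(self._data, dtype=dtype)
    def copy(self):
        return FakeJaxArray(self._data)

v = FakeJaxArray([[1.0, 2.0], [3.0, 4.0]])
arr = np.array(v.copy())  # the fix: materialize a real np.ndarray first
# torch.from_numpy(arr) now succeeds, since arr is a genuine ndarray
```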
### Expected behavior
After converting `v` to `np.array(v)`, the script executes fine and returns
> All model checkpoint weights were used when initializing T5ForConditionalGeneration.
>All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at /root/t5_1_1_small_pt.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
loading configuration file /root/t5_1_1_small_pt/generation_config.json
>Generate config GenerationConfig {
"_from_model_config": true,
"decoder_start_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 0,
"transformers_version": "4.26.1"
}
>Done
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22005/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22004
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22004/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22004/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22004/events
|
https://github.com/huggingface/transformers/issues/22004
| 1,614,042,972
|
I_kwDOCUB6oc5gNFdc
| 22,004
|
CLIP text/vision embedding dimension mismatch for the model 'laion/CLIP-ViT-H-14-laion2B-s32B-b79K'
|
{
"login": "johngull",
"id": 1451797,
"node_id": "MDQ6VXNlcjE0NTE3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1451797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johngull",
"html_url": "https://github.com/johngull",
"followers_url": "https://api.github.com/users/johngull/followers",
"following_url": "https://api.github.com/users/johngull/following{/other_user}",
"gists_url": "https://api.github.com/users/johngull/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johngull/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johngull/subscriptions",
"organizations_url": "https://api.github.com/users/johngull/orgs",
"repos_url": "https://api.github.com/users/johngull/repos",
"events_url": "https://api.github.com/users/johngull/events{/privacy}",
"received_events_url": "https://api.github.com/users/johngull/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @amyeroberts ",
"Hi, this particular model (laion/CLIP-ViT-H-14-laion2B-s32B-b79K) uses a different `hidden_size` for the text and vision encoders (1024 and 1280 respectively), but they get projected to the same dimensionality using a linear projection layer (for this model, the `projection_dim` is 1024 as seen in the [config](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/blob/main/config.json#L177)). It's recommended to use the [get_text_features](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPModel.get_text_features) and [get_image_features](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPModel.get_image_features) methods of `CLIPModel` to get embeddings which have the same dimensionality. The pooler output is pre-projection.",
"@NielsRogge Thank you for the explanation"
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I use the text and vision models from the same CLIP checkpoint and get embeddings of different dimensionality, which contradicts the idea behind CLIP models.
Here is the source code to reproduce (outputs as comments)
```
CACHE_PATH = "../models_cache"
gpu_device = "cuda"
import torch
from PIL import Image
from transformers import CLIPTextModel, CLIPTokenizer, CLIPVisionModel, AutoProcessor
clip_model = CLIPTextModel.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", cache_dir=CACHE_PATH)
clip_model = clip_model.to("cuda")
tokenizer = CLIPTokenizer.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", cache_dir=CACHE_PATH)
clip_v_model = CLIPVisionModel.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", cache_dir=CACHE_PATH)
clip_v_model = clip_v_model.to("cuda")
v_preprocess = AutoProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", cache_dir=CACHE_PATH)
....
image = Image.open(img_filename)
prompt = "some prompt"
with torch.no_grad():
inputs = tokenizer([prompt], padding=True, return_tensors="pt").to(gpu_device)
outputs = clip_model(**inputs)
print(outputs.pooler_output.shape) # torch.Size([1, 1024])
print(outputs.last_hidden_state.shape) # torch.Size([1, 12, 1024])
inputs = v_preprocess(images=image, return_tensors="pt").to(gpu_device)
image_features = clip_v_model(**inputs)
print(image_features.pooler_output.shape) # torch.Size([1, 1280]) !!!!
print(image_features.last_hidden_state.shape) # torch.Size([1, 257, 1280])
```
### Expected behavior
I expect `CLIPVisionModel` to produce an embedding with shape [1, 1024].
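For context, matching-dimension embeddings come from `CLIPModel`'s projection heads (`get_text_features` / `get_image_features`), not from the encoder pooler outputs. The shape arithmetic can be illustrated with random stand-in weights (a NumPy sketch, not the actual model weights):

```python
import numpy as np

rng = np.random.default_rng(0)
text_pooled = rng.standard_normal((1, 1024))    # text encoder hidden_size
vision_pooled = rng.standard_normal((1, 1280))  # vision encoder hidden_size

# Stand-ins for CLIP's text_projection / visual_projection linear layers,
# both mapping into the shared projection_dim (1024 for this checkpoint)
W_text = rng.standard_normal((1024, 1024))
W_vision = rng.standard_normal((1280, 1024))

text_emb = text_pooled @ W_text
image_emb = vision_pooled @ W_vision
# text_emb and image_emb now share the (1, 1024) shape
```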
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22004/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22003
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22003/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22003/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22003/events
|
https://github.com/huggingface/transformers/issues/22003
| 1,613,825,490
|
I_kwDOCUB6oc5gMQXS
| 22,003
|
Add X-Decoder Model
|
{
"login": "ChanBong",
"id": 73221930,
"node_id": "MDQ6VXNlcjczMjIxOTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/73221930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChanBong",
"html_url": "https://github.com/ChanBong",
"followers_url": "https://api.github.com/users/ChanBong/followers",
"following_url": "https://api.github.com/users/ChanBong/following{/other_user}",
"gists_url": "https://api.github.com/users/ChanBong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChanBong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChanBong/subscriptions",
"organizations_url": "https://api.github.com/users/ChanBong/orgs",
"repos_url": "https://api.github.com/users/ChanBong/repos",
"events_url": "https://api.github.com/users/ChanBong/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChanBong/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] |
[
"Hi @ChanBong, thanks for opening the issue!\r\n\r\nYou can expect to see an X-Decoder PR is the next two weeks :)",
"Hi @alaradirik, can we please collaborate in adding this model?",
"Hi @atharvakavitkar, the PR is almost done but won't include the _referring image editing_ task, which require integration with Stable Diffusion inpainting. Perhaps you could create a tutorial or demo for this task?",
"Hi @alaradirik, thank you for reaching out to me. I must admit that I have not yet added a model to HuggingFace. But I really want to learn how to do it. Would creating this tutorial be the right step? Or should I search for a simpler model to implement from scratch?"
] | 1,678
| 1,682
| null |
NONE
| null |
### Model description
X-Decoder is a generalized decoding pipeline that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks.
The model exhibits strong transferability to a wide range of downstream tasks in both zero-shot and fine-tuning settings, achieving state-of-the-art open-vocabulary segmentation and referring segmentation on 10 settings across 7 datasets, and it should be a valuable addition to the Transformers library.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/pdf/2212.11270.pdf
Code: https://github.com/microsoft/X-Decoder
Weights: https://huggingface.co/spaces/xdecoder/Demo/blob/main/xdecoder_focalt_last.pt
Author: @eltociear
Cc: @NielsRogge @alaradirik
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22003/timeline
| null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/22002
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22002/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22002/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22002/events
|
https://github.com/huggingface/transformers/issues/22002
| 1,613,736,026
|
I_kwDOCUB6oc5gL6ha
| 22,002
|
Unable to create a Keras model with a pretrained TFBertModel using inputs_embeds as inputs
|
{
"login": "Giorgia3",
"id": 26458245,
"node_id": "MDQ6VXNlcjI2NDU4MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/26458245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Giorgia3",
"html_url": "https://github.com/Giorgia3",
"followers_url": "https://api.github.com/users/Giorgia3/followers",
"following_url": "https://api.github.com/users/Giorgia3/following{/other_user}",
"gists_url": "https://api.github.com/users/Giorgia3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Giorgia3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Giorgia3/subscriptions",
"organizations_url": "https://api.github.com/users/Giorgia3/orgs",
"repos_url": "https://api.github.com/users/Giorgia3/repos",
"events_url": "https://api.github.com/users/Giorgia3/events{/privacy}",
"received_events_url": "https://api.github.com/users/Giorgia3/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @Giorgia3, this is unfortunately part of Keras that we can't work around! What's happening is that our layers can take multiple arguments, but Keras insists that the first argument is always present. \r\n\r\nThe first argument to a `TFBertModel` is `input_ids`, which is a sequence of integer tokens. However, you can also pass pre-embedded float `input_embeds`, which is what you're doing in your example. The error arises because you have not passed `input_ids`.\r\n\r\nWhat you're trying to do is totally reasonable if you're already embedding your inputs in some other way! But if you want to make it work while respecting the \"first argument must be passed\" rule, you should instead pass a dict of inputs to the first argument of the Model, and this will be unpacked and passed to the corresponding arguments. All TF models in `transformers` will understand this input and unpack it correctly. So the line\r\n\r\n`embedding = encoder(inputs_embeds=inputs_embeds) `\r\n\r\nwould be replaced by\r\n\r\n`embedding = encoder({\"inputs_embeds\": inputs_embeds}) `\r\n\r\nRemember that you should only use `inputs_embeds` if your inputs are already embeddings with the right dimension, though! If you just want to pass integer tokens, which is much more common, use the first argument `input_ids`.",
"Thank you very much, it worked!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1 @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to create a BERT layer inside a Keras Model to fine-tune it.
Colab link of the code sample: [Colab link](https://colab.research.google.com/drive/1rirX6R_hG3VfkyxbiD7-x5AbAEUe7cNK?usp=sharing)
This is the code snippet:
```
import tensorflow as tf
from keras.models import Model
from keras.layers import Input, Dense, Dropout
from transformers import TFBertModel
inputs_embeds = Input(shape=(3,5,))
encoder = TFBertModel.from_pretrained("bert-base-uncased")
embedding = encoder(inputs_embeds=inputs_embeds)
x = Dense(32, activation="relu")(embedding)
x = Dropout(0.1)(x)
outputs = Dense(7, activation="linear")(x)
model = Model(inputs=inputs_embeds, outputs=outputs)
```
This is the error I get:
```
Some layers from the model checkpoint at bert-base-uncased were not used when initializing TFBertModel: ['mlm___cls', 'nsp___cls']
- This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFBertModel were initialized from the model checkpoint at bert-base-uncased.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-26-57294296cc4b>](https://localhost:8080/#) in <module>
7
8 encoder = TFBertModel.from_pretrained("bert-base-uncased")
----> 9 embedding = encoder(inputs_embeds=inputs_embeds)
10 x = Dense(32, activation="relu")(embedding)
11 x = Dropout(0.1)(x)
1 frames
[/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
[/usr/local/lib/python3.8/dist-packages/keras/utils/layer_utils.py](https://localhost:8080/#) in split_out_first_arg(self, args, kwargs)
807 inputs = kwargs.pop(self._arg_names[0])
808 else:
--> 809 raise ValueError(
810 "The first argument to `Layer.call` must always be passed."
811 )
ValueError: The first argument to `Layer.call` must always be passed.
```
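As the resolving comment above suggests, TF models in `transformers` accept a dict as the first positional argument and unpack it into the corresponding keyword arguments, which satisfies Keras's "first argument must always be passed" rule. A toy sketch of that unpacking idea (purely illustrative; the names and logic here are simplified stand-ins, not the actual `transformers` implementation):

```python
def layer_call(first_arg=None, input_ids=None, inputs_embeds=None):
    """Toy illustration of transformers-style input unpacking.

    If the first positional argument is a dict, its keys are treated as
    keyword arguments, which is why `encoder({"inputs_embeds": x})`
    works even though `input_ids` (the usual first argument) is absent.
    """
    if isinstance(first_arg, dict):
        input_ids = first_arg.get("input_ids", input_ids)
        inputs_embeds = first_arg.get("inputs_embeds", inputs_embeds)
    elif first_arg is not None:
        input_ids = first_arg
    if input_ids is None and inputs_embeds is None:
        raise ValueError("You have to specify either input_ids or inputs_embeds")
    # Report which input path would be taken, just for demonstration
    return "token_path" if input_ids is not None else "embedding_path"

print(layer_call({"inputs_embeds": [[0.1, 0.2]]}))  # embedding_path
```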
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22002/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22001
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22001/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22001/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22001/events
|
https://github.com/huggingface/transformers/issues/22001
| 1,613,717,911
|
I_kwDOCUB6oc5gL2GX
| 22,001
|
Is there a data leakage in causal masking?
|
{
"login": "Shaier",
"id": 43555163,
"node_id": "MDQ6VXNlcjQzNTU1MTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/43555163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shaier",
"html_url": "https://github.com/Shaier",
"followers_url": "https://api.github.com/users/Shaier/followers",
"following_url": "https://api.github.com/users/Shaier/following{/other_user}",
"gists_url": "https://api.github.com/users/Shaier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shaier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shaier/subscriptions",
"organizations_url": "https://api.github.com/users/Shaier/orgs",
"repos_url": "https://api.github.com/users/Shaier/repos",
"events_url": "https://api.github.com/users/Shaier/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shaier/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! \r\n1. The `attention_mask` can be assimilated to the `padding_mask` which tells the model where the pad tokens are. \r\n2. The `causal_mask` defined with :\r\n```python \r\n # if only \"normal\" attention layer implements causal mask\r\n query_length, key_length = query.size(-2), key.size(-2)\r\n causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]\r\n mask_value = torch.finfo(attn_weights.dtype).min\r\n # Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`.\r\n # Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device`\r\n mask_value = torch.full([], mask_value, dtype=attn_weights.dtype).to(attn_weights.device)\r\n attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)\r\n```\r\nline 195 as you mention is the actual causal mask that is used in the SelfAttention, right before the softmax. \r\nWhen we create this attention mask, we make sure that the values that we want to mask have a `mask_value = torch.finfo(attn_weights.dtype).min`, this is a very big *negative* number. What you are using is a causal mask with values 0 and 1, which will no affect the attention scores. \r\n\r\nIf you are using a pretrained model, its normal that this does not affect it. If you are not, it is also normal, but if you try to run inference, the model will perform worse than to a properly trained model.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,683
| 1,683
|
NONE
| null |
### System Info
I am following [this tutorial](https://huggingface.co/course/chapter7/6?fw=pt) on training a causal language model from scratch. I found the [source code](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/gpt2/modeling_gpt2.py#L666) for the model it uses (GPT-2). On line 195, "causal_mask" is defined. I tried commenting out this line and defining a new "causal_mask" with the same shape but with either all-True or all-False entries (instead of the triangular masking). However, the model still learned to generate natural language in both cases. This is unexpected: if all the inputs are masked all the time, the model should not learn to generate coherent text. Am I missing something, or is there data leakage?

I don't know if the following is relevant to the issue, but I also found that on line 822 we have "attention_mask", which, according to the comments, is supposed to mask out as well:
>
> # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
> # masked positions, this operation will create a tensor which is 0.0 for
> # positions we want to attend and the dtype's smallest value for masked positions.
> # Since we are adding it to the raw scores before the softmax, this is
> # effectively the same as removing these entirely.
But I find that if I print
`print('attention_mask', torch.min(attention_mask))`
the result is always -0.0. So I assume this is not actually masking anything for some reason?
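The quoted comment corresponds to a transform along the lines of `(1.0 - mask) * dtype_min` (a sketch of the idea, not the exact library code). A quick stdlib demonstration, using Python's largest negative float as a stand-in for the dtype's smallest value, shows why the minimum of the extended mask is `-0.0` when every position is attended to, i.e. when there is no padding:

```python
import sys

DTYPE_MIN = -sys.float_info.max  # stand-in for torch.finfo(dtype).min

def extend_attention_mask(mask):
    # 1.0 (attend) -> -0.0, 0.0 (masked) -> a huge negative number
    # that vanishes after the softmax
    return [(1.0 - m) * DTYPE_MIN for m in mask]

no_padding = extend_attention_mask([1.0, 1.0, 1.0])
with_padding = extend_attention_mask([1.0, 1.0, 0.0])

print(min(no_padding))    # -0.0: nothing is actually masked
print(min(with_padding))  # a very large negative number
```

So seeing `-0.0` simply means the batch contains no padding tokens; the mask is doing nothing only because there is nothing to mask.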
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Comment line 195 which defines "causal_mask".
2. Instead, define causal_mask as:
```
causal_mask = torch.rand(1,1,context_length,context_length)
causal_mask = causal_mask > 110.0 # all False
# or
causal_mask = causal_mask > 0.0 # all True
causal_mask = causal_mask.to(device)
```
3. Run script
Note that I'm using a small dataset with 10 short paragraphs.
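As the reply in the comments explains, masked positions must receive a huge *negative* value (`torch.finfo(dtype).min`) that is added to the attention scores before the softmax; a 0/1 mask does not remove anything. A stdlib sketch of what a correct additive causal mask looks like (illustrative only; the real code builds torch tensors):

```python
import sys

NEG_INF = -sys.float_info.max  # stand-in for torch.finfo(dtype).min

def additive_causal_mask(seq_len):
    # Position i may attend to positions j <= i; future positions get a
    # huge negative value so their softmax weight becomes ~0.
    return [[0.0 if j <= i else NEG_INF for j in range(seq_len)]
            for i in range(seq_len)]

mask = additive_causal_mask(3)
print(mask[0][0], mask[0][1] == NEG_INF)  # 0.0 True
```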
### Expected behavior
Masking all the inputs all the time should not allow the model to learn to generate natural language. Instead, the model should generate random text.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22001/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/22000
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/22000/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/22000/comments
|
https://api.github.com/repos/huggingface/transformers/issues/22000/events
|
https://github.com/huggingface/transformers/issues/22000
| 1,613,688,247
|
I_kwDOCUB6oc5gLu23
| 22,000
|
Expanding static features when embedding - bug
|
{
"login": "LtlSh",
"id": 109275417,
"node_id": "U_kgDOBoNpGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109275417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LtlSh",
"html_url": "https://github.com/LtlSh",
"followers_url": "https://api.github.com/users/LtlSh/followers",
"following_url": "https://api.github.com/users/LtlSh/following{/other_user}",
"gists_url": "https://api.github.com/users/LtlSh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LtlSh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LtlSh/subscriptions",
"organizations_url": "https://api.github.com/users/LtlSh/orgs",
"repos_url": "https://api.github.com/users/LtlSh/repos",
"events_url": "https://api.github.com/users/LtlSh/events{/privacy}",
"received_events_url": "https://api.github.com/users/LtlSh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @kashif",
"thanks @LtlSh for the report.\r\n\r\nSo `embedding_dimension` is the size of the resulting vector for the given categorical covariate. `cardinality` is the unique number of categories. So if you have only 15 different categories, perhaps it does not make sense to map the resulting vector to a 349 vector. Also the lags you can set it to `[1]` .\r\n\r\nFinally, note the categorical feature is `static` meaning it has no temporal component and thus would have shape for a single feature `[B, 1]`\r\n\r\nlet me know if that makes sense?\r\n\r\n",
"Thank you for your answer! @kashif \r\nWe tried your suggestion, but we are still getting the same error:\r\n\r\n`\r\nC:\\Users\\Cognition\\anaconda3\\envs\\ArielLital\\python.exe \"D:\\Final Project\\fMRI_Ariel_Lital\\train.py\" \r\nTraceback (most recent call last):\r\n File \"D:\\Final Project\\fMRI_Ariel_Lital\\train.py\", line 62, in <module>\r\n outputs = model(\r\n File \"C:\\Users\\Cognition\\anaconda3\\envs\\ArielLital\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"C:\\Users\\Cognition\\anaconda3\\envs\\ArielLital\\lib\\site-packages\\transformers\\models\\time_series_transformer\\modeling_time_series_transformer.py\", line 1626, in forward\r\n transformer_inputs, scale, static_feat = self.create_network_inputs(\r\n File \"C:\\Users\\Cognition\\anaconda3\\envs\\ArielLital\\lib\\site-packages\\transformers\\models\\time_series_transformer\\modeling_time_series_transformer.py\", line 1535, in create_network_inputs\r\n static_feat = torch.cat((embedded_cat, static_real_features, log_scale), dim=1)\r\nRuntimeError: Sizes of tensors must match except in dimension 1. Expected size 3 but got size 349 for tensor number 1 in the list.\r\n\r\nProcess finished with exit code 1\r\n\r\n`\r\n\r\nIn addition, we couldn't understand from your answer why there isn't a contradiction \r\n(I'm referring to this part of our previous comment: \r\n**embedded_cat** has 3 dimensions: (batch_size, rows, columns)\r\n**log_scale** has 3 dimensions: (batch_size, 1, columns)\r\nIn order to use 'torch.cat', '**static_real_features**' must have the shape: [batch_size, n, columns]\r\nThis means that after concatenation of these 3 variables, '**static_feat**' will have 3 dimensions.\r\nThen, when unsqueezing it will have 4 and then 'expand' won't work.)\r\n\r\nMany thanks!!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,684
| 1,684
|
NONE
| null |
### System Info
Python 3.9, Pycharm
### Who can help?
@sgugger @ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the training script we used:
```py
import torch
import pandas as pd
from torch.utils.data import Dataset, DataLoader
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerModel, TimeSeriesTransformerForPrediction
from Preprocess import create_dataset
class TimeSeriesDataset(Dataset):
    def __init__(self, subjects_dict):
        self.subjects_dict = subjects_dict
        self.subjects = list(subjects_dict.keys())

    def __len__(self):
        return len(self.subjects)

    def __getitem__(self, idx):
        subject = self.subjects[idx]
        subject_dict = self.subjects_dict[subject]
        # df_numpy = df.to_numpy()
        # inputs = torch.tensor(df[['past_values', 'future_values']].values, dtype=torch.float32)
        # inputs = torch.tensor()
        return subject_dict
# Instantiating the dataset
directory = r'D:\Final Project\TASK_PCC_PFC\TEMP'  # raw string avoids invalid escape sequences
subjects_dict = create_dataset(directory)
dataset = TimeSeriesDataset(subjects_dict)
# Creating the dataloader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
# Instantiating the TimeSeriesTransformerForPrediction
# model = TimeSeriesTransformerForPrediction
embedding_dimension = [349]
cardinality = [15]#[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] #[15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15]#
# Initializing a default Time Series Transformer configuration
configuration = TimeSeriesTransformerConfig(prediction_length = 327, lags_sequence = [0, 0, 0], embedding_dimension = embedding_dimension,
num_static_categorical_features = 1, encoder_attention_heads = 2, decoder_attention_heads = 2, cardinality =cardinality )
# Randomly initializing a model (with random weights) from the configuration
model = TimeSeriesTransformerModel(configuration)
# Accessing the model configuration
configuration = model.config
# We don't know if passing the data as a DataFrame instead of a tensor would work.
# Currently model.train() is throwing an error; maybe we need to use a GPU? TODO
# Setting the model to training mode
model.train()
# Defining the loss function and optimizer
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Training loop
for epoch in range(100):
    for batch in dataloader:
        # Forward pass
        outputs = model(
            past_values=batch["past_values"],
            past_time_features=batch["past_time_features"],
            past_observed_mask=None,
            static_categorical_features=batch['static_categorical_features'],
            static_real_features=batch['static_real_features'],
            future_values=batch["future_values"],
            future_time_features=batch["future_time_features"],
        )
        loss = loss_fn(outputs, batch)

        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Printing the training loss
    if (epoch + 1) % 10 == 0:
        print(f"Epoch [{epoch + 1}/100], Loss: {loss.item()}")
```
Dataset:
HPC voxel dataset
### Expected behavior
Hi,
We are trying to train TimeSeriesTransformer for forecasting using fMRI voxel data. The shape of the data is: (batch size, rows of datapoints, columns of features)
We encountered an issue in the embedding phase.
This is from the source code:
```
# embeddings
embedded_cat = self.embedder(static_categorical_features)
# static features
log_scale = scale.log() if self.config.input_size == 1 else scale.squeeze(1).log()
static_feat = torch.cat((embedded_cat, static_real_features, log_scale), dim=1)
expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
```
This is the error:
```
Traceback (most recent call last):
File "D:\Final Project\fMRI_Ariel_Lital\train.py", line 61, in <module>
outputs = model(
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1626, in forward
transformer_inputs, scale, static_feat = self.create_network_inputs(
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1536, in create_network_inputs
expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
RuntimeError: expand(torch.DoubleTensor{[32, 1, 329, 349]}, size=[-1, 654, -1]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)
Process finished with exit code 1
```
To our understanding, there is a contradiction in this code.
**embedded_cat** has 3 dimensions: (batch_size, rows, columns)
**log_scale** has 3 dimensions: (batch_size, 1, columns)
In order to use 'torch.cat', '**static_real_features**' must have the shape: [batch_size, n, columns]
This means that after concatenation of these 3 variables, '**static_feat**' will have 3 dimensions.
Then, after **unsqueeze**, it will have 4 dimensions, and '**expand**' won't work.
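The `torch.cat` requirement invoked here (all dimensions must match except the concatenation dimension) can be checked ahead of time with plain shape arithmetic. This hypothetical helper is not part of `transformers`; it only mimics the shape rule, which can make mismatches like the one above easier to spot before calling the model:

```python
def cat_shape(shapes, dim):
    """Return the shape torch.cat would produce, or raise like torch does."""
    result = list(shapes[0])
    for shape in shapes[1:]:
        if len(shape) != len(result):
            raise ValueError("all tensors must have the same number of dimensions")
        for d, (a, b) in enumerate(zip(result, shape)):
            if d != dim and a != b:
                raise ValueError(
                    f"Sizes of tensors must match except in dimension {dim}: "
                    f"expected {a} but got {b}"
                )
        result[dim] += shape[dim]
    return tuple(result)

# 2-D static features with matching batch size concatenate cleanly:
print(cat_shape([(32, 3), (32, 1), (32, 1)], dim=1))  # (32, 5)
```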
How can we solve this?
Many thanks!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/22000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/22000/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21999
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21999/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21999/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21999/events
|
https://github.com/huggingface/transformers/pull/21999
| 1,613,649,188
|
PR_kwDOCUB6oc5LfIY_
| 21,999
|
Move `is_pipeline_test_to_skip` to specific model test classes
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> I don't get the use case for `is_pipeline_test_to_skip`, we have the `pipeline_model_mapping`, why not just remove the model from there?\r\n\r\nThe reasons are:\r\n\r\n- the mapping `pipeline_model_mapping` is (IMO) what should be tested in **theory**, i.e. what model classes of a specific model type are for some pipeline tasks\r\n - **it's not the place to control what to skip or not**\r\n - if we do skip tests by using this mapping, we lose the important information of `why this test is skipped`. We will only see some model class is not in the mapping, but not `skip this test as ...[whatever more precise reason]`\r\n - define this mapping as what should be tested in theory allows:\r\n - to generate the mapping in a systematic way **(less error prone)**\r\n - to define a repo check (so **less chance to miss a pipeline test**)\r\n- But **most importantly**, the skip conditions sometimes go to the level of what tokenizer/process classes are used. It's not just about the model class\r\n - For example, a model class might be tested with a faster tokenizer, but not a slow tokenizer (due to some issue) ",
"> Ok then, let's validate with @LysandreJik as well though!\r\n\r\nI have talked to him offline when you were off. But yeah, let's make it official :-)",
"I think this would add another layer of unneeded complexity; do you think this will touch a lot of the models? In any case if you're both ok with this, I'm fine with merging it, but let's aim for as simple a change as possible that makes contributing a new pipeline/model as simple as possible. ",
"@LysandreJik \r\n\r\nThis would almost **never** affect the **model** contribution experience:\r\n- when a model is added, the tiny model creation **would not run by the contributor** (at least for this moment, as it's way to complex)\r\n- no tiny model checkpoint on the Hub for the newly added model -> no pipeline test will be (actually) run for that model\r\n- no pipeline test being run -> no need to skip any test by adding/changing `is_pipeline_test_to_skip`.\r\n\r\n==> It's us (if not me) to create and upload the tiny models. Once done, if there is anything not working, **it's us to skip them**.\r\n(The complexity is absorbed by the hidden process of tiny model creation that is not run by the contributor)\r\n\r\nRegarding **adding a new pipeline**: if an existing tiny model work (i.e. it could run other existing pipeline tasks), the chance that it works for the new task (if it is suitable for that task) is high. **So the chance to changing existing `is_pipeline_test_to_skip` is low**.",
"Note that since it's a test modeling file and contributors add new models by copying those, those `is_pipeline_test_to_skip` will be copied over new models automatically. So it will be part of the contributor experience (though probably unnoticed) and we will get lots of those without really noticing (since the PRs that add new models are very big). This can be alleviated in some way if the add-new-model-like command takes special care to remove the `is_pipeline_test_to_skip` from the new test file, but this is again more complexity.",
"Thank you @sgugger , very nice point! Let me play with `add-new-model-like command` and see how the current `pipeline_model_mapping` and `is_pipeline_test_to_skip` will be treated by this command.\r\n",
"I tried it, both `pipeline_model_mapping` and `is_pipeline_test_to_skip` will be copied.\r\n\r\nIf `pipeline_model_mapping` is used to also control which tests should be skip or not, it's also dangerous that this attribute being copied (especially automatically) to another model test files: as we are very likely to miss more and more tests that should be tested (a test that fails for an existing model have the chance to work on a new similar model - and should be tested at once to determine if we need to skip it).\r\n\r\nAlso as mentioned earlier:\r\n - manually edit `pipeline_model_mapping` have more disadvantages than good.\r\n - having `pipeline_model_mapping` edited by a contributor won't actually make the pipeline tests to run - we still need to create and upload the tiny models to `hf-internal-testing` \r\n\r\n**I am going to make changes to `add_new_model_like` to not copy these 2 attributes**. It makes the script a bit more complex, but it won't bother the users - as long as we all agree and know that these 2 attributes for pipeline testing are not for contributors to add/change (at least not before we can have a much easier and safer process to create/upload tiny models). \r\n\r\nIs this OK for you @sgugger ?",
"That plan works for me!",
"As discussed offline, I changed the approach to use string only.\r\n\r\nWould still like @sgugger to elaborate a bit more:\r\n\r\n> Using ast would be a first for the repo and would make contributing harder \r\n\r\nDo you mean for the contributors (either external or internal?) who want (or might need) to modify `src/transformers/commands/add_new_model_like.py`? If so, I agree better not to use `ast` here. If you mean the usage only, I don't think using `ast` is a real problem - if they don't need to look the internals.\r\n\r\nI would also like to mention, for automatically adding `pipeline_model_mapping` to a test file (from the auto mapping, prepared in `XXXPipelineTest` classes, we will need more access to the test files. And string approach would make it more complex (well `ast` is complex, but at least it also avoid a lot of things). Furthermore, if we want to add a new repo check on `pipeline_model_mapping`, the same consideration applies.\r\n\r\nSo let's have a talk later - at least for the above 2 scripts that I might have to implement.\r\n\r\n(well, after a 2nd thought, I understand using `ast` might bring new burden to the reviewers.)",
"@sgugger Need your feedback :-) for\r\n\r\nhttps://github.com/huggingface/transformers/pull/21999#discussion_r1132834136\r\n"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
As promised!
So far, it's incomplete - just so you can check that this approach is OK. If so, I will move all of them around.
It's normal to have some test failures at this moment.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21999/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21999",
"html_url": "https://github.com/huggingface/transformers/pull/21999",
"diff_url": "https://github.com/huggingface/transformers/pull/21999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21999.patch",
"merged_at": 1678784583000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21998
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21998/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21998/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21998/events
|
https://github.com/huggingface/transformers/pull/21998
| 1,613,577,396
|
PR_kwDOCUB6oc5Le41J
| 21,998
|
audio_utils improvements
|
{
"login": "hollance",
"id": 346853,
"node_id": "MDQ6VXNlcjM0Njg1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hollance",
"html_url": "https://github.com/hollance",
"followers_url": "https://api.github.com/users/hollance/followers",
"following_url": "https://api.github.com/users/hollance/following{/other_user}",
"gists_url": "https://api.github.com/users/hollance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hollance/subscriptions",
"organizations_url": "https://api.github.com/users/hollance/orgs",
"repos_url": "https://api.github.com/users/hollance/repos",
"events_url": "https://api.github.com/users/hollance/events{/privacy}",
"received_events_url": "https://api.github.com/users/hollance/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I cleaned up `hertz_to_mel` and `mel_to_hertz` a bit:\r\n\r\n- more consistent doc comments\r\n- both support single float inputs as well as numpy arrays\r\n- simplified the formulas so it's not literally the same as the librosa code but also doesn't do pointless calculations\r\n\r\nSince I think this implementation was based on librosa, we should also give them credit.",
"I rewrote `power_to_db` and added `amplitude_to_db`. They still work like the librosa versions but with argument names that make more sense to me.",
"Changed `get_mel_filter_banks` into `mel_filter_bank`. Mostly renamed arguments and variables and cleaned up the doc comments, so that the naming is more in line with the rest of Transformers, e.g. `num_frequency_bins` instead of `nb_frequency_bins`.\r\n",
"Pushed significant changes to the `stft` code.\r\n\r\n- Removed `fram_wave`; this is really an implementation detail that should happen inside the STFT.\r\n\r\n- The new `stft` gives the same results as librosa and torchaudio for the same options. It's 25% faster than the previous implementation, mostly due to using `rfft` instead of `fft` (since the input is always real-only, not complex).\r\n\r\n- librosa is still faster since they use a bunch of tricks under the hood to avoid memory copies etc; we can slowly work towards matching this speed (not super important to do this immediately since the new `stft` is already faster than what we had before)\r\n\r\n- No batching yet.\r\n\r\nI will be replacing the other hand-rolled STFTs with this soon (also in this PR). \r\n\r\nNone of the changes I made are set in stone — feel free to discuss things like the argument names, the shapes of the returned tensors, and so on.\r\n",
"Replaced the hand-rolled STFT in the different models with the one from `audio_utils`:\r\n\r\n- CLAP\r\n- M-CTC-T\r\n- SpeechT5\r\n- TVLT\r\n- Whisper\r\n\r\nDid not do `audio_spectrogram_transformer` and `speech_to_text`. These use `ta_kaldi.fbank`, which is simple enough and faster than `audio_utils`. If we want to get completely rid of torchaudio we could also replace these.\r\n\r\n",
"@sanchit-gandhi @ArthurZucker I think this is ready for review now. Feel free to look at this with a critical eye! \r\n\r\nThe STFT code is currently written for ease of understanding and flexibility, not speed, although it does outperform the previous methods we were using.\r\n",
"@sanchit-gandhi @ArthurZucker Are you OK with the PR in its current state? Then I can ask a core maintainer for a final review.",
"Took a second look through and the changes LGTM @hollance!",
"If everyone's happy with it, feel free to merge (I don't have rights).\r\n"
] | 1,678
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
# What does this PR do?
Recently the `audio_utils.py` file was added to Transformers to provide shared functions for audio processing such as STFT. This PR aims to clean up the code and make the API more robust.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21998/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21998",
"html_url": "https://github.com/huggingface/transformers/pull/21998",
"diff_url": "https://github.com/huggingface/transformers/pull/21998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21998.patch",
"merged_at": 1683637818000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21997
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21997/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21997/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21997/events
|
https://github.com/huggingface/transformers/pull/21997
| 1,613,523,654
|
PR_kwDOCUB6oc5LetGg
| 21,997
|
Stop requiring Torch for our TF examples!
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
MEMBER
| null |
This PR overrides a property in `TFTrainingArguments` to ensure that our TF examples don't accidentally depend on `torch`
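The mechanism here is ordinary Python property overriding in a subclass. A minimal, self-contained sketch with hypothetical class and property names (not the actual `TFTrainingArguments` code):

```python
class TrainingArguments:
    @property
    def backend(self):
        # Hypothetical torch-only code path in the parent class.
        import torch
        return f"torch {torch.__version__}"

class TFTrainingArguments(TrainingArguments):
    @property
    def backend(self):
        # Overriding the parent property means TF code never reaches the
        # torch import above, so torch need not be installed.
        return "tensorflow"

args = TFTrainingArguments()
print(args.backend)  # -> "tensorflow", without ever importing torch
```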
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21997/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21997",
"html_url": "https://github.com/huggingface/transformers/pull/21997",
"diff_url": "https://github.com/huggingface/transformers/pull/21997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21997.patch",
"merged_at": 1678204451000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21996
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21996/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21996/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21996/events
|
https://github.com/huggingface/transformers/pull/21996
| 1,613,493,608
|
PR_kwDOCUB6oc5LemZo
| 21,996
|
[Whisper] Remove embed_tokens from encoder docstring
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Rebased and pushed two dummy commits to re-trigger the CI after GH SSO (no new changes to the PR in the last two commits https://github.com/huggingface/transformers/pull/21996/commits/ef692e28e650d305ba2947c21b572b447d0eb01f and https://github.com/huggingface/transformers/pull/21996/commits/ecfddec7b2ff4958e3fe3259d09ed5e681cf8fe1)",
"@sanchit-gandhi \r\n\r\nFor trigger CI, you can do sth like `git commit --allow-empty -m \"Empty commit to trigger CI\"`\r\n(in the future when you need it)",
"Thanks for the tip @ydshieh! Looks the CI is red on main due to a `500 Server Error` with the HF Hub, see https://github.com/huggingface/transformers/actions/runs/4392426382/jobs/7692183097.",
"I re-run that CI job and it is green now :-)\r\n\r\nThe failed test in the job `test_tf` is irrelevant to this PR I believe.",
"Amazing, thanks @ydshieh! 🙌"
] | 1,678
| 1,687
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
`embed_tokens` is not an arg for the `WhisperEncoder`. It looks like it was copied from BART (where we do use it) and left in by mistake!
https://github.com/huggingface/transformers/blob/9402788b34fbc6581ae9d7d9d68612a96d9aa111/src/transformers/models/bart/modeling_bart.py#L708
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21996/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21996",
"html_url": "https://github.com/huggingface/transformers/pull/21996",
"diff_url": "https://github.com/huggingface/transformers/pull/21996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21996.patch",
"merged_at": 1678539816000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21995
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21995/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21995/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21995/events
|
https://github.com/huggingface/transformers/issues/21995
| 1,613,361,099
|
I_kwDOCUB6oc5gKe_L
| 21,995
|
TypeError: 'NoneType' object is not subscriptable in modeling_utils.py
|
{
"login": "MartinPicc",
"id": 35298861,
"node_id": "MDQ6VXNlcjM1Mjk4ODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/35298861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MartinPicc",
"html_url": "https://github.com/MartinPicc",
"followers_url": "https://api.github.com/users/MartinPicc/followers",
"following_url": "https://api.github.com/users/MartinPicc/following{/other_user}",
"gists_url": "https://api.github.com/users/MartinPicc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MartinPicc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MartinPicc/subscriptions",
"organizations_url": "https://api.github.com/users/MartinPicc/orgs",
"repos_url": "https://api.github.com/users/MartinPicc/repos",
"events_url": "https://api.github.com/users/MartinPicc/events{/privacy}",
"received_events_url": "https://api.github.com/users/MartinPicc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I think this has been fixed by #21542. Could you try on the main branch of Transformers and see if you still have the bug?",
"Oh yes perfect! I'll wait for the next release to update then.\r\nthank you",
"Next release should be this week or beginning of next, as an FYI :-) "
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
Using free tier Google Colab, it gives the following output of `transformers-cli env`:
2023-03-07 12:26:45.314129: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-03-07 12:26:45.314255: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-03-07 12:26:45.314280: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-03-07 12:26:49.528826: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Using the [Detoxify](https://github.com/unitaryai/detoxify) library raises an error in `transformers` code with versions >= 4.25.1, but works fine with version 4.24 and below.
The error is: `TypeError: 'NoneType' object is not subscriptable` in [file `modeling_utils` at line 2718](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/modeling_utils.py#L2718). This line (and its surrounding block of code) was added in [PR#20321](https://github.com/huggingface/transformers/pull/20321), merged in version 4.25.1.
https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/modeling_utils.py#L2718-L2736
Looking at the code, it seems to me that the variable `resolved_archive_file` can take the value `None`, hence raising this error.
The full error stacktrace is:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-72e665f021e6> in <module>
----> 1 toxicity_model = Detoxify('multilingual', device='cuda')
2 # results = toxicity_model.predict()
4 frames
/usr/local/lib/python3.8/dist-packages/detoxify/detoxify.py in __init__(self, model_type, checkpoint, device, huggingface_config_path)
101 def __init__(self, model_type="original", checkpoint=PRETRAINED_MODEL, device="cpu", huggingface_config_path=None):
102 super().__init__()
--> 103 self.model, self.tokenizer, self.class_names = load_checkpoint(
104 model_type=model_type,
105 checkpoint=checkpoint,
/usr/local/lib/python3.8/dist-packages/detoxify/detoxify.py in load_checkpoint(model_type, checkpoint, device, huggingface_config_path)
54 }
55 class_names = [change_names.get(cl, cl) for cl in class_names]
---> 56 model, tokenizer = get_model_and_tokenizer(
57 **loaded["config"]["arch"]["args"],
58 state_dict=loaded["state_dict"],
/usr/local/lib/python3.8/dist-packages/detoxify/detoxify.py in get_model_and_tokenizer(model_type, model_name, tokenizer_name, num_classes, state_dict, huggingface_config_path)
18 ):
19 model_class = getattr(transformers, model_name)
---> 20 model = model_class.from_pretrained(
21 pretrained_model_name_or_path=None,
22 config=huggingface_config_path or model_type,
/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2476 offload_index,
2477 error_msgs,
-> 2478 ) = cls._load_pretrained_model(
2479 model,
2480 state_dict,
/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py in _load_pretrained_model(cls, model, state_dict, loaded_keys, resolved_archive_file, pretrained_model_name_or_path, ignore_mismatched_sizes, sharded_metadata, _fast_init, low_cpu_mem_usage, device_map, offload_folder, offload_state_dict, dtype, load_in_8bit, keep_in_fp32_modules)
2716 return mismatched_keys
2717
-> 2718 folder = os.path.sep.join(resolved_archive_file[0].split(os.path.sep)[:-1])
2719 if device_map is not None and is_safetensors:
2720 param_device_map = expand_device_map(device_map, original_loaded_keys)
TypeError: 'NoneType' object is not subscriptable
```
PS: [link to the related issue](https://github.com/unitaryai/detoxify/issues/75) in the library Detoxify
### Expected behavior
Add a check on `resolved_archive_file` to handle the case where its value is `None`.
However, if its value should never be `None`, then add a validity check earlier in the code, with a more explicit error message.
Let me know if I can help on this.
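A guard of the kind suggested could look like the following. This is a hypothetical sketch of the fix (function name and structure are mine, not the actual `modeling_utils` code):

```python
import os

def archive_folder(resolved_archive_file):
    """Return the directory containing the first checkpoint shard, or None when
    no archive file was resolved (e.g. from_pretrained(None, ..., state_dict=...)
    as Detoxify does)."""
    if not resolved_archive_file:
        return None
    # Same expression as the failing line, now reached only for non-empty input.
    return os.path.sep.join(resolved_archive_file[0].split(os.path.sep)[:-1])

print(archive_folder([os.path.join("models", "bert", "pytorch_model.bin")]))
print(archive_folder(None))  # None instead of a TypeError
```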
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21995/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21994
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21994/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21994/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21994/events
|
https://github.com/huggingface/transformers/issues/21994
| 1,613,331,129
|
I_kwDOCUB6oc5gKXq5
| 21,994
|
Chinese test data were transcribed as English
|
{
"login": "v-yunbin",
"id": 38179632,
"node_id": "MDQ6VXNlcjM4MTc5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-yunbin",
"html_url": "https://github.com/v-yunbin",
"followers_url": "https://api.github.com/users/v-yunbin/followers",
"following_url": "https://api.github.com/users/v-yunbin/following{/other_user}",
"gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions",
"organizations_url": "https://api.github.com/users/v-yunbin/orgs",
"repos_url": "https://api.github.com/users/v-yunbin/repos",
"events_url": "https://api.github.com/users/v-yunbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/v-yunbin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker and @sanchit-gandhi though this question would be more appropriate for the [forums](https://discuss.huggingface.co/).",
"Hey, this is related to the update of the `generate()` function. The issue is that you are not modifying the `model.generation_config`. If you want to set the language in a proper manner, the following will work:\r\n```python \r\ntranscriber = pipeline(task=\"automatic-speech-recognition\", model=\"openai/whisper-large\", device=1)\r\ntranscriber.model.generation_config.forced_decoder_ids = transcriber.processor.get_decoder_prompt_ids(language=\"zh\", task=\"transcribe\")\r\nresult = transcriber(audio_bytes, chunk_length_s=30)\r\nprint(result)\r\n```\r\nWe updated the generation config which by defaults should automatically detect the language, but is set to `translate` and not transcribe. \r\ncc @sanchit-gandhi for visibility, this was introduced by #20388",
"Resolved in https://github.com/huggingface/transformers/pull/21965 - Whisper now respects the `config.forced_decoder_ids` if the language is not set in the args / `generation_config`\r\n\r\nThe most up-to-date way of passing the language is to use the args if possible:\r\n```python\r\nresult = transcriber(audio_bytes, chunk_length_s=30, generate_kwargs={\"language\":\"zh\"})\r\n```",
"> Resolve in #21965 - Whisper now respects the `config.forced_decoder_ids` if the language is not set in the args / `generation_config`\r\n> \r\n> The most up-to-date way of passing the language is to use the args if possible:\r\n> \r\n> ```python\r\n> result = transcriber(audio_bytes, chunk_length_s=30, generate_kwargs={\"language\":\"zh\"})\r\n> ```\r\n\r\n@sanchit-gandhi upgrade transformers to version 4.27.1 and try it again, but get follow error:\r\n ```\r\n f\"Unsupported language: {self.language}. Language should be one of:\"\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1177, in __getattr__\r\n raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\nAttributeError: 'WhisperForConditionalGeneration' object has no attribute 'language'\r\n```",
"@sgugger another thing is that I using the pipeline get a Translation result not a *Transcription result.\r\nhow to specify Transcription tasks and language with the pipline.\r\n```\r\nfrom transformers import pipeline\r\n\r\ntranscriber = pipeline(task=\"automatic-speech-recognition\", model=\"openai/whisper-small\")\r\ntranscriber(\"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac\")\r\n```",
"I am sorry but I can't reproduce your errors. The following [notebook](https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=vqXoVLesTUE6) has examples of setting the task and the language on whisper small and both work. Did you run `pip install --upgrade transformers`? \r\nHere is my output (so expected behaviour)\r\n<img width=\"1364\" alt=\"image\" src=\"https://user-images.githubusercontent.com/48595927/226351228-1d3f3b54-98b7-4688-a57d-6e661e8425b3.png\">\r\n",
"> I am sorry but I can't reproduce your errors. The following [notebook](https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=vqXoVLesTUE6) has examples of setting the task and the language on whisper small and both work. Did you run `pip install --upgrade transformers`? Here is my output (so expected behaviour) <img alt=\"image\" width=\"1364\" src=\"https://user-images.githubusercontent.com/48595927/226351228-1d3f3b54-98b7-4688-a57d-6e661e8425b3.png\">\r\n\r\n@ArthurZucker yes, I have run pip install --upgrade transformers and i follow https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=vqXoVLesTUE6 , I still get a error:\r\n```\r\n self._validate_model_kwargs(model_kwargs.copy())\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/generation/utils.py\", line 1090, in _validate_model_kwargs\r\n raise ValueError(\r\nValueError: The following `model_kwargs` are not used by the model: ['task', 'language'] (note: typos in the generate arguments will also show up in this list)\r\n```\r\n\r\n",
"@ArthurZucker when transformers 4.26.1 is the latest version, I try it, it failed. Now i update it to 4.27.2, it works.",
"@ArthurZucker how I to modify parameter \"condition_on_previous_text\"? This parameter is provided by whisper and its important for me .\r\n```\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1606, in generate\r\n return super().generate(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/generation/utils.py\", line 1213, in generate\r\n self._validate_model_kwargs(model_kwargs.copy())\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/generation/utils.py\", line 1105, in _validate_model_kwargs\r\n raise ValueError(\r\nValueError: The following `model_kwargs` are not used by the model: ['condition_on_previous_text'] (note: typos in the generate arguments will also show up in this list)\r\n```",
"This is not yet available in the HuggingFace implementation. The PR is currently ongoing, see here #21491 ",
"@ArthurZucker actually, I still have some problem as above,(transformers 4.27.2) when I use transformer pipeline and whisper pipeline recognize a wave file ,all is normal, I use micphone to record Chinese wav bytes and send bytes to transformers pipeline sever , it is not normal I get above abnormal result (not Chinese recognition result, for example \"You.\" and other english words), but I use micphone to record Chinese wav bytes and send bytes to whisper pipeline sever , it is normal, so I'm confused.",
"Can you show me exactly how you are `sending to transformers pipeline server` so that I can check how you are calling the model? ",
"> Can you show me exactly how you are `sending to transformers pipeline server` so that I can check how you are calling the model?\r\n\r\n@ArthurZucker my transformers pipeline server codes is as follows,and the received bytes data is from a web client reording voice throught Browser microphone :\r\n```\r\ndef forward(model, audio_bytes):\r\n #print(len(audio_bytes))\r\n text = model(audio_bytes, chunk_length_s=30, generate_kwargs = {\"task\":\"transcribe\", \"language\":\"<|zh|>\"})['text']\r\n return text\r\n\r\ndef recognize(websocket, path):\r\n global model\r\n global args\r\n global loop\r\n global pool\r\n global vad\r\n #global seg_model\r\n rec = None\r\n phrase_list = None\r\n sample_rate = args.sample_rate\r\n client_ip = websocket.remote_address\r\n last_message = \"\"\r\n audio_bytes = b''\r\n bytesdata = b''\r\n wavdir = \"./audiodata\"\r\n uid = str(uuid.uuid1())\r\n filename = str(client_ip[0])+\"_\"+uid\r\n filepath = os.path.join(wavdir, filename+\".wav\")\r\n wfile = open(filepath,\"wb+\")\r\n phrase_timeout = 4\r\n max_timeout = 20\r\n audio_format = \"wav\"\r\n channel = 1\r\n samplewidth = 16\r\n\r\n logging.info('Connection from %s', websocket.remote_address);\r\n while True:\r\n message = await websocket.recv()\r\n if isinstance(message, str):\r\n if message == '{\"eof\":1}':\r\n if len(audio_bytes):\r\n if audio_format != \"wav\":\r\n audio_bytes = bytes2wav(audio_bytes, audio_format, sample_rate, channel, samplewidth)\r\n else:\r\n pass\r\n response = await loop.run_in_executor(pool, forward, model, audio_bytes)\r\n response = format_result(response)\r\n print(\"last\"+response)\r\n await websocket.send(response)\r\n else:\r\n await websocket.send(\"\")\r\n break\r\n elif \"samplerate\" in message and \"format\" in message:\r\n try:\r\n json_str = json.loads(message)\r\n sample_rate = json_str[\"samplerate\"]\r\n audio_format = json_str[\"format\"]\r\n samplewidth = json_str[\"samplewidth\"]\r\n await websocket.send(\"\")\r\n except:\r\n await websocket.send(\"wrong format\")\r\n else:\r\n await websocket.send(\"\")\r\n else:\r\n audio_bytes += message\r\n #audiotime = audio_length(audio_bytes, audio_format, sample_rate, channel, samplewidth)\r\n audiotime = len(audio_bytes) / 2 / int(sample_rate)\r\n #print(audiotime)\r\n if audiotime > max_timeout :\r\n if audio_format != \"wav\":\r\n audio_bytes = bytes2wav(audio_bytes, audio_format, sample_rate, channel, samplewidth)\r\n else:\r\n pass\r\n response = await loop.run_in_executor(pool, forward, model, audio_bytes)\r\n response = format_result(response)\r\n print(\"first\"+response)\r\n audio_bytes = b''\r\n await websocket.send(response)\r\n else:\r\n await websocket.send(\"\")\r\ndef start():\r\n\r\n global model\r\n global args\r\n global loop\r\n global pool\r\n global vad\r\n logging.basicConfig(level=logging.INFO)\r\n\r\n args = type('', (), {})()\r\n\r\n args.interface = os.environ.get('SERVER_INTERFACE', '0.0.0.0')\r\n args.port = int(os.environ.get('SERVER_PORT', 40000))\r\n args.model_path = os.environ.get('MODEL_PATH', 'model')\r\n #args.seg_model_path = os.environ.get('VOSK_MODEL_PATH', 'seg_model')\r\n args.sample_rate = float(os.environ.get('SAMPLE_RATE', 16000))\r\n\r\n if len(sys.argv) > 1:\r\n args.model_path = sys.argv[1]\r\n #args.seg_model_path = sys.argv[2]\r\n model = whisper.load_model(args.model_path,device=\"cpu\")\r\n```\r\n",
"I have confirmed that ffmpeg_read function(read audio bytes)has some problem and I replace it with whiper provided function, all is normal(both wafile and mic stream)",
"Okay sorry if I don't understand completely, I don't see the `forward2` being called or passed anywhere right? ",
"> Okay sorry if I don't understand completely, I don't see the `forward2` being called or passed anywhere right?\r\n\r\nupdate it, forward2 should be forward.",
"Ok, 2 things we need to check:\r\n1. When calling the pipeline, could you check that `pipeline.model.generation_config.forced_decoder_ids` is properly updates with the `language` and the `task`? \r\n2. Can you also print the `language` that should be outputed by the generation process (`decode_asr` called in the pipeline for whisper should output the language that is detected by the model, which could help us understand if the decoding process went well)",
"> Ok, 2 things we need to check:\r\n> \r\n> 1. When calling the pipeline, could you check that `pipeline.model.generation_config.forced_decoder_ids` is properly updates with the `language` and the `task`?\r\n\r\nyes, set it as follows:\r\nmodel = pipeline(task=\"automatic-speech-recognition\", model=\"openai/whisper-medium\",device=\"cpu\")\r\nmodel.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language=\"zh\", task=\"transcribe\")\r\n\r\n\r\n> 2. Can you also print the `language` that should be outputed by the generation process (`decode_asr` called in the pipeline for whisper should output the language that is detected by the model, which could help us understand if the decoding process went well)\r\n\r\nsorry actually when I use transfomer pipline, met the problem and when I use whisper official pipeline , all is ok。\r\n",
"Okay! After re-reading your issue, I think you said \r\n> I have confirmed that ffmpeg_read function(read audio bytes)has some problem and I replace it with whiper provided function, all is normal(both wafile and mic stream)\r\n\r\nSo this means we should probably update our `ffmpeg_read` function. Is that right? ",
"> Okay! After re-reading your issue, I think you said\r\n> \r\n> > I have confirmed that ffmpeg_read function(read audio bytes)has some problem and I replace it with whiper provided function, all is normal(both wafile and mic stream)\r\n> \r\n> So this means we should probably update our `ffmpeg_read` function. Is that right?\r\n\r\nyes, transformer's ffmpeg_read leads to my problem.\r\n",
"now we can use the parameters of \"fp16\" and \"condition_on_previous_text\"?",
"`fp16`, `load_in_8_bits` and the jax models if want faster inference yes. Conditioning on previous text, the update on that feature is here #21491 !",
"how to use \"fp16, load_in_8_bits\", has sample codes?",
"For load in 8 bits you need `accelerate` and `bits-and-bytes`:\r\n```python \r\nfrom transformers import WhisperForConditionalGeneration\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\", load_in_8bit=True)\r\n```\r\nfor `fp16`:\r\n```python \r\nimport torch\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\", torch_dtype = torch.float16)\r\n```",
"I try it with `model = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-small\", torch_dtype = torch.float16)`\r\nget errors: `RuntimeError: Input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor`\r\n",
"The input should also be halved (the audio)",
"Note that `load_in_8bit` will give you a nice memory saving (~30%) but will run slower than fp16. This is likely due to the bitsandbytes 8bit matmul algorithm which isn't super optimised for \"small\" tensors, but rather is designed more for super large LMs."
] | 1,678
| 1,681
| 1,679
|
NONE
| null |
When adding the following code in an ASR server, I send Chinese audio data but get an English result. I don't know how to set the language, and trying to use "forced_decoder_ids" to set it failed.
```
transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-large", device=1)
#transcriber.model.config.forced_decoder_ids = (transcriber.tokenizer.get_decoder_prompt_ids(language="zh", task="transcribe"))
transcriber.model.config.forced_decoder_ids = (transcriber.tokenizer.get_decoder_prompt_ids(language="zh", task="transcribe"))
result = transcriber(audio_bytes, chunk_length_s=30)
print(result)
```
my transformers version is 4.26.1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21994/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21993
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21993/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21993/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21993/events
|
https://github.com/huggingface/transformers/pull/21993
| 1,613,318,362
|
PR_kwDOCUB6oc5Ld_kX
| 21,993
|
add 1 to cur_len to make up the new beam length
|
{
"login": "jimmieliu",
"id": 10285837,
"node_id": "MDQ6VXNlcjEwMjg1ODM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10285837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimmieliu",
"html_url": "https://github.com/jimmieliu",
"followers_url": "https://api.github.com/users/jimmieliu/followers",
"following_url": "https://api.github.com/users/jimmieliu/following{/other_user}",
"gists_url": "https://api.github.com/users/jimmieliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimmieliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimmieliu/subscriptions",
"organizations_url": "https://api.github.com/users/jimmieliu/orgs",
"repos_url": "https://api.github.com/users/jimmieliu/repos",
"events_url": "https://api.github.com/users/jimmieliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimmieliu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @gante",
"@gante Thx for the great advice. You are suggesting a better coding style. "
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
cur_len is 1 token shorter than the length of the sequence whose best_sum_logprobs is the numerator.
Fixes # (issue)
add 1 to cur_len
## Who can review?
@LysandreJik
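The normalisation this refers to can be sketched in isolation (a simplified stand-in for the beam-scoring formula `sum_logprobs / length ** length_penalty`; the numbers below are made up for illustration):

```python
def beam_score(sum_logprobs: float, length: int, length_penalty: float = 1.0) -> float:
    # Simplified length normalisation used when scoring a finished beam:
    # the denominator should match the true sequence length.
    return sum_logprobs / (length ** length_penalty)

sum_logprobs = -6.0
cur_len = 5  # length *before* the newly generated token is counted

score_old = beam_score(sum_logprobs, cur_len, length_penalty=2.0)        # divides by 5**2
score_fixed = beam_score(sum_logprobs, cur_len + 1, length_penalty=2.0)  # the PR's "+1": divides by 6**2
print(score_old, score_fixed)
```

With a nonzero length penalty, the off-by-one denominator shifts the score of every finished hypothesis, which is why the fix matters for beam ranking.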
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21993/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21993",
"html_url": "https://github.com/huggingface/transformers/pull/21993",
"diff_url": "https://github.com/huggingface/transformers/pull/21993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21993.patch",
"merged_at": 1678276076000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21992
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21992/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21992/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21992/events
|
https://github.com/huggingface/transformers/pull/21992
| 1,613,308,110
|
PR_kwDOCUB6oc5Ld9Ti
| 21,992
|
Update `notification_service.py`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
While working on CI report with PyTorch `2.0.0`, the first run failed to send the report due to the following problem:
- The original code checked each line in `summary_short.txt` by `if re.search("FAILED", line):`
- We expect such occurrences have 1-1 correspondence in the file `failures_line.txt`
- However, some lines in `summary_short.txt` might contain ` FAILED` without being what we are looking for. For example, some tests in `tests/extended/test_trainer_ext.py` use `execute_subprocess_async`, and we get lines like
```
/transformers/examples/pytorch/translation/run_translation.py FAILED
```
- In such cases, we get error `stacktraces.pop(0)` at some point, as there is no more element to pop (`stacktraces`, obtained from `failures_line.txt`)
- **This PR avoids this situation by checking with `if line.startswith("FAILED "):` which should give the desired 1-1 correspondence.**
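The difference between the two checks can be seen on a pair of made-up lines (the sample content below is illustrative, not taken from a real CI log):

```python
import re

lines = [
    "FAILED tests/models/bert/test_modeling_bert.py::BertModelTest::test_forward - AssertionError",
    "/transformers/examples/pytorch/translation/run_translation.py FAILED",
]

# Old check: matches any line containing "FAILED", including subprocess noise.
old_matches = [line for line in lines if re.search("FAILED", line)]

# New check: only lines from pytest's short test summary, which start with "FAILED ".
new_matches = [line for line in lines if line.startswith("FAILED ")]

print(len(old_matches), len(new_matches))
```

Only the prefix check keeps the 1-1 correspondence with the entries in `failures_line.txt`.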
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21992/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21992",
"html_url": "https://github.com/huggingface/transformers/pull/21992",
"diff_url": "https://github.com/huggingface/transformers/pull/21992.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21992.patch",
"merged_at": 1678195240000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21991
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21991/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21991/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21991/events
|
https://github.com/huggingface/transformers/pull/21991
| 1,613,125,973
|
PR_kwDOCUB6oc5LdVCA
| 21,991
|
Skip `test_multi_gpu_data_parallel_forward` for some model tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
This test fails for some models in CI with torch 2.0, leaving CUDA in a bad state in which many other tests also fail.
The only way (I could find online) to avoid the failure is to use other GPUs, like `P100` or `V100`.
Let's skip it for now for a few model tests; it will likely work again in a future PyTorch release.
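Skipping a test is plain `unittest` machinery; a minimal sketch (the class, test, and reason strings here are illustrative, not the actual `transformers` test code):

```python
import unittest


@unittest.skip("DataParallel forward hangs on this GPU with torch 2.0")
class MultiGpuForwardTest(unittest.TestCase):
    def test_multi_gpu_data_parallel_forward(self):
        # Would normally exercise the DataParallel forward pass;
        # the decorator prevents this body from ever running.
        raise RuntimeError("should not be executed")


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(MultiGpuForwardTest).run(result)
print(len(result.skipped), len(result.errors))
```

Decorating the class skips every test method in it, and the recorded skip reason shows up in CI reports.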
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21991/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21991",
"html_url": "https://github.com/huggingface/transformers/pull/21991",
"diff_url": "https://github.com/huggingface/transformers/pull/21991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21991.patch",
"merged_at": 1678195417000
}
|